As big data transforms our businesses, governments, and society, it also presents new moral and ethical dilemmas. As is typical with new technology, we tend to implement first and consider the ethical issues later. Cathy O'Neil's book Weapons of Math Destruction is an introduction to the ethical issues raised by the widespread use of data to drive decisions in our lives.
As O'Neil writes in the book's introduction:

"Yet I saw trouble. The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions. Nevertheless, many of these models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives. Like gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, were beyond dispute or appeal."
What is a weapon of math destruction?
O'Neil starts the book by providing a definition for weapons of math destruction. She describes them as models that have three defining characteristics: opacity, scale, and damage.
Opacity
A destructive model is opaque: you don't know what information is being used or what logic is being applied to judge you; you only see the result, a "yes" or a "no". Opacity can cut two ways: not only are the people affected by the model left in the dark about how decisions were made, but poorly built models may also lack feedback mechanisms for their owners, so they can't be reviewed or optimized.
Scale
Models that apply to large groups of people and/or are used for multiple (sometimes unrelated) decisions in a person's life carry a higher risk of becoming weapons of math destruction. Scale also comes into play when models are used to increase the efficiency of decision making at the expense of fairness.
Damage
Lastly, for a model to qualify as a weapon it needs to have the potential to cause damage. This is a broad criterion, but damaging models generally have the potential to affect a person's physical or mental health, finances, safety, or employment. Often the damage combines with scale to create situations where the model is self-perpetuating: the decisions made once you've been classified make it more likely that you will continue to be classified that way in the future.
Where can you find weapons of math destruction?
Having set up her definition, O'Neil spends the balance of the book applying it to models from healthcare, advertising, finance, employment, insurance, and the justice system. Here she shows how many models, despite being created with the best of intentions, have unintended side effects that cause damage both to individual people and to society as a whole. These examples provide context for her argument, show the real impact that poorly built or poorly implemented models have on people, and give the reader an education in identifying the potentially troubling aspects of models.
What can we do?
The book ends with a brief chapter in which O'Neil offers three remedies to defuse weapons of math destruction: transparency, disclosure, and audit. These remedies counteract the defining characteristics of bad models and give people the tools they need to understand how they are being evaluated, to see the data being used, and to appeal when they disagree with a decision enabled by a model.
O'Neil's book provides a good grounding in the new ethical challenges and considerations of a world awash in big data. She does a good job describing the characteristics of poor models and their impact on people, but could have gone into greater depth on how to remedy or prevent these issues. Even with the comparatively light treatment of best practices, Weapons of Math Destruction provides an excellent overview of the challenges we face and the issues we need to be aware of.