
Data Violence: How Bias in Machine Learning Systems Affects Society
Introduction
In most cases, Artificial Intelligence (AI) and Machine Learning (ML) are uncontroversial. However, when an ML system includes sensitive social features, it opens up a host of ethical, moral, and social issues. If an ML system is asked to predict which items to place in your shopping cart based on your prior shopping history, or how to win at chess, few people would argue with the means and methods used to attain those goals. But what do we do when we are asked to base predictions on attributes that are protected under anti-discrimination laws?
Said differently, how do we make sure that we do not embed racist, sexist, or other potential biases into our algorithms, be it explicitly or implicitly?
It may not surprise you that there have been several important lawsuits in the United States on this topic, possibly the most notable one involving Northpointe's controversial COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) software, which predicts the risk that a defendant will commit another crime. The proprietary algorithm considers some of the answers from a 137-item questionnaire to predict this risk.

In February 2013, Eric Loomis was found driving a car that had been used in a shooting. He was arrested and pleaded guilty to eluding an officer. In determining his sentence, a judge looked not just to his criminal record, but also to a score assigned by a tool called COMPAS.
COMPAS is one of several risk-assessment algorithms now used around the United States to predict hot spots of violent crime, determine the types of supervision that inmates might need, or — as in Loomis’s case — provide information that might be useful in sentencing. COMPAS classified him as high-risk of re-offending, and Loomis was sentenced to six years.
He appealed the ruling on the grounds that, in considering the output of an algorithm whose inner workings were secret and could not be examined, the judge had violated due process. The appeal went up to the Wisconsin Supreme Court, which ruled against Loomis, noting that the sentence would have been the same had COMPAS never been consulted. The ruling, however, urged caution and skepticism in the algorithm's use.
The case, understandably, caused quite a stir in the machine learning community. I doubt anyone would want to be judged by an algorithm; after all, you cannot blame an algorithm for being unethical, can you?
Why This is an Issue
After several more controversial results were spat out of the algorithm and scrutinized, it once again drew the public’s eye. Then U.S. Attorney General Eric Holder warned that the risk scores might be injecting bias into the courts. He called for the U.S. Sentencing Commission to study their use. “Although these measures were crafted with the best of intentions, I am concerned that they inadvertently undermine our efforts to ensure individualized and equal justice,” he said, adding, “they may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.”
The sentencing commission did not, however, launch a study of risk scores. So ProPublica did, as part of a larger examination of the powerful, largely hidden effect of algorithms in American life.
ProPublica's examination came up with some interesting conclusions. Not only was the algorithm strikingly inaccurate at forecasting violent crime (only about 20 percent of the people it predicted would commit violent crimes actually went on to do so), but it also showed significant racial disparities, just as Holder had feared. In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate, but in very different ways.
- The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.
- White defendants were mislabeled as low risk more often than black defendants.
Black defendants were still 77 percent more likely to be pegged as at higher risk of committing a future violent crime and 45 percent more likely to be predicted to commit a future crime of any kind.
This may sound bad, but there is more to this story than meets the eye. Depending on how we analyze it, we can find that the algorithm is both racist and not racist; it depends on how we define 'equality' within our model. For this reason, it is critical to have a common understanding of 'equality' so we can judge whether a model that produces purportedly racist results was nonetheless designed in an acceptable way.

Types of Discrimination

We first need to define the types of discrimination that are possible in algorithms, and what kind we are dealing with in our previous examples. There are two forms of discrimination that we will refer to as disparate impact and disparate treatment.
Disparate Treatment — Involves classifying someone in an impermissible way. It involves the intent to discriminate, evidenced by explicit reference to group membership.
Disparate Impact — Looks at the consequences of classification/decision making on certain groups. No intent is required and it is facially neutral.
Disparate impact is often referred to as unintentional discrimination, whereas disparate treatment is intentional.
Practices with a disproportionate impact on a particular group are deemed by the Supreme Court to not cause disparate impact if they are “grounded in sound business considerations.”
It is possible to cause disparate treatment or disparate impact when considering any of the following protected attributes: Age, Disability, National Origin, Race/color, Religion, Sex.
All of these attributes can be used as features in our machine learning algorithms, and thus our algorithms have the potential to discriminate on the basis of these attributes. Some common examples are facial recognition, recidivism prediction (as previously discussed), and hiring. What can we do to help combat this?
Combating disparate treatment is easy: explicit discriminatory bias makes classification less accurate, so there is no good reason to engage in it. But what about when discrimination is embedded in the historical data, or when the attributes reflect past social injustices that persist to this day?
Discriminatory Bias in Training Data
Discrimination impacts social goods when classification and decision making is based on inaccurate information (for example, thinking that everyone over 7ft is a bad babysitter). These ideas are often perpetuated by human biases, and become embedded in data that is used to train algorithms.
In this case, the biases of humans are not mitigated by the machine learning algorithm; in fact, they are reproduced in the classifications it makes. Why does this happen? Recidivism scores such as those produced by the Northpointe software are based on prior arrests, age of first police contact, and parents' incarceration record. This information is shaped by biases in the world (such as cultural values and nationalism) and by injustices more generally (such as racial prejudice).
This bias is also present in natural language processing, which focuses on textual data. A good example is the research paper titled "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings", which showed automatically generated analogies from the software's word vectors, such as man → computer programmer and woman → homemaker. These analogies reflect sexism in the original texts.
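To make the mechanics concrete, here is a minimal sketch of how such analogies fall out of simple vector arithmetic on word embeddings. The three-dimensional vectors below are invented stand-ins; real systems use pretrained embeddings (for example, word2vec) with hundreds of dimensions, but the geometry of the bias works the same way.

```python
# Minimal sketch: analogy by vector arithmetic ("a is to b as c is to ?").
# The embeddings below are tiny, made-up vectors purely for illustration.
import numpy as np

embeddings = {
    "man":        np.array([0.9, 0.1, 0.0]),
    "woman":      np.array([0.1, 0.9, 0.0]),
    "programmer": np.array([0.8, 0.2, 0.7]),
    "homemaker":  np.array([0.2, 0.8, 0.7]),
    "engineer":   np.array([0.7, 0.3, 0.8]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c, vocab):
    """Answer 'a is to b as c is to ?' by finding the word closest to b - a + c."""
    target = vocab[b] - vocab[a] + vocab[c]
    candidates = {w: vec for w, vec in vocab.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

# With vectors learned from biased text, the completion comes out gendered:
print(analogy("man", "programmer", "woman", embeddings))  # -> "homemaker" here
```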
More generally, these sources of bias arise from:
- Over- and under-sampling
- Skewed sample
- Feature choice/limited features
- Proxies/redundant encodings (see the sketch after this list)
- Biases and injustices in the world
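As a rough illustration of the proxy problem, the sketch below (with hypothetical column names and synthetic data) checks how strongly each remaining feature tracks a protected attribute after that attribute itself has been dropped; a strong correlation signals a redundant encoding that can reintroduce disparate impact.

```python
# Sketch: flagging features that act as proxies for a protected attribute.
# Column names and data are hypothetical; a real audit would use your own dataset.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1_000
protected = rng.integers(0, 2, n)                                             # protected group flag
neighborhood_income = 30_000 + 25_000 * protected + rng.normal(0, 5_000, n)   # a proxy feature
years_experience = rng.integers(0, 20, n)                                     # unrelated feature

df = pd.DataFrame({
    "protected": protected,
    "neighborhood_income": neighborhood_income,
    "years_experience": years_experience,
})

# High correlation with the protected column means that simply dropping the
# protected attribute will not stop a model from reconstructing it.
print(df.corr()["protected"].drop("protected"))
```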
So how do we remove these biases? Machine learning algorithms can perpetuate discrimination because they are trained on biased data. The solution is to identify or generate an unbiased dataset from which to draw accurate generalizations.
Removing Bias from Machine Learning Models
Characteristics such as race, gender, and socio-economic class determine other features about us that are relevant to the outcome of some performance tasks. These are protected attributes, yet they remain relevant to certain performance tasks, including tasks that are putative social goods. For example:
- Average wealth for white families is seven times higher than average wealth for black families.
- Wealth is relevant for whether you can pay back a loan.
- Differences in wealth are determined by historical and present injustice.
Machine learning is, by nature, historical. To effectively combat discrimination, we need to change these patterns. Machine learning, however, reinforces these patterns. Machine learning may therefore be part of the problem.
“Even if history is an arc that bends towards justice, machine learning doesn’t bend.” —Ernesto Lee
So where do we go from here? Are we doomed to have racist and sexist algorithms?
Even when we optimize for accuracy, machine learning algorithms may perpetuate discrimination, even if we work from an unbiased data set and have a performance task that has social goods in mind. What else could we do?
- Sequential learning
- More theory
- Causal modeling
- Optimizing for fairness
Of all of these, optimizing for fairness seems like the easiest and the best course of action. In the next section, we will outline how to optimize a model for fairness.
How Do We Optimize Fairness?
Building machine learning algorithms that optimize for non-discrimination can be approached in four ways:
- Formalizing a non-discrimination criterion
- Demographic parity
- Equalized odds
- Well-calibrated systems
We will discuss each of these in turn.
Formalizing a non-discrimination criterion is essentially what the other three approaches involve: each is a specific criterion that aims to formalize non-discrimination. However, this list is not exhaustive, and there may be better approaches that have not yet been proposed.
Demographic parity proposes that the decision (the target variable) should be independent of protected attributes — race, gender etc. are irrelevant to the decision.
For a binary decision Y and protected attribute A:
P(Y=1 | A=0) = P(Y=1 | A=1)
The probability of some decision being made (Y=1) should be the same regardless of the protected attribute (whether A=1 or A=0). However, whenever base rates genuinely differ between groups, demographic parity rules out using the perfect predictor R = Y, where R is the predictor and Y the target variable.
To understand the objection, consider the following case. Say that we want to predict whether an individual will purchase organic shampoo. Whether members of certain groups purchase organic shampoo is not independent of their membership in those groups, yet demographic parity would rule out using the perfect predictor. So perhaps this is not the best criterion; maybe the others will give us a better result.
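As a minimal sketch, assuming we have a model's binary decisions and the corresponding protected-attribute values in two hypothetical arrays, demographic parity amounts to comparing the positive-decision rate across groups:

```python
# Sketch: checking demographic parity, P(Y=1 | A=0) == P(Y=1 | A=1).
# y_pred and group are invented arrays; in practice they would come from your
# model's outputs and your data's protected-attribute column.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # model decisions (Y)
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # protected attribute (A)

rate_a0 = y_pred[group == 0].mean()  # P(Y=1 | A=0)
rate_a1 = y_pred[group == 1].mean()  # P(Y=1 | A=1)

print(f"P(Y=1|A=0) = {rate_a0:.2f}, P(Y=1|A=1) = {rate_a1:.2f}")
print("parity holds" if np.isclose(rate_a0, rate_a1) else "parity violated")
```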
Equalized odds proposes that the predictor and the protected attribute should be independent, conditional on the outcome. For the predictor R, outcome Y, and protected attribute A, where all three are binary variables:
P(R=1 | A=0, Y=1) = P(R=1 | A=1, Y=1)
The protected attribute (whether A=1 or A=0) should not change your estimate of how likely it is that the predictor holds (R=1); only the true outcome (Y=1) should. Strictly, equalized odds requires the equality to hold for Y=0 as well, i.e., equal true positive and false positive rates across groups. An advantage of this criterion is that it is compatible with the ideal predictor, R=Y.
Consider the following case involving a student getting accepted into Yale, given that they were the valedictorian of their high school. Equalized odds posits that, among students who got into Yale, knowing whether a student is gay should not change the probability that the student was valedictorian.
Predictor R = Whether you were high school valedictorian (1) or not (0)
Outcome Y = Getting into Yale (1) or not (0)
Attribute A = Being gay (1), being straight (0)
P(R=1 | A=0, Y=1) = P(R=1 | A=1, Y=1)
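Here is a toy check of this condition, using invented data for R (valedictorian), Y (admitted to Yale), and A (group membership); with real data you would compare the same two conditional probabilities.

```python
# Sketch: checking equalized odds, P(R=1 | A=0, Y=1) == P(R=1 | A=1, Y=1).
# All arrays are invented purely for illustration.
import numpy as np

R = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 1])  # predictor: valedictorian or not
Y = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])  # outcome: admitted or not
A = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # protected attribute

p_a0 = R[(A == 0) & (Y == 1)].mean()  # P(R=1 | A=0, Y=1)
p_a1 = R[(A == 1) & (Y == 1)].mean()  # P(R=1 | A=1, Y=1)

print(f"P(R=1|A=0,Y=1) = {p_a0:.2f}, P(R=1|A=1,Y=1) = {p_a1:.2f}")
# With these made-up numbers the probabilities differ, so equalized odds fails;
# a full check would also compare the same probabilities conditional on Y=0.
```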
Well-calibrated systems propose that the outcome and protected attribute are independent, conditional on the predictor. For the predictor R, outcome Y, and protected attribute A, where all three are binary variables:
P(Y=1 | A=0, R=1) = P(Y=1 | A=1, R=1)
The probability of some outcome occurring (Y=1) should be unaffected by the protected attribute (whether A=0 or A=1), and should instead be conditional on the relevant predictor (R=1). This formulation has the advantage that it is group-unaware: it holds everyone to the same standard.
Contrasting this with our previous example, knowing that the student is gay does not change the probability of whether the student got into Yale. The distinction between equalized odds and well-calibrated systems is subtle, but important.
In fact, this difference is the basis of the disagreement about the COMPAS software we discussed in the beginning.

Is the Algorithm Racist?

Equalized odds and well-calibrated systems are mutually incompatible standards. Sometimes, given certain empirical circumstances, we cannot have a system be both well-calibrated and equalize the odds. Let’s look at this fact in the context of the debate between ProPublica and Northpointe about whether COMPAS is biased against black defendants.
Y = whether the defendant will reoffend
A = race of the defendant
R = recidivism predictor used by COMPAS
Northpointe’s defense: COMPAS is well-calibrated, i.e.,
P(Y=1 | A=0, R=1) = P(Y=1 | A=1, R=1).
Among defendants given the same COMPAS risk score, re-offence rates are roughly similar regardless of race; the score means the same thing for black and white defendants.
The Problem With the Fox Guarding the Hen House
ProPublica's rebuttal: COMPAS has a higher false positive rate for black defendants and a higher false negative rate for white defendants, i.e., it does not satisfy equalized odds:
P(R=1 | A=0, Y=1) ≠ P(R=1 | A=1, Y=1)
The race of the defendant makes a difference to whether the individual is placed in the low or the medium/high-risk category. That is, whether A=0 or A=1 changes the probability that COMPAS flags the defendant as high risk (R=1), even after conditioning on whether the defendant actually will or won't re-offend (Y).
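A short sketch with hypothetical counts (not ProPublica's actual numbers) shows how both claims can be true at once: a score can be well-calibrated for each group while the false positive rates still diverge whenever the underlying re-offence rates differ.

```python
# Hypothetical confusion-matrix counts per group, purely for illustration.
# tp/fp/fn/tn are counted against the true outcome Y and the prediction R.
groups = {
    "A=1 (higher base rate)": {"tp": 400, "fp": 200, "fn": 100, "tn": 300},
    "A=0 (lower base rate)":  {"tp": 200, "fp": 100, "fn": 50,  "tn": 650},
}

for name, c in groups.items():
    ppv = c["tp"] / (c["tp"] + c["fp"])  # calibration: P(Y=1 | R=1, A)
    fpr = c["fp"] / (c["fp"] + c["tn"])  # equalized odds: P(R=1 | Y=0, A)
    base_rate = (c["tp"] + c["fn"]) / sum(c.values())
    print(f"{name}: PPV = {ppv:.2f}, FPR = {fpr:.2f}, base rate = {base_rate:.2f}")

# Output: both groups have PPV = 0.67 (well-calibrated), yet FPR is 0.40 vs 0.13,
# so equalized odds fails; this is exactly the structure of the COMPAS dispute.
```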

So What Now?
When certain empirical facts hold, our ability to have a well-calibrated and an odds-equalizing system breaks down. It seems that what’s generating the problem is something we discussed earlier: background facts created by injustice. For example, higher rates of being caught re-offending due to higher police scrutiny.
It's hard to figure out when certain fairness criteria should apply. If meeting one criterion didn't come at a cost to the others, we would worry less about applying one when we're uncertain. But since this isn't the case, we need to understand the impact of failing to meet some criteria.
So, which of our discussed criteria are the best to choose? All of these approaches have promising features, but all have their drawbacks.
So, what now?
We cannot fence fairness off into one little corner of machine learning while ignoring the injustices and discrimination that happen outside these systems. This doesn't mean we can't do anything! We must set standards for fairness in certain domains while at the same time striving to change the underlying base rates.
Despite several controversies and its unpopularity in some quarters, the COMPAS software continues to be used to this day. No one who develops an algorithm wants to be accused of, or imprisoned for, unknowingly building a racist algorithm, but some criterion must be selected on which to base predictions in situations like the one COMPAS tries to tackle.
It may be an algorithm, and it may not be perfect, but it is a start, and one has to start somewhere.
Can Machine Learning Help Fight Discrimination?
Machine learning is an extremely powerful tool. This is increasingly clear as humanity begins to transition from humanist to dataist perspectives, where we start to trust algorithms and data more than people or our own thoughts (some people have driven into lakes because their GPS told them to!). This makes it extremely important that we try to make algorithms as unbiased as possible, so that they do not unknowingly perpetuate social injustices embedded in historical data. However, there is also huge potential to use algorithms to make society more just and equal. A good example of this is the hiring process.
Say you are applying for your dream job and are in the final stage of the interview process. The hiring manager has the power to determine whether you are hired or not. Would you like an unbiased algorithm to decide whether you are the best person for the job?
Would you still prefer this if you knew that the hiring manager was racist? Or sexist?
Perhaps the hiring manager is a very neutral person and bases the decision purely on merit. However, everyone has their own proclivities and underlying cognitive biases that may make them more likely to select the candidate they like the most, as opposed to the person best suited for the job.
If unbiased algorithms can be developed, the hiring process could become faster and less expensive, and their data could lead recruiters to more highly skilled people who are better matches for their companies. Another potential result: a more diverse workplace. The software relies on data to surface candidates from a wide variety of places and match their skills to the job requirements, free of human biases.
This may not be a perfect solution; in fact, there is rarely a perfect answer when it comes to justice. However, the arc of history appears to bend towards justice, so perhaps this can move it another step forward.
Another good example of this is automatic loan underwriting. Compared with traditional manual underwriting, automated underwriting more accurately predicts whether someone will default on a loan, and its greater accuracy results in higher borrower approval rates, especially for underserved applicants. The upshot of this is that sometimes machine learning algorithms do a better job than we would at making the most accurate classifications, and sometimes this combats discrimination in domains like hiring and credit approval.
Final Thoughts
To end such a long and serious article, I leave you with a quote from Google about discrimination in machine learning to mull over.
“Optimizing for equal opportunity is just one of many tools that can be used to improve machine learning systems — and mathematics alone is unlikely to lead to the best solutions. Attacking discrimination in machine learning will ultimately require a careful, multidisciplinary approach.” — Google
References
[1] O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown, 2016.
[2] Garvie, Claire; Frankle, Jonathan. Facial-Recognition Software Might Have a Racial Bias Problem. The Atlantic, 2016.
[3] Bolukbasi, Tolga; Chang, Kai-Wei; Zou, James; Saligrama, Venkatesh; Kalai, Adam. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. arXiv, 2016.