

Jen Ciarochi
Jen Ciarochi writes (and creates silly illustrations) for Triplebyte’s Compiler blog. Her background is in neuroscience, but she’s known to nerd out about basically any topic that merges science and technology.
“People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.” -Pedro Domingos, The Master Algorithm
First of all, “Biased Bots” is a better description of the current technological landscape than “Racist Robots.” Fortunately, we can deal with biases in AI before robots can actually walk around spouting offensive nonsense. Think of “Racist Robots” as a preventable future dystopian scenario.

In fact, many engineers are optimistic that AI can help identify and even diminish the effects of human prejudices, but the road ahead is convoluted. Studies show that some AI systems disproportionately disadvantage groups that are already marginalized due to ethnicity, gender, or socioeconomic status.
Evidence of bias is, in part, what drove Amazon, Microsoft, and IBM to backpedal on the controversial facial recognition business [1]—catalyzed by mounting pressure following George Floyd’s death at the hands of police officers.
Biased decision-making certainly isn’t unique to AI systems, but in many ways, it is uniquely discoverable in these systems. Biases in AI systems have been detected in law enforcement, banking, insurance, hiring, and healthcare. Most of these issues stem from training data that are not representative (e.g., a data set containing only white people) or that have prejudices unintentionally embedded in them (e.g., historical hiring data that reflect a preference for male candidates).
Biased data beget biased models, which beget biased data, and so on (notably, under certain conditions, biased decision-making can also result in “algorithmic affirmative action” [2]). Machine learning models can get caught in feedback loops that exacerbate biases. Consider the Strategic Subject List (SSL), which Chicago used from 2012 to late 2019 to identify likely victims or perpetrators of violent crimes [3]. Predictive policing systems, like SSL, often rely on historical crime and arrest data to pinpoint neighborhoods that require more policing.
These predictions become self-fulfilling prophecies when police patrol these areas more and, in turn, discover more crimes and make more arrests than they do in less-patrolled areas. As a result, more records from the neighborhood are entered into crime databases, while similar crimes in other areas are overlooked. When the existing model is retrained (or new algorithms are trained) on the updated data, the bias-deepening cycle continues and can lead to over-policing. SSL was scrutinized by many groups for its reliance on self-generated data—among other issues—and ultimately did not reduce crime. In short, even the most accurate model can’t reduce crime if what it accurately predicts is social injustice.
Bias in AI systems: examples
• COMPAS, a system used by judges to inform decisions about pretrial inmate release, incorrectly flags black defendants as probable repeat offenders nearly twice as often as their white counterparts [4].
• Google adjusted its Google Photos image recognition system after it classified black people as gorillas [5].
• Facial recognition software in Nikon cameras erroneously warned Asian users that they were blinking [6].
• Google Translate debuted gender-specific translations [7] after researchers found that it defaults to masculine pronouns (he, him), even when translating texts that specifically refer to females [8].
• The German job portal Xing ranks female candidates below less-qualified male candidates [9].
• Researchers from MIT and Microsoft tested three commercial gender classification systems, and found that the error rate was the highest for darker-skinned women (maximum error rate: 34.7%) and the lowest for lighter-skinned men (maximum error rate: 0.8%) [10].
• Amazon discovered that an internal recruiting tool was discriminating against female job candidates, and traced the problem to the use of historical hiring data—which favors men [11]. After they reprogrammed the model to ignore gendered words, it started making decisions based on gender-correlated words. Ultimately, Amazon scrapped the system altogether, illustrating the difficulty of retroactively eliminating bias.
What makes a model fair?
One difficulty of designing a fair model is defining what exactly constitutes fairness. There is no consensus on the best definition or mathematical formulation; fairness has been defined in many ways [12], and some of these definitions contradict one another. For example, predictive parity (equal fractions of correct positive predictions among groups), equal false positive rates, and equal false negative rates are all definitions of fairness. However, it is mathematically impossible to satisfy all three when groups differ substantially in prevalence (the base rate of positive outcomes) [13].
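To see why, note that Chouldechova [13] links these three quantities to a group’s prevalence p through a single identity, where PPV is the positive predictive value (the quantity equalized by predictive parity):

$$FPR = \frac{p}{1-p} \cdot \frac{1-PPV}{PPV} \cdot (1-FNR)$$

If two groups have different prevalences but equal PPV and equal FNR, their FPRs are forced apart; whichever two criteria a model satisfies, the third must give.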
Prioritizing bias and fairness metrics
Since there is no consensus on the definition of fairness, a promising approach is to use tools that test for several bias and fairness metrics and, on a case-by-case basis, focus on the metrics that are most appropriate for a given situation.
The first factor to consider is the availability of outcome (i.e., label) data. A model can be tested for bias after it makes predictions but before the actual outcomes are known; unsupervised bias metrics, which are calculated without outcome data, can be used in these cases. Unsupervised metrics assess the distribution of predictions across groups; common examples include Predicted Positive Rate (PPR) and Predicted Prevalence (PPrev).
A model can also be tested for bias after the outcomes are known, at which point supervised bias metrics can be leveraged. Supervised metrics are error-based, and are calculated using outcomes as well as predictions. Examples of common supervised metrics include False Discovery Rate (FDR), False Omission Rate (FOR), False Positive Rate (FPR), and False Negative Rate (FNR).
The relationship between these fairness metrics and different definitions of fairness is outlined in the table below.

Aside from the availability of outcome data, the appropriateness of a fairness metric essentially depends on whether it's more important to minimize false positives or false negatives. In other words, interventions can be harmful or beneficial—and policymakers want to avoid disproportionately punishing or withholding benefits from particular groups.
If an intervention is costly or harmful, false positives are the primary concern; it is definitely inappropriate to punish someone because they were flagged incorrectly. In these cases, FDR and FPR are particularly important fairness metrics.
On the other hand, if an intervention is beneficial, false negatives are a bigger concern. It is inappropriate to exclude someone who is in need, but assisting someone who isn't in need will not harm them. In these cases, FOR and FNR are very relevant metrics.
These principles are conveniently captured by the “Fairness Tree” below, developed by the creators of the open-source, bias-detecting toolkit Aequitas [14]; this diagram links six common bias and fairness metrics to real-world applications.

Detecting bias: auditing AI
By comparing fairness metrics across groups, AI “auditors” like Aequitas can test machine learning models for bias. Other AI-auditing toolkits include FairTest [15], FairML [16], Google’s ML-fairness-gym [17], and IBM’s AI Fairness 360 [18]. The mathematical formulations of the six bias and fairness metrics used in these audits [19] are built from the following terms:
• PPg: the entities in group g that the model predicts are positive
• PNg: the entities in group g that the model predicts are negative
• FPg: false positives in group g (predicted positive, but actually negative)
• FNg: false negatives in group g (predicted negative, but actually positive)
• LPg: labeled positives in group g (entities whose actual outcome is positive)
• LNg: labeled negatives in group g (entities whose actual outcome is negative)
• |g|: the total number of entities in group g
• K: the total number of entities the model predicts are positive, across all groups
Below are the equations for these six bias and fairness metrics, along with the corresponding Python code (the Aequitas source code [20] is used as an example). In the Python code, each lambda function includes the following arguments:
- rank_col corresponds to the predictions the model makes.
- label_col contains the actual outcomes, where 0 is negative and 1 is positive.
- thres is a classification threshold; any prediction at or below the threshold results in a positive classification.
- k is a predefined number specifying how many people the model should classify as positive.
Additionally, divide = lambda x, y: x / y if y != 0 else pd.np.nan is a helper that simply prevents division by zero. (A usage sketch applying these lambdas follows the six metrics below.)
1. Predicted Prevalence (PPrev)
$$PPrev_{g} = \frac{PP_{g}}{|g|}$$
Python Code
# (x[rank_col] <= thres).sum() corresponds to predicted positives (PPg), and adds up all instances
# of rank_col (predictions) that fall within the threshold for positive classification.
# len(x) + 0.0 corresponds to |g|, and returns the number of entities in a group.
predicted_pos_ratio_g = lambda rank_col, label_col, thres, k: lambda x: \
divide((x[rank_col] <= thres).sum(), len(x) + 0.0)
2. Predicted Positive Rate (PPR)
$$PPR_{g} = \frac{PP_{g}}{K}$$
Python Code
# (x[rank_col] <= thres).sum() corresponds to predicted positives (PPg).
# k + 0.0 corresponds to K, and returns the number of entities the model
# predicts are positive (across all groups).
predicted_pos_ratio_k = lambda rank_col, label_col, thres, k: lambda x: \
divide((x[rank_col] <= thres).sum(), k + 0.0)
3. False Discovery Rate (FDR)
$$FDR_{g} = \frac{FP_{g}}{PP_{g}}$$
Python Code
# ((x[rank_col] <= thres) & (x[label_col] == 0)).sum() corresponds to false positives (FPg),
# and adds up all entities that fall within the threshold for positive classification (rank_col predictions),
# but are actually negatives (i.e., have a label_col value of 0).
# (x[rank_col] <= thres).sum() corresponds to predicted positives (PPg).
fdr = lambda rank_col, label_col, thres, k: lambda x: \
divide(((x[rank_col] <= thres) & (x[label_col] == 0)).sum(),
(x[rank_col] <= thres).sum().astype(float))
4. False Omission Rate (FOR)
$$FOR_{g} = \frac{FN_{g}}{PN_{g}}$$
Python Code
# ((x[rank_col] > thres) & (x[label_col] == 1)).sum() corresponds to false negatives (FNg),
# and adds up all entities that fall within the negative classification threshold, but are actually positives.
# (x[rank_col] > thres).sum() corresponds to predicted negatives (PNg), and adds up all
# entities that fall within the threshold for negative classification.
fomr = lambda rank_col, label_col, thres, k: lambda x: \
divide(((x[rank_col] > thres) & (x[label_col] == 1)).sum(),
(x[rank_col] > thres).sum().astype(float))
5. False Positive Rate (FPR)
$$FPR_{g} = \frac{FP_{g}}{LN_{g}}$$
Python Code
# ((x[rank_col] <= thres) & (x[label_col] == 0)).sum() corresponds to false positives (FPg).
# (x[label_col] == 0).sum() corresponds to labeled negatives (LNg),
# and adds up all entities with a label_col value of 0.
fpr = lambda rank_col, label_col, thres, k: lambda x: \
divide(((x[rank_col] <= thres) & (x[label_col] == 0)).sum(),
(x[label_col] == 0).sum().astype(float))
6. False Negative Rate (FNR)
$$FNR_{g} = \frac{FN_{g}}{LP_{g}}$$
Python Code
# ((x[rank_col] > thres) & (x[label_col] == 1)).sum() corresponds to false negatives (FNg).
# (x[label_col] == 1).sum() corresponds to labeled positives (LPg),
# and adds up all entities with a label_col value of 1.
fnr = lambda rank_col, label_col, thres, k: lambda x: \
divide(((x[rank_col] > thres) & (x[label_col] == 1)).sum(),
(x[label_col] == 1).sum().astype(float))
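Putting these pieces together, here is a minimal, hypothetical usage sketch: it defines the divide helper (using numpy's nan directly, since newer pandas versions no longer expose pd.np), builds a toy scored data set with a group column, and applies the FPR lambda from above to each group. The column names, threshold, and data are invented for illustration.
import numpy as np
import pandas as pd

divide = lambda x, y: x / y if y != 0 else np.nan

# Hypothetical scored data: a lower rank means a higher predicted risk.
df = pd.DataFrame({
    'group': ['a', 'a', 'a', 'b', 'b', 'b'],
    'rank':  [1, 2, 3, 1, 2, 3],        # rank_col (model predictions)
    'label': [1, 0, 0, 0, 1, 0],        # label_col (actual outcomes)
})
thres, k = 2, 4   # top two ranks per group are classified positive; 4 positives overall

fpr = lambda rank_col, label_col, thres, k: lambda x: \
    divide(((x[rank_col] <= thres) & (x[label_col] == 0)).sum(),
           (x[label_col] == 0).sum().astype(float))

# One FPR per group; these per-group values feed the disparity measures described next.
print(df.groupby('group').apply(fpr('rank', 'label', thres, k)))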
Bias and fairness disparity measures
Once bias metrics—like PPrev, PPR, FDR, FOR, FPR, and FNR—are calculated for each group, they can be compared to those of a reference group to calculate disparity measures. The reference group can be selected based on different criteria, such as majority status (i.e., the largest population) or historical favoritism.
For example, predicted prevalence disparity is defined as:
$$PPrevDisparity_{g_{i}} = \frac{PPrev_{g_{i}}}{PPrev_{g_{ref}}}$$
Similarly, false positive rate disparity is defined as:
$$FPRDisparity_{g_{i}} = \frac{FPR_{g_{i}}}{FPR_{g_{ref}}}$$
The disparity metrics can then be tested for fairness against the flexible parameter τ ∈ (0,1] to provide a range of disparity values that are considered fair.
$$
\tau \leq DisparityMeasure_{g_{i}} \leq \frac{1}{\tau}
$$
τ could, for example, be set to 0.8 to adhere to the 80% rule, a threshold adopted in 1970s US fair-employment guidelines to assess adverse impact on minority groups. In essence, this rule states that companies should hire applicants from minority groups at no less than 80% of the rate at which they hire applicants from non-minority groups. For instance, a business hiring 50% of its male applicants should also hire at least 40% of its female applicants.
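As a rough sketch of how the τ test might look in code (the group names, metric values, and reference group are hypothetical):
# Hypothetical per-group predicted prevalences, with 'white' as the reference group.
pprev = {'white': 0.30, 'black': 0.45, 'asian': 0.20}
reference = 'white'
tau = 0.8   # the 80% rule

for group, value in pprev.items():
    disparity = value / pprev[reference]
    passes = tau <= disparity <= 1 / tau
    print(f"{group}: disparity = {disparity:.2f}, fair = {passes}")

# white: disparity = 1.00, fair = True
# black: disparity = 1.50, fair = False
# asian: disparity = 0.67, fair = False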
Example: an Aequitas bias report using COMPAS data
COMPAS is a controversial risk assessment tool used across the United States to predict how likely defendants are to reoffend. ProPublica reported that COMPAS incorrectly labels black defendants as high-risk nearly twice as often as white defendants, while white defendants are much more likely to be incorrectly labeled as low-risk [21]. ProPublica's COMPAS data [22] include recidivism risk scores, two-year recidivism outcomes, and demographic variables for over 7,000 people. Using the same data, does Aequitas reveal similar biases?
In short, yes. COMPAS helps judges make punitive decisions, so FPR and FDR parity are the most relevant bias metrics (i.e., the most unfavorable outcome is unfairly punishing people). When Aequitas audits COMPAS on these two metrics for the race attribute alone (with Caucasian as the reference group), COMPAS fails the FPR disparity test: as ProPublica reported, the FPR is nearly twice as high for black defendants as for white defendants. While COMPAS doesn't appear biased against black people based on FDR parity, it still fails the FDR disparity test for race due to disproportionately lower rates for Asians and Native Americans. Note that the value for the reference group (in this case, Caucasians) is 1, because the disparity metrics are ratios relative to the reference group.
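For reference, an audit along these lines can be scripted with the Aequitas Python package. The sketch below follows the workflow in the Aequitas documentation, but treat the exact class, method, and column names as assumptions that may differ across versions; the input DataFrame is expected to contain a binary score, a label_value, and attribute columns such as race, sex, and age_cat.
import pandas as pd
from aequitas.group import Group
from aequitas.bias import Bias
from aequitas.fairness import Fairness

# Hypothetical file containing ProPublica's COMPAS data prepared for Aequitas.
df = pd.read_csv('compas_for_aequitas.csv')

g = Group()
xtab, _ = g.get_crosstabs(df)            # per-group counts and metrics (FPR, FDR, ...)

b = Bias()
bdf = b.get_disparity_predefined_groups(
    xtab, original_df=df,
    ref_groups_dict={'race': 'Caucasian', 'sex': 'Male', 'age_cat': '25 - 45'})

f = Fairness()
fdf = f.get_group_value_fairness(bdf)    # applies the τ test to each disparity measure
print(fdf[['attribute_name', 'attribute_value', 'fpr_disparity', 'fdr_disparity']])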

In fact, COMPAS fails based on every fairness metric for race, not just the two most relevant ones. COMPAS also fails all but one test for age bias and half the tests for sex bias.


The model is biased. Now what?
Bias in machine learning systems can be corrected before training, during training, or after the model makes predictions. These methods are briefly introduced here.
Correcting bias before training
Information that can lead to unfair decisions can be removed from the training data before training; this is sometimes called preprocessing. Importantly, preprocessing is not as simple as removing sensitive variables, because other variables can be highly correlated with them; one approach that addresses this issue uses a learning algorithm that finds the best representation of the data while simultaneously obscuring sensitive information (e.g., gender, ethnicity, income) and any information correlated with it [23].
Another example of a preprocessing algorithm is reweighing. Reweighing compensates for bias by assigning lower weights to favored individuals and higher weights to unfavored ones [24]. An imbalanced data set can also be mitigated by resampling, wherein instances of an underrepresented group are added (oversampling), or those of an overrepresented group are removed (undersampling).
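As a concrete illustration of reweighing, the sketch below assigns each (group, outcome) combination the weight expected proportion / observed proportion, in the spirit of [24]: combinations that are rarer than independence between group and outcome would predict (here, hired women and rejected men) are weighted up. The column names and data are hypothetical.
import pandas as pd

# Hypothetical training data: protected attribute 'sex' and outcome 'hired'.
df = pd.DataFrame({
    'sex':   ['m', 'm', 'm', 'm', 'f', 'f', 'f', 'f'],
    'hired': [1, 1, 1, 0, 1, 0, 0, 0],
})

p_sex = df['sex'].value_counts(normalize=True)
p_hired = df['hired'].value_counts(normalize=True)
p_joint = df.groupby(['sex', 'hired']).size() / len(df)

# Weight = P(sex) * P(hired) / P(sex, hired): above 1 for under-represented combinations.
df['weight'] = df.apply(
    lambda row: p_sex[row['sex']] * p_hired[row['hired']] / p_joint[(row['sex'], row['hired'])],
    axis=1)

print(df.drop_duplicates())   # hired women and rejected men receive weights of 2.0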
Advantages
• The classifier doesn’t need to be modified.
• The preprocessed data can be used for any machine learning task.
Disadvantages
• Other methods often achieve better accuracy and fairness.
Correcting bias during training
Bias can also be corrected during training. One approach uses one or more “fairness” constraints to guide the model and ensure that similar individuals are treated equitably across subpopulations [25]. For example, the optimization objective of the algorithm could include the condition that the false positive rate of the protected group is equal to that of the other individuals in the data set.
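For intuition, here is a minimal sketch of the FPR-equality condition mentioned above, enforced as a soft penalty rather than a hard constraint: a tiny logistic-regression model is trained with an extra term that pushes a soft false positive rate (the mean predicted score on true negatives) toward equality across two groups. The data, the penalty form, and the weight lam are invented for illustration.
import numpy as np

# Hypothetical data: features x, labels y, and a binary protected attribute a.
rng = np.random.default_rng(0)
n = 2000
a = rng.integers(0, 2, n)
x = rng.normal(size=(n, 3)) + 0.5 * a[:, None]
y = (x[:, 0] + 0.3 * a + rng.normal(scale=0.5, size=n) > 0.5).astype(float)

w, b = np.zeros(3), 0.0
lam, lr = 2.0, 0.1                              # fairness-penalty weight, learning rate
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    p = sigmoid(x @ w + b)
    neg0 = (y == 0) & (a == 0)                  # true negatives in each group
    neg1 = (y == 0) & (a == 1)
    gap = p[neg0].mean() - p[neg1].mean()       # soft FPR difference between groups

    # Gradient of: cross-entropy loss + lam * gap**2, taken with respect to the logits.
    grad = (p - y) / n
    pen = 2 * lam * gap * p * (1 - p)
    grad[neg0] += pen[neg0] / neg0.sum()
    grad[neg1] -= pen[neg1] / neg1.sum()
    w -= lr * (x.T @ grad)
    b -= lr * grad.sum()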
Another method is adversarial debiasing, which simultaneously trains two classifiers—a predictor and an adversary [26]. The predictor aims to predict a target variable by minimizing one loss function, while the adversary tries to predict a sensitive variable (given the predictor's raw output) by minimizing a different loss function. The goal is for the predictor to minimize the first loss function while maximizing the second, so that the adversary fails to predict the sensitive variable.
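Below is a simplified adversarial-debiasing sketch in PyTorch. It alternates the two updates described above, using a plain weighted difference of losses rather than the gradient-projection step of [26]; the toy data, network sizes, and weight alpha are arbitrary.
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 1000
z = torch.randint(0, 2, (n, 1)).float()              # sensitive variable
x = torch.randn(n, 5) + z                            # features correlated with z
y = ((x[:, :1] + 0.5 * z) > 0.8).float()             # target variable

predictor = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
alpha = 1.0                                          # weight of the adversarial term

for epoch in range(200):
    # 1) The adversary tries to recover z from the predictor's raw output.
    adv_loss = bce(adversary(predictor(x).detach()), z)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) The predictor minimizes its own loss while maximizing the adversary's.
    out = predictor(x)
    loss = bce(out, y) - alpha * bce(adversary(out), z)
    opt_pred.zero_grad(); loss.backward(); opt_pred.step()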
Advantages
• Fairness is improved without compromising accuracy.
• The programmer can focus on improving specific fairness metrics.
Disadvantages
• The classifier code must be modified, which is not always feasible.
Correcting bias after training
Finally, bias can be corrected after training by adjusting the classifier's outputs to improve fairness. One way to do this is to plot the true positive rate against the false positive rate for each group (a receiver operating characteristic, or ROC, curve) and choose decision thresholds at which these rates match between protected and unprotected groups.
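A rough sketch of this idea using scikit-learn's ROC utilities and made-up scores: each group receives its own decision threshold, chosen so that both groups land at (approximately) the same false positive rate. The target rate, data, and column roles are hypothetical.
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical scores, true labels, and group membership.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 2000)
label = rng.integers(0, 2, 2000)
score = rng.random(2000) + 0.3 * label + 0.1 * group   # group-dependent score shift

target_fpr = 0.2                      # shared operating point for both groups
thresholds = {}
for g in (0, 1):
    m = group == g
    fpr, tpr, thr = roc_curve(label[m], score[m])
    thresholds[g] = thr[np.argmin(np.abs(fpr - target_fpr))]

print(thresholds)   # per-group cutoffs that roughly equalize the false positive rate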
Advantages
• The classifier doesn’t need to be modified.
• Fairness is improved.
Disadvantages
• There is less flexibility for balancing accuracy and fairness.
• Protected attributes must be accessed during test time.
If all else fails, there’s always the big red button?
The ability to interrogate machine learning systems to uncover bias is incredibly valuable, and we should avail ourselves of the opportunity.
Hopefully, we will eradicate biases from AI long before we arrive at “Racist Robots.” Barring that, researchers at Google DeepMind and Oxford’s “Future of Humanity Institute” have developed a framework involving a big red button that can interrupt wayward AI (as well as prevent said AI from learning how to thwart these interruptions) [27]. As an aside, the future of humanity seems like a heavy, ambitious goal for one institute, so I’m glad they’re at least collaborating.
Have you ever experienced or worked on bias in AI systems? (Maybe you’re a “guerrilla auditor,” like the guy who used a crawler to simulate a recruiter and test resume search engines for gender bias [28]). What do you think are the toughest problems and the most promising solutions? Leave a comment and let us know!
References
- Hale, Kori. “Amazon, Microsoft & IBM Slightly Social Distancing From The $8 Billion Facial Recognition Market,” June 15, 2020. https://www.forbes.com/sites/korihale/2020/06/15/amazon-microsoft--ibm-slightly-social-distancing-from-the-8-billion-facial-recognition-market/?ss=ai.
- Sweeney, Annie, and Jeremy Gorner. “For Years Chicago Police Rated the Risk of Tens of Thousands Being Caught up in Violence. That Controversial Effort Has Quietly Been Ended.” chicagotribune.com. Chicago Tribune, January 25, 2020. https://www.chicagotribune.com/news/criminal-justice/ct-chicago-police-strategic-subject-list-ended-20200125-spn4kjmrxrh4tmktdjckhtox4i-story.html.
- Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. “Machine Bias.” ProPublica, May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
- Vincent, James. “Google 'Fixed' Its Racist Algorithm by Removing Gorillas from Its Image-Labeling Tech.” The Verge, January 12, 2018. https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai.
- Crawford, Kate. “Artificial Intelligence's White Guy Problem.” The New York Times. The New York Times, June 25, 2016. https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html.
- Wiggers, Kyle. “Google Debuts AI in Google Translate That Addresses Gender Bias.” VentureBeat. VentureBeat, April 22, 2020. https://venturebeat.com/2020/04/22/google-debuts-ai-in-google-translate-that-addresses-gender-bias/.
- “Machine Translation: Analyzing Gender.” Machine Translation | Gendered Innovations. Stanford University. Accessed July 27, 2020. http://genderedinnovations.stanford.edu/case-studies/nlp.html.
- Lahoti, Preethi, Krishna P. Gummadi, and Gerhard Weikum. “IFair: Learning Individually Fair Data Representations for Algorithmic Decision Making.” 2019 IEEE 35th International Conference on Data Engineering (ICDE), 2019. https://doi.org/10.1109/icde.2019.00121.
- Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 2018. http://proceedings.mlr.press/v81/buolamwini18a.html?mod=article_inline.
- Dastin, Jeffrey. “Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women.” Reuters. Thomson Reuters, October 10, 2018. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.
- Narayanan, Arvind. “TL;DS - 21 Fairness Definition and Their Politics by Arvind Narayanan.” TL;DS - 21 fairness definition and their politics , July 19, 2019. https://shubhamjain0594.github.io/post/tlds-arvind-fairness-definitions/.
- Chouldechova, Alexandra. “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments.” Big Data 5, no. 2 (2017): 153–63. https://doi.org/10.1089/big.2016.0047.
- “Aequitas.” Center for Data Science and Public Policy, February 12, 2020. http://www.datasciencepublicpolicy.org/projects/aequitas/.
- “Columbia/Fairtest.” GitHub, May 29, 2017. https://github.com/columbia/fairtest.
- “Adebayoj/Fairml.” GitHub, March 23, 2017. https://github.com/adebayoj/fairml.
- “Google/Ml-Fairness-Gym.” GitHub. Google, June 17, 2020. https://github.com/google/ml-fairness-gym.
- “AI Fairness 360 Open Source Toolkit.” AI Fairness 360. IBM, n.d. http://aif360.mybluemix.net/.
- Saleiro, Pedro, Benedict Kuester, Loren Hinkson, Jesse London, Abby Stevens, Ari Anisfeld, Kit T. Rodolfa, and Rayid Ghani. “Aequitas: A Bias and Fairness Audit Toolkit.” arXiv.org, April 29, 2019. https://arxiv.org/abs/1811.05577.
- “Source Code for Src.aequitas.group.” src.aequitas.group - aequitas documentation, 2018. https://dssg.github.io/aequitas/_modules/src/aequitas/group.html.
- “Propublica/Compas-Analysis.” GitHub. Propublica, 2017. https://github.com/propublica/compas-analysis/.
- Zemel, Richard, Yu Wu, Kevin Swersky, Toniann Pitassi, and Cynthia Dwork. “Learning Fair Representations.” ICML’13: Proceedings of the 30th International Conference on Machine Learning, 2013. https://arxiv.org/pdf/1904.13341.pdf
- Kamiran, Faisal, and Toon Calders. “Data Preprocessing Techniques for Classification without Discrimination.” Knowledge and Information Systems 33, no. 1 (2011): 1–33. https://doi.org/10.1007/s10115-011-0463-8.
- Dwork, Cynthia, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. “Fairness through Awareness.” Proceedings of the 3rd Innovations in Theoretical Computer Science Conference on - ITCS '12, 2012. https://doi.org/10.1145/2090236.2090255.
- Zhang, Brian Hu, Blake Lemoine, and Margaret Mitchell. “Mitigating Unwanted Biases with Adversarial Learning.” Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 2018. https://doi.org/10.1145/3278721.3278779.
- Orseau, Laurent, and Stuart Armstrong. “Safely Interruptible Agents.” DeepMind. Google DeepMind, 2016. https://deepmind.com/research/publications/safely-interruptible-agents.
- Chen, Le, Ruijun Ma, Anikó Hannák, and Christo Wilson. “Investigating the Impact of Gender on Rank in Resume Search Engines.” Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI '18, 2018. https://doi.org/10.1145/3173574.3174225.
About Triplebyte
Triplebyte helps engineers find great jobs by assessing their abilities, not by relying on the prestige of their resume credentials. Take our 30 minute multiple-choice coding quiz to connect with your next big opportunity and join our community of 200,000+ engineers.