In Notes from the AI frontier: Tackling bias in AI (and in humans) (PDF–120KB), we provide an overview of where algorithms can help reduce disparities caused by human biases, and of where more human vigilance is needed to critically analyze the unfair biases that can become baked in and scaled by AI systems. This article, a shorter version of that piece, also highlights some of the research underway to address the challenges of bias in AI and suggests six pragmatic ways forward. Will AI’s decisions be less biased than human ones? Or will AI make these problems worse?

Two opportunities present themselves in the debate. The first is the opportunity to use AI to identify and reduce the effect of human biases. The second is the opportunity to improve AI systems themselves, from how they leverage data to how they are developed, deployed, and used, to prevent them from perpetuating human and societal biases or creating bias and related challenges of their own. Realizing these opportunities will require collaboration across disciplines to further develop and implement technical improvements, operational practices, and ethical standards.

AI can help reduce bias, but it can also bake in and scale bias

Biases in how humans make decisions are well documented. Some researchers have highlighted how judges’ decisions can be unconsciously influenced by their own personal characteristics, while employers have been shown to grant interviews at different rates to candidates with identical resumes but with names considered to reflect different racial groups. Humans are also prone to misapplying information. For example, employers may review prospective employees’ credit histories in ways that can hurt minority groups, even though a definitive link between credit history and on-the-job behavior has not been established. Human decisions are also difficult to probe or review: people may lie about the factors they considered, or may not understand the factors that influenced their thinking, leaving room for unconscious bias.

In many cases, AI can reduce humans’ subjective interpretation of data, because machine learning algorithms learn to consider only the variables that improve their predictive accuracy, based on the training data used. In addition, some evidence shows that algorithms can improve decision making, causing it to become fairer in the process. For example, Jon Kleinberg and others have shown that algorithms could help reduce racial disparities in the criminal justice system. Another study found that automated financial underwriting systems particularly benefit historically underserved applicants. Unlike human decisions, decisions made by AI could in principle (and increasingly in practice) be opened up, examined, and interrogated. To quote Andrew McAfee of MIT, “If you want the bias out, get the algorithms in.”

At the same time, extensive evidence suggests that AI models can embed human and societal biases and deploy them at scale.
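One concrete sense in which algorithmic decisions can be "opened up, examined, and interrogated" is that their outputs can be logged and audited for group disparities. A minimal sketch of such an audit follows; the group labels, the toy decision data, and the disparate-impact ratio used as the metric are illustrative assumptions, not taken from the article or any particular system.

```python
# Hypothetical audit sketch: measuring approval-rate disparities in logged
# decisions from an automated system. Data and metric are illustrative only.
from collections import defaultdict


def selection_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}


def disparate_impact_ratio(rates):
    """Ratio of the lowest group approval rate to the highest (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())


# Toy log of decisions: (group label, whether the applicant was approved).
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

rates = selection_rates(decisions)          # {'group_a': 0.75, 'group_b': 0.5}
ratio = disparate_impact_ratio(rates)       # 0.5 / 0.75 ≈ 0.667
```

Because every decision is recorded, this kind of check can be run on demand, which is exactly the auditability that human decision making lacks.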