The link between artificial intelligence (AI) and bias is alarming.
As AI evolves to become more human-like, it's becoming clear that human bias is impacting technology in negative, potentially dangerous ways.
Here, we explore how AI and bias are linked and what's being done to reduce the impact of bias in AI applications.
Using AI in decision-making processes has become commonplace, mostly because predictive analytics algorithms can perform the work of humans much faster and often more accurately. Decisions are being made by AI on small matters, like restaurant preferences, and on critical issues, like determining which patient should receive an organ donation.
While the stakes differ, human bias in AI decisions affects outcomes at every level: a bad product recommendation cuts into a retailer's profit, while a biased medical decision can directly affect a patient's life.
Vincent C. Müller takes a look at AI and bias in his research paper, "Ethics of Artificial Intelligence and Robotics," included in the Summer 2021 edition of The Stanford Encyclopedia of Philosophy. Fairness in policing is a primary concern, Müller says, noting that human bias exists in the data sets used by police to decide, for example, where to focus patrols or which prisoners are likely to re-offend.
This kind of predictive policing, Müller says, relies heavily on data influenced by cognitive biases, especially confirmation bias, even when the bias is implicit and unknown to the human programmers.
Christina Pazzanese refers to the work of political philosopher Michael Sandel, a professor of government, in her article, "Great promise but potential for peril," in The Harvard Gazette.
"Part of the appeal of algorithmic decision-making is that it seems to offer an objective way of overcoming human subjectivity, bias, and prejudice," Sandel says. "But we are discovering that many of the algorithms that decide who should get parole, for example, or who should be presented with employment opportunities or housing replicate and embed the biases that already exist in our society."
To figure out how to remove or at least reduce bias in AI decision-making platforms, we have to consider why it exists in the first place.
Take Microsoft's Tay chatbot in 2016. The chatbot was set up by Microsoft to hold conversations on Twitter, interacting with users through tweets and direct messages; in other words, the general public had a large part in shaping the chatbot's personality. Within a few hours of its release, the chatbot was replying to users with offensive and racist messages: it was learning from anonymous public input, which a group of users immediately co-opted.
The chatbot was heavily influenced in a conscious way, but the influence is often not so clear-cut. In their joint article, "What Do We Do About the Biases in AI?" in the Harvard Business Review, James Manyika, Jake Silberg, and Brittany Presten say that implicit human biases (those people don't realize they hold) can significantly impact AI.
Bias can creep into algorithms in several ways, the article says. It can include biased human decisions or reflect historical or social inequities, even if sensitive variables such as gender, race, or sexual orientation are removed. As an example, the researchers point to Amazon, which stopped using a hiring algorithm after finding it favored applications based on words like "executed" or "captured," which were more common on men's résumés.
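To make that mechanism concrete, here is a minimal sketch of proxy bias, built on purely synthetic data rather than Amazon's actual pipeline: a model that never sees the sensitive attribute still scores the two groups differently, because a correlated word feature leaks the removed column back in.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hidden sensitive attribute (0 or 1); never shown to the model.
group = rng.integers(0, 2, size=n)

# Proxy feature: a resume word far more common in one group
# (standing in for "executed"/"captured" in the Amazon anecdote).
has_word = (rng.random(n) < np.where(group == 1, 0.7, 0.1)).astype(float)

# Historical labels encode past bias: group 1 was hired more often.
hired = (rng.random(n) < np.where(group == 1, 0.6, 0.3)).astype(int)

# Train only on the "neutral" word feature; the sensitive column is gone.
model = LogisticRegression().fit(has_word.reshape(-1, 1), hired)

scores = model.predict_proba(has_word.reshape(-1, 1))[:, 1]
print("mean hire score, group 0:", round(scores[group == 0].mean(), 3))
print("mean hire score, group 1:", round(scores[group == 1].mean(), 3))
# The gap shows the proxy word reintroduces the removed attribute.
```

Dropping the sensitive column is not enough; any feature correlated with it can stand in for it.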
Flawed data sampling is another concern, the trio writes: groups can be overrepresented or underrepresented in the training data that teaches AI algorithms to make decisions. For example, facial analysis technologies analyzed by MIT researchers Joy Buolamwini and Timnit Gebru had higher error rates for minorities, especially minority women, potentially because those groups were underrepresented in the training data.
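A common first defense is to audit error rates per subgroup instead of in aggregate, which is essentially what the MIT study did for commercial facial analysis systems. The sketch below, with simulated predictions and assumed error rates, shows how a single overall number can hide a much worse rate for an underrepresented group:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# 10% of the data comes from the underrepresented group, and the model
# (simulated here) makes errors on it five times as often.
minority = rng.random(n) < 0.10
y_true = rng.integers(0, 2, size=n)
wrong = rng.random(n) < np.where(minority, 0.25, 0.05)
y_pred = np.where(wrong, 1 - y_true, y_true)

print("overall error:", round((y_pred != y_true).mean(), 3))
print("majority error:", round((y_pred[~minority] != y_true[~minority]).mean(), 3))
print("minority error:", round((y_pred[minority] != y_true[minority]).mean(), 3))
# The aggregate rate (~7%) hides the 25% error rate on the minority group.
```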
In the McKinsey Global Institute article, "Tackling bias in artificial intelligence (and in humans)," Jake Silberg and James Manyika lay out six guidelines AI creators can follow to reduce bias in AI.
The researchers acknowledge that these guidelines won't eliminate bias altogether, but when applied consistently, they can significantly improve the situation.
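As one hypothetical illustration of what applying such a guideline consistently might look like (the metric and threshold here are assumptions, not McKinsey's prescriptions), a team could gate each model release on a simple demographic-parity check:

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def release_check(y_pred: np.ndarray, group: np.ndarray, max_gap: float = 0.10) -> None:
    """Block a model release whose predictions skew too far by group."""
    gap = demographic_parity_gap(y_pred, group)
    if gap > max_gap:
        raise ValueError(f"parity gap {gap:.2f} exceeds threshold {max_gap}")

# Predictions that favor group 1 fail the check.
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0])
group = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
try:
    release_check(y_pred, group)
except ValueError as err:
    print(err)  # parity gap 0.58 exceeds threshold 0.1
```

Running the same check on every release is the point: a one-off audit catches one model, while a consistent process catches the regressions that follow.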