The Problem With Biased AIs (and How To Make AI Better)

Posted: October 4, 2022 at 1:21 pm

AI has the potential to deliver enormous business value for organizations, and its adoption has been sped up by the data-related challenges of the pandemic. Forrester estimates that almost 100% of organizations will be using AI by 2025, and the artificial intelligence software market will reach $37 billion by the same year.


But there is growing concern around AI bias: situations where AI makes decisions that are systematically unfair to particular groups of people. Researchers have found that AI bias has the potential to cause real harm.

I recently had the chance to speak with Ted Kwartler, VP of Trusted AI at DataRobot, to get his thoughts on how AI bias occurs and what companies can do to make sure their models are fair.

AI bias occurs because human beings choose the data that algorithms use and decide how the results of those algorithms will be applied. Without extensive testing and diverse teams, it is easy for unconscious biases to enter machine learning models, and AI systems then automate and perpetuate those biases.
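To make that kind of testing concrete, here is a minimal sketch of one common fairness check: comparing a model's positive-outcome rates across demographic groups. The data, group labels, and function here are invented for illustration; this is not DataRobot's tooling.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive model outcomes (e.g., loan approvals) per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical model outputs (1 = approved) and demographic labels.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
print(rates)  # {'A': 0.8, 'B': 0.4}

# A large gap in selection rates is a signal to investigate for bias.
gap = max(rates.values()) - min(rates.values())
print(f"selection-rate gap: {gap:.2f}")  # 0.40
```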

For example, a US Department of Commerce study found that facial recognition AI often misidentifies people of color. If law enforcement uses facial recognition tools, this bias could lead to wrongful arrests of people of color.

Several mortgage algorithms in financial services companies have also consistently charged Latino and Black borrowers higher interest rates, according to a study by UC Berkeley.

Kwartler says the business impact of biased AI can be substantial, particularly in regulated industries. Missteps can result in fines or damage a company's reputation. Companies must find thoughtful ways to put AI models into production, and they must test those models to identify potential bias.

Kwartler says good AI is a multidimensional effort across four distinct personas:

AI Innovators: Leaders or executives who understand the business and realize that machine learning can help solve problems for their organization

AI Creators: The machine learning engineers and data scientists who build the models

AI Implementers: Team members who fit AI into existing tech stacks and put it into production

AI Consumers: The people who use and monitor AI, including legal and compliance teams who handle risk management

"When we work with clients," Kwartler says, "we try to identify those personas at the company and articulate risks to each one of those personas a little bit differently, so they can earn trust."

Kwartler also explains why "humble AI" is critical: AI models must demonstrate humility when making predictions so they don't drift into biased territory.

Kwartler told VentureBeat, "If I'm classifying an ad banner at 50% probability or 99% probability, that's kind of that middle range. You have one single cutoff threshold above this line, and you have one outcome. Below this line, you have another outcome. In reality, we're saying there's a space in between where you can apply some caveats, so a human has to go review it. We call that humble AI in the sense that the algorithm is demonstrating humility when it's making that prediction."
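That two-threshold idea translates directly into code. The sketch below is illustrative (the cutoff values and names are my own, not DataRobot's implementation): confident predictions are acted on automatically, while scores in the uncertain middle band are escalated to a human.

```python
def humble_decision(probability, lower=0.40, upper=0.90):
    """Route a model score to an outcome, abstaining in the uncertain band.

    Instead of one cutoff, two thresholds carve out a middle range where
    the prediction is escalated for human review rather than automated.
    """
    if probability >= upper:
        return "auto-positive"   # model is confident enough to act
    if probability <= lower:
        return "auto-negative"   # confident in the other direction
    return "human-review"        # the 'humble' zone: a person decides

for score in (0.99, 0.55, 0.10):
    print(f"{score:.2f} -> {humble_decision(score)}")
# 0.99 -> auto-positive
# 0.55 -> human-review
# 0.10 -> auto-negative
```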

According to DataRobot's State of AI Bias report, 81% of business leaders want government regulation to define and prevent AI bias.

Kwartler believes that thoughtful regulation could clear up a lot of ambiguity and allow companies to move forward and step into the enormous potential of AI. Regulations are particularly critical around high-risk use cases like education recommendations, credit, employment, and surveillance.

Regulation is essential for protecting consumers as more companies embed AI into their products, services, decision-making, and processes.

When I asked Kwartler for his top tips for organizations that want to create unbiased AI, he had several suggestions.

The first recommendation is to educate your data scientists about what responsible AI looks like and how your organizational values should be embedded into the model itself or into the guardrails around it.
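One lightweight way to encode such a guardrail (a hypothetical sketch, not a prescribed DataRobot pattern) is a wrapper that enforces policy before the model is ever called, for instance by refusing to pass protected attributes as features:

```python
# Attributes the organization's policy says the model must never see.
PROTECTED_ATTRIBUTES = {"race", "gender", "religion"}

def guarded_predict(model, record: dict):
    """Strip protected attributes from the input before predicting.

    'model' is any object exposing a predict(dict) method (assumed interface).
    """
    features = {k: v for k, v in record.items()
                if k not in PROTECTED_ATTRIBUTES}
    if not features:
        raise ValueError("no usable features after applying the guardrail")
    return model.predict(features)
```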

Additionally, he recommends transparency with consumers, to help people understand how algorithms create predictions and make decisions. One of the ongoing challenges of AI is that it is seen as a black box, where consumers can see inputs and outputs but have no knowledge of the AI's internal workings. Companies need to strive for explainability, so people can understand how AI works and how it might affect them.
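As one concrete route to that explainability (a generic scikit-learn sketch on synthetic data, not DataRobot's explainability features), permutation importance measures how much each input feature actually drives a model's predictions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a real decisioning dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")
```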

Lastly, he recommends companies establish a grievance process for individuals, to give people a way to have discussions with companies if they feel they have been treated unjustly.

I asked Kwartler for his hopes and predictions for the future of AI, and he said that he believes AI can help us solve some of the biggest problems human beings are currently facing, including climate change.

He shared a story of one of DataRobot's clients, a cement manufacturer, which used a complex AI model to make one of its plants 1% more efficient, saving approximately 70,000 tons of carbon emissions every year.

But to reach the full potential of AI, we need to ensure that we work toward reducing bias and the possible risks AI can bring.

To stay on top of the latest trends in data, business, and technology, check out my book Data Strategy: How To Profit From A World Of Big Data, Analytics And Artificial Intelligence, and make sure you subscribe to my newsletter and follow me on Twitter, LinkedIn, and YouTube.
