How Fighting AI Bias Can Make Fintech Even More Inclusive – InformationWeek

Posted: February 5, 2022 at 5:09 am

A key selling point for emerging fintech is its potential to expand financial access to more people -- but biases built into the technology can do the opposite.

The rise of online lenders, digital-first de novo banks, digital currency, and decentralized finance speaks to a desire for greater flexibility and participation in the money-driven world. While it might be possible to use such resources to better serve unbanked and underbanked segments of the population, how the underlying tech is encoded and structured might cut off or impair access for certain demographics.

Sergio Suarez Jr., CEO and founder of TackleAI, a developer of an AI platform for detecting critical information in unstructured data and documents, says that when machine learning or AI is deployed to look for patterns and there is a history of marginalizing certain people, the marginalization effectively becomes data. "If the AI is learning from historical data, and historically we've been not so fair to certain groups, that's what the AI is going to learn," he says. "Not only learn it, but reinforce itself."
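That feedback loop is easy to reproduce. Below is a minimal sketch with entirely synthetic data (the groups, income scale, and 30% denial rate are invented for illustration and have nothing to do with TackleAI's systems or any real lender): two applicant groups have identical incomes, but one was historically denied at random, and a model trained on those outcomes learns to penalize group membership itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Two applicant groups with identical underlying creditworthiness.
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
income = rng.normal(5.0, 1.2, n)     # income in $10k units, same for both groups

# Historical approvals: the same income cutoff for everyone, except that
# group B was also denied 30% of the time regardless of income -- the
# historical marginalization, now encoded as training labels.
approved = (income > 4.5).astype(int)
denied_unfairly = (group == 1) & (rng.random(n) < 0.3)
approved[denied_unfairly] = 0

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# The learned coefficient on `group` comes out strongly negative: the model
# now penalizes group B membership directly, reproducing the old pattern.
print(dict(zip(["income", "group"], model.coef_[0])))
```

Even though both groups have identical incomes, the model treats group membership as signal, because the training labels say it is.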

Fintech has the potential to improve efficiency and democratize economic access. Machine learning models, for example, have sped up the lending industry, shortening the days and weeks it once took to figure out mortgages or interest rates down to seconds, Suarez says. The issue, he says, is that certain demographics have historically been charged higher interest rates even when they met the same criteria as another group. Those biases will continue, Suarez says, as the AI repeats such decisions.

Essentially, the technology regurgitates the biases that people have held because that is what the data shows. For example, AI might detect names of specific ethnicities and then use that to categorize and assign unfavorable attributes to such names. This might influence credit scores or eligibility for loans and credit. "When my wife and I got married, she went from a very Polish last name to a Mexican last name," Suarez says. "Three months later, her credit score was 12 points lower." He says credit score companies have not revealed precisely how the scores were calculated, but the only material change was the new last name.
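To make the proxy mechanism concrete, here is a deliberately oversimplified sketch. The surnames, risk weights, and scores are all invented, and no real scoring model is this transparent; the point is only that if any name-derived signal reaches the model, changing nothing but the surname can move the score.

```python
# Purely illustrative: a lookup table a biased pipeline might have learned
# from historical data. The weights are made up.
SURNAME_RISK = {
    "Kowalski": 0.02,   # surname from a historically favored group
    "Suarez": 0.14,     # surname from a historically penalized group
}

def credit_score(base: int, surname: str) -> int:
    """Score an applicant; only the surname differs between calls."""
    return round(base - 100 * SURNAME_RISK.get(surname, 0.05))

print(credit_score(720, "Kowalski"))  # -> 718
print(credit_score(720, "Suarez"))    # -> 706, 12 points lower
```

Nothing about the applicant changes except the name, yet the score moves -- which is why opaque name-derived features are dangerous inputs.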

Structural factors in legacy code can also be an issue, Suarez says. For instance, code from the 1980s and early 1990s tended to treat hyphens, apostrophes, or accent marks as foreign characters, he says, which gummed up the works. That can be problematic when AI built around such code tries to deal with people or institutions that have non-English names. "If it's looking at historical data, it's really neglecting years, sometimes decades' worth of information, because it will try to sanitize the data before it goes into these models," Suarez says. "Part of the temptation is to get rid of things that look like garbage or difficult things to recognize."
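The failure mode is simple to demonstrate. A short sketch (with hypothetical names) contrasting an ASCII-only "sanitizer" of that era with modern Unicode normalization:

```python
import re
import unicodedata

def legacy_sanitize(name: str) -> str:
    """1980s-style cleaning: anything outside [A-Za-z ] is treated as garbage."""
    return re.sub(r"[^A-Za-z ]", "", name)

def unicode_normalize(name: str) -> str:
    """Keep the name intact; just normalize to a canonical Unicode form."""
    return unicodedata.normalize("NFC", name).strip()

for name in ["O'Brien", "Núñez-García", "D'Amato"]:
    print(f"{name!r:18} legacy={legacy_sanitize(name)!r:14} "
          f"normalized={unicode_normalize(name)!r}")
```

The legacy path silently turns "Núñez-García" into "NezGarca" and "O'Brien" into "OBrien" -- records that then fail to match, or drop out of the training data entirely.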

An essential factor in dealing with possible bias in AI is to acknowledge that there are segments of the population that have been denied certain access for years, he says, and to make access truly equal. "We can't just continue to do the same things that we've been doing, because we'll reinforce the same behavior that we've had for decades," Suarez says.

More often than not, he says, the developers of the algorithms and models that drive machine learning and AI do not plan in advance to ensure their code does not repeat historical biases. "Mostly you have to write patches later."

Amazon, for example, had a now-scrapped AI recruiting tool that Suarez says gave much higher preference to men because the company had historically hired more men, despite women applying for the same jobs. That bias was patched and resolved, he says, but other concerns remain. "These machine learning models -- no one really knows what they're doing."

That raises the question of how AI in fintech decides that loan interest rates should be higher or lower for particular individuals. "It finds its own patterns, and it would take us way too much processing power to unravel why it's coming to those conclusions," Suarez says.

Institutional patterns, such as fees for low balances and overdrafts, can also disproportionately affect people with limited income, he says. "People who were poor end up staying poor," Suarez says. "If we have machine learning algorithms mimic what we've been doing, that will continue forward." He says machine learning models in fintech should be given rules ahead of time, such as not using an individual's race as a data point for setting loan rates.
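One way such a rule might look in practice -- a hedged sketch with hypothetical column names, not a complete fairness toolkit -- is to strip protected attributes before training and then audit what remains for proxies, since dropping a column accomplishes nothing if another feature still encodes it:

```python
import pandas as pd

PROTECTED = ["race", "gender"]   # hypothetical column names

def enforce_exclusions(df: pd.DataFrame) -> pd.DataFrame:
    """Rule set ahead of time: the model never sees protected attributes."""
    return df.drop(columns=[c for c in PROTECTED if c in df.columns])

def proxy_report(features: pd.DataFrame, protected: pd.Series) -> pd.Series:
    """Audit remaining numeric features for proxies by checking how
    strongly each one correlates with the protected attribute."""
    codes = protected.astype("category").cat.codes
    return features.select_dtypes("number").corrwith(codes).abs()

# Tiny invented dataset for illustration.
df = pd.DataFrame({
    "race": ["A", "B", "A", "B"],
    "zip_income": [80, 40, 78, 42],   # a plausible geographic proxy
    "credit_lines": [5, 4, 6, 5],
})
features = enforce_exclusions(df)
print(proxy_report(features, df["race"]))  # zip_income flags as a near-perfect proxy
```

The audit step is the important design choice: excluding race as an input is necessary but not sufficient when features like ZIP code or neighborhood income carry the same information.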

Organizations may want to be more cognizant of these issues in fintech, yet shortsighted practices in assembling development teams can stymie such efforts. "The teams that are being put together to work on these machine learning algorithms need to be diverse," Suarez says. "If we're going to be building algorithms and machine learning models that reflect an entire population, then we should have the people building them also represent the population."

