The Apple Card algo issue: What you need to know about A.I. in everyday life – CNBC

Posted: November 17, 2019 at 2:33 pm

Apple CEO Tim Cook introduces Apple Card during a launch event at Apple headquarters on Monday, March 25, 2019, in Cupertino, California.

Noah Berger | AFP | Getty Images

When tech entrepreneur David Heinemeier Hansson recently took to Twitter saying the Apple Card gave him a credit limit 20 times higher than his wife's, despite the fact that she had a higher credit score, it may have been the first major headline about algorithmic bias you read in your everyday life. It was not the first, though: there have already been major stories about potential algorithmic bias in child welfare and insurance, and it won't be the last.

Heinemeier Hansson, the chief technology officer of project management software firm Basecamp, was not the only tech figure speaking out about algorithmic bias and the Apple Card. In fact, Apple's own co-founder Steve Wozniak said he had a similar experience. Presidential candidate Elizabeth Warren even got in on the action, bashing Apple and Goldman Sachs, and New York regulators said they are launching a probe.

Goldman Sachs, which administers the card for Apple, has denied the allegations of algorithmic gender bias, and has also said it will examine credit evaluations on a case-by-case basis when applicants feel the card's determination is unfair.

Goldman spokesman Patrick Lenihan said algorithmic bias is an important issue, but the Apple Card is not an example of it. "Goldman Sachs has not and will never make decisions based on factors like gender, race, age, sexual orientation or any other legally prohibited factors when determining credit worthiness. There is no 'black box,'" he said, referring to a term often used to describe algorithms. "For credit decisions we make, we can identify which factors from an individual's credit bureau issued credit report or stated income contribute to the outcome. We welcome a discussion of this topic with policymakers and regulators."

As AI and the algorithms that underlie technology become an increasingly large part of everyday life, it's important to know more about how they work. One of the major claims made by technology firms that use algorithms in decisions like credit scoring is that algorithms are less biased than human beings. That argument is being applied in areas like hiring: The state of California recently passed a rule to encourage the development of more job-based algorithms to remove human bias from the hiring process. But it is far from proven that an AI system, which relies on code written by humans and on data fed into it as training material, will not reflect the existing biases of our world.

Here are key points about AI algorithms that will factor in future headlines.

As Heinemeier Hansson and his wife found out, AI systems are becoming more commonplace in areas that everyday people rely on.

This technology is being introduced not only in credit and job hiring but also in insurance, mortgages and child welfare.

In 2016, Allegheny County, Pennsylvania, introduced a tool called the Allegheny Family Screening Tool. It is a predictive-risk modeling tool that is used to help with child welfare call-screening decisions when concerns of child maltreatment are raised to the county's department of human services.

The system collects data on each person named in a referral and uses it to create an "overall family score." That score represents the predicted likelihood of a future adverse event.
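To make that idea concrete, here is a minimal, hypothetical sketch of how a predictive-risk score of this kind could be computed. It is not Allegheny County's actual model; the feature names, weights and scoring scale are assumptions for illustration only.

```python
# Hypothetical sketch of a predictive-risk score. This is NOT the actual
# Allegheny Family Screening Tool; features, weights and scale are invented.
import math

def risk_score(features, weights, bias=-2.0):
    """Combine referral features into a screening score via logistic regression."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    probability = 1 / (1 + math.exp(-z))   # modeled likelihood of a future adverse event
    return round(probability * 20)         # scaled to a 20-point style screening score

# Invented example features for one referral
weights = {"prior_referrals": 0.6, "child_age": -0.05, "prior_services": 0.4}
referral = {"prior_referrals": 3, "child_age": 2, "prior_services": 1}
print(risk_score(referral, weights))       # e.g. 10 on a 20-point scale
```

Whatever the real model's internals, the key point is the same: a single number, derived from historical data, is placed in front of the person deciding whether to screen in a referral.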

Allegheny did face some backlash, but one conclusion was that the tool created "less bad bias." Other places, including Los Angeles, have used similar technology in an attempt to improve child welfare. It is an example of how AI systems are being used in ways that can significantly affect people's lives, and why it is important to understand how those systems can be flawed.

Most AI is created through a process called machine learning, in which a computer is taught by being fed thousands of pieces of data until it learns the patterns of the data set on its own.

An example would be giving an AI system thousands of pictures of dogs, with the purpose of teaching the system what a dog is. From there the system would be able to look at a photo and decide whether it is a dog or not based on that past data.

So what if the data you are feeding a system is 75% golden retrievers and 25% Dalmatians?
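Here is a minimal sketch, using Python with numpy and scikit-learn, of what can happen in that situation: a simple model trained on a 75/25 split learns to favor the over-represented breed when a borderline example comes along. The features and numbers are made up purely to illustrate the effect.

```python
# Toy illustration: how an imbalanced training set skews predictions
# toward the majority class. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 750 "golden retriever" examples and 250 "Dalmatian" examples,
# described by two made-up, overlapping features.
golden = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(750, 2))
dalmatian = rng.normal(loc=[1.0, 1.0], scale=1.0, size=(250, 2))

X = np.vstack([golden, dalmatian])
y = np.array([0] * 750 + [1] * 250)   # 0 = golden retriever, 1 = Dalmatian

model = LogisticRegression().fit(X, y)

# A borderline dog, halfway between the two groups, gets pulled toward
# the over-represented class.
borderline = np.array([[0.5, 0.5]])
print(model.predict_proba(borderline))   # higher probability for class 0
```

Swap "dog breeds" for "past loan applicants" or "past hires" and the same dynamic applies: whatever is over-represented in the training data shapes what the model expects to see.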

Sarah Myers West, a postdoctoral researcher at the AI Now Institute, says these systems are built to reflect the data they are fed, and that data can be built on bias.

"These systems are being trained on data that's reflective of our wider society," West said. "Thus, AI is going to reflect and really amplify back past forms of inequality and discrimination."

One real-world example: While hiring decisions made by human managers can undoubtedly be biased, debate remains over whether algorithmic job-screening technology actually removes that bias. The learning process could absorb the biases of the data it is fed, for example the resumes of top-performing candidates at top firms.

The AI Now Institute has also found a lack of diversity among the people who are creating AI systems. An April 2019 study found that only 15% of the AI research staff at Facebook are women, and only 4% of the company's total workforce is black. Google's workforce is even less diverse, with only 10% of its AI research staff being women and 2.5% of its workers black.

Joy Buolamwini, a computer scientist at MIT, was working on a project that would project digital masks onto a mirror when she found that the generic facial recognition software she was using would not detect her face unless she wore a white mask.

She found that the system could not identify the face of a black woman because the data set it was running on was composed overwhelmingly of lighter-skinned faces.

"Quite clearly, it's not a solved problem," West said. "It's actually a very real problem that keeps resurfacing in AI systems on a weekly, almost daily basis."

AI algorithms are usually proprietary to the companies that create them.

"Researchers face really significant challenges understanding where there's algorithmic bias because so many of them are opaque," West said.

Even if we could see them, it doesn't mean we would understand them, says Dipayan Ghosh, co-director of the Digital Platforms and Democracy Project and a Shorenstein Fellow at Harvard University.

"It's difficult to draw any conclusions based on source code," Ghosh said. "Apple's proprietary creditworthiness algorithm is something that not even Apple can easily pin down, and say, 'Okay, here is the code for this,' because it probably involves a lot of different sources of data and a lot of different implementations of code to analyze that data in different siloed areas of the company."

To take things a step further, companies like Apple write their code to be legible to their own employees, and it may not make sense to those outside the company.

Right now there is little government oversight of AI systems.

"When AI systems are being used in areas that are of incredible social, political and economic importance, we have a stake in understanding how they are affecting our lives," West said. "We currently don't really have the avenues for the kind of transparency we would need for accountability."

One presidential candidate is trying to change that. New Jersey Senator Cory Booker sponsored a bill earlier this year called "The Algorithmic Accountability Act."

The bill would require companies to audit algorithms that could create unfair or discriminatory situations for Americans. Under the bill, the Federal Trade Commission would be able to create regulations to "conduct impact assessments of highly sensitive automated decision systems." That requirement would apply to systems under the FTC's jurisdiction, whether new or existing.

The description of the bill on Booker's website directly cites past instances of algorithmic malpractice at Facebook and Amazon.

Booker isn't the first politician to call for better regulation of AI. In 2016, the Obama administration called for development within the industry of algorithmic auditing and external testing of big data systems.

While government oversight is rare, an increasing practice is third-party auditing of algorithms.

The process involves an outside entity coming in and analyzing how the algorithm is built without revealing trade secrets, which are a large reason why algorithms are kept private.
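One common outcome-based audit technique, sketched below on assumed data, is to compare how often different groups receive favorable decisions from the "black box" and flag large gaps for closer review. The numbers and the 0.8 "four-fifths" threshold are standard fairness-audit conventions, not details reported from the Apple Card case.

```python
# Illustrative sketch of an outcome-based audit: comparing decision rates
# across groups without any access to the model's source code.
# The observed data below is invented for illustration.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs observed from the black box."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def disparate_impact(rates, protected, reference):
    """Ratio of approval rates; values below ~0.8 often trigger closer review."""
    return rates[protected] / rates[reference]

observed = [("men", True)] * 80 + [("men", False)] * 20 \
         + [("women", True)] * 55 + [("women", False)] * 45

rates = approval_rates(observed)
print(rates)                                    # {'men': 0.8, 'women': 0.55}
print(disparate_impact(rates, "women", "men"))  # 0.6875 -> flag for review
```

An audit like this examines outcomes rather than code, which is precisely why companies can submit to it without exposing trade secrets.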

Ghosh says this is happening more frequently, but not all of the time.

"It happens when companies feel compelled by public opinion or public sway to do something because they don't want to be called out having had no audits whatsoever," Ghosh said.

Ghosh also said that regulatory action can happen, as seen in the FTC's numerous investigations into Google and Facebook. "If a company is shown to harmfully discriminate, then you could have a regulatory agency come in and say 'Hey, we're either going to sue you in court, or you're going to do X,Y and Z. Which one do you want to do?'"

This story has been updated to include a comment from Goldman Sachs that it has not and will never make decisions based on factors like gender, race, age, sexual orientation or any other legally prohibited factors when determining credit worthiness.
