AI in Credit Decision-Making Is Promising, but Beware of Hidden Biases, Fed Warns

As financial services firms increasingly turn to artificial intelligence (AI), banking regulators warn that, for all their astonishing capabilities, these tools must be used with caution.

Last week, the Board of Governors of the Federal Reserve (the Fed) held a virtual AI Academic Symposium to explore the application of AI in the financial services industry. Governor Lael Brainard explained that, particularly as financial services become more digitized and shift to web-based platforms, a steadily growing number of financial institutions have come to rely on machine learning to detect fraud, evaluate credit, and aid in operational risk management, among many other functions.[i]

In the AI world, machine learning refers to a model that processes complex data sets and automatically recognizes patterns and relationships, which are in turn used to make predictions and draw conclusions.[ii] Alternative data is information not traditionally used in a particular decision-making process; when it populates the machine learning algorithms in AI-based systems, it fuels their outputs.[iii]

Machine learning and alternative data have special utility in the consumer lending context, where these AI applications allow financial firms to determine the creditworthiness of prospective borrowers who lack credit history.[iv] Using alternative data such as a consumer's education, job function, property ownership, address stability, rent payment history, and even internet browser history and behavioral information, among many other data points, financial institutions aim to expand the availability of affordable credit to so-called "credit invisibles" or "unscorables."[v]
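To make the mechanics concrete, the sketch below (Python with scikit-learn, chosen here purely for illustration) trains a simple classifier on synthetic, hypothetical alternative-data features and scores a thin-file applicant. The feature names, data, and model choice are this article's own illustrative assumptions, not any lender's or the Fed's actual approach.

```python
# Minimal sketch (hypothetical, not any institution's actual underwriting
# model): score applicants who lack a traditional credit file using only
# synthetic "alternative data" features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical alternative-data features for thin-file applicants.
X = np.column_stack([
    rng.integers(0, 2, n),    # rent paid on time (0/1)
    rng.integers(0, 15, n),   # years at current address
    rng.integers(0, 2, n),    # owns property (0/1)
    rng.normal(0, 1, n),      # standardized education/behavioral signal
])

# Synthetic repayment outcome loosely tied to those features.
logits = 1.5 * X[:, 0] + 0.1 * X[:, 1] + 0.8 * X[:, 2] + 0.5 * X[:, 3] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Score a new applicant from alternative data alone.
applicant = np.array([[1, 3, 0, 0.2]])
print("estimated repayment probability:", model.predict_proba(applicant)[0, 1])
```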

Yet, as Brainard cautioned last week, machine-learning AI models can be so complex that even their developers lack visibility into how the models actually classify and process what could amount to thousands of nonlinear data elements.[vi] This obscuring of an AI model's internal logic, known as the "black box" problem, raises questions about the reliability and ethics of AI decision-making.[vii]
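Because such a model's internal logic cannot be read off directly, developers typically resort to outside-in probes. The sketch below is a hedged illustration of one such probe, permutation importance, applied to a black-box model trained on synthetic data; it is not a technique the Fed endorses or discusses, only an example of why indirect tooling is needed when the model itself is opaque.

```python
# Sketch of the "black box" issue on synthetic data: an accurate ensemble of
# hundreds of trees offers no single readable decision rule, so developers
# probe it from the outside, here with permutation importance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic credit-style data with many interacting features.
X, y = make_classification(n_samples=2_000, n_features=30, n_informative=10,
                           random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops: an
# outside-in view of what the opaque model relies on.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
top = np.argsort(result.importances_mean)[::-1][:5]
for i in top:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```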

When using AI machine learning to evaluate access to credit, the opaque and complex data interactions relied upon by AI could result in discrimination by race, or even lead to "digital redlining," if not intentionally designed to address this risk.[viii] This can happen, for example, when intricate data interactions containing historical information such as educational background and internet browsing habits become proxies for race, gender, and other protected characteristics, leading to biased algorithms that discriminate.[ix]
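The proxy mechanism can be seen on purely synthetic data: in the sketch below, a protected attribute is deliberately withheld from training, yet a correlated feature (standing in for something like browsing habits) causes the model's scores to diverge by group anyway. All names, data, and the model are hypothetical illustrations.

```python
# Hypothetical illustration of proxy discrimination: the protected attribute
# is never given to the model, yet a correlated feature lets the model's
# scores diverge by group. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

protected = rng.integers(0, 2, n)           # never used as a model input
proxy = protected + rng.normal(0, 0.3, n)   # e.g., a browsing-habit signal
income = rng.normal(50, 10, n)              # a legitimate underwriting input

# Historical outcomes that were themselves biased against one group.
y = ((income / 10 - 2 * protected + rng.normal(0, 1, n)) > 2).astype(int)

X = np.column_stack([proxy, income])        # protected attribute excluded
scores = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

# Average scores diverge by group even though the model never "saw" the group.
for g in (0, 1):
    print(f"group {g}: mean predicted score = {scores[protected == g].mean():.2f}")
```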

Consumer protection laws, among other aspects of the existing regulatory framework, cover AI-related credit decision-making activities to some extent. Still, in light of the rising complexity of AI systems and their potentially inequitable consequences, AI-focused legal reforms may be needed. At this time, to help ensure that financial services firms are prepared to manage these risks, the Fed has called on stakeholders, from financial services firms to consumer advocates, civil rights organizations, other businesses, and the general public, to provide input on responsible AI use.[x]

[i] Lael Brainard, Governor, Bd. of Governors of the Fed. Reserve Sys., AI Academic Symposium: Supporting Responsible Use of AI and Equitable Outcomes in Financial Services (Jan. 12, 2021), available at https://www.federalreserve.gov/newsevents/speech/brainard20210112a.htm.

[ii] Pratin Vallabhaneni and Margaux Curie, Leveraging AI and Alternative Data in Credit Underwriting: Fair Lending Considerations for Fintechs, 23 No. 4 Fintech L. Rep. NL 1 (2020).

[iii] Id.

[iv] Id.; Brainard, supra n. 1.

[v] Vallabhaneni and Curie, supra n. 2; Kathleen Ryan, The Big Brain in the Black Box, ABA Banking Journal (May 2020), https://bankingjournal.aba.com/2020/05/the-big-brain-in-the-black-box/.

[vi] Brainard, supra n. 1; Ryan, supra n. 5.

[vii] Brainard, supra n. 1; Ryan, supra n. 5.

[viii] Brainard, supra n.1.

[ix] Id. (citing Carol A. Evans and Westra Miller, From Catalogs to Clicks: The Fair Lending Implications of Targeted, Internet Marketing, Consumer Compliance Outlook (2019)).

[x] Id.
