Regulators Want to Know How Financial Institutions Use AI and How They’re Mitigating Risks – Nextgov

Posted: March 31, 2021 at 5:41 am

A group of federal financial regulators says it knows U.S. financial institutions are using artificial intelligence but wants more information on where the technology is being deployed and how those organizations are accounting for the risks involved.

The financial sector is using forms of AI, including machine learning and natural language processing, to automate rote tasks and spot trends humans might miss. But new technologies always carry inherent risks, and AI has those same issues, as well as a host of its own.

On Wednesday, the Board of Governors of the Federal Reserve System, the Bureau of Consumer Financial Protection, the Federal Deposit Insurance Corporation, the National Credit Union Administration and the Office of the Comptroller of the Currency will publish a request for information in the Federal Register seeking feedback on AI uses and risk management in the financial sector.

All of these agencies have some regulatory oversight responsibility, including oversight of the use of new technologies and techniques and the risks associated with any kind of innovation.

"With appropriate governance, risk management, and compliance management, financial institutions' use of innovative technologies and techniques, such as those involving AI, has the potential to augment business decision-making, and enhance services available to consumers and businesses," the request states.

Financial organizations are already using some AI technologies to identify fraud and unusual transactions, personalize customer service, help make decisions on creditworthiness, apply natural language processing to text documents, and support cybersecurity and general risk management.

AI, like most other technologies, has the potential to automate some tasks and can help identify trends human analysts might have missed.

"AI can identify relationships among variables that are not intuitive or not revealed by more traditional techniques," the RFI states. "AI can better process certain forms of information, such as text, that may be impractical or difficult to process using traditional techniques. AI also facilitates processing significantly large and detailed datasets, both structured and unstructured, by identifying patterns or correlations that would be impracticable to ascertain otherwise."

That said, there are risks to deploying the new technology, as there are with any innovation disrupting a sector, such as automating discriminatory processes and policies, creating data leakage and sharing problems, and introducing new cybersecurity weaknesses.

But AI also carries its own specific challenges. The financial agencies cite explainability, broader or more intensive data usage and dynamic updating as examples.

The request for information seeks to understand respondents' views on the use of AI by financial institutions in their provision of services to customers and for other business or operational purposes; appropriate governance, risk management, and controls over AI; and any challenges in developing, adopting, and managing AI.

The request also turns the tables on the agencies themselves, asking institutions what assistance, regulations, laws, and the like would help the sector better manage the promise and risk of AI.

The RFI includes 17 detailed questions. Responses are due 60 days after the publication date, or June 30.

