AI legislation must address bias in algorithmic decision-making systems

Posted: July 12, 2021 at 7:52 am

In early June, border officials quietly deployed the mobile app CBP One at the U.S.-Mexico border to streamline the processing of asylum seekers. While the app will reduce manual data entry and speed up the process, it also relies on controversial facial recognition technologies and stores sensitive information on asylum seekers prior to their entry to the U.S. The issue here is not the use of artificial intelligence per se, but what it means in relation to the Biden administration's pre-election promise of civil rights in technology, including AI bias and data privacy.

When the Democrats took control of both the House and Senate in January, onlookers were optimistic that there was an appetite for a federal privacy bill and legislation to stem bias in algorithmic decision-making systems. "This is long overdue," said Ben Winters, an Equal Justice Works Fellow at the Electronic Privacy Information Center (EPIC), who works on matters related to AI and the criminal justice system. "The current state of AI legislation in the U.S. is disappointing, [with] a majority of AI-related legislation focused almost solely on investment, research, and maintaining competitiveness with other countries, primarily China," Winters said.

But there is some promising legislation waiting in the wings. The Algorithmic Justice and Online Platform Transparency Act, introduced by Sen. Edward Markey and Rep. Doris Matsui in May, clamps down on harmful algorithms, encourages transparency of websites' content amplification and moderation practices, and proposes a cross-government investigation into discriminatory algorithmic processes throughout the economy.

Local bans on facial recognition are also picking up steam across the U.S. So far this year, bills or resolutions related to AI have been introduced in at least 16 states. They include California and Washington (accountability for automated decision-making apps); Massachusetts (data privacy and transparency in AI use in government); Missouri and Nevada (technology task forces); and New Jersey (prohibiting certain discrimination by automated decision-making tech). Most of these bills are still pending, though some have already failed, such as Maryland's Algorithmic Decision Systems: Procurement and Discriminatory Acts bill.

The Wyden Bill from 2019 and more recent proposals, such as the one from Markey and Matsui, provide much-needed direction, said Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. "Companies are looking to the federal government for guidance and standards-setting," Lin said. "Likewise, AI laws can protect technology developers in the new and tricky cases of liability that will inevitably arise."

Transparency is still a huge challenge in AI, Lin added: "They're black boxes that seem to work OK even if we don't know how, but when they fail, they can fail spectacularly, and real human lives could be at stake."
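To make the black-box problem concrete, here is a minimal, hypothetical sketch in Python of permutation importance, one common way practitioners probe which inputs an opaque model actually relies on. The data, model choice, and feature count below are invented for illustration; nothing here comes from the article or from any system it describes.

    # A hypothetical sketch: probing a "black box" classifier with
    # permutation importance. All data and model choices here are
    # synthetic stand-ins, not anything described in the article.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic data standing in for an opaque decision-making task.
    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much the score drops;
    # a large drop means the model leans heavily on that feature.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for i, imp in enumerate(result.importances_mean):
        print(f"feature {i}: mean importance drop {imp:.3f}")

A probe like this reveals which features matter to the model, but not why they matter, which is exactly the gap Lin is describing.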

Though the Wyden Bill is a good starting point, giving the Federal Trade Commission broader authority and requiring impact assessments that cover data sources, bias, fairness, privacy, and more, it would help to expand compliance standards and policies, Winters said. "The main benefit to [industry] would be some clarity about what their obligations are and what resources they need to devote to complying with appropriate regulations," he said. But there are drawbacks too, especially for companies that rely on fundamentally flawed or discriminatory data, as it would be hard for them to comply accurately without endangering their business or inviting regulatory intervention, Winters added.

Another drawback, Lin said, is that even if established players support a law to prevent AI bias, it isn't clear what bias looks like in terms of machine learning. "It's not just about treating people differently because of their race, gender, age, or whatever, even if these are legally protected categories," Lin said. "Imagine if I were casting for a movie about Martin Luther King, Jr. I would reject every actor who is a teenage Asian girl, even though I'm rejecting them precisely because of age, ethnicity, and gender. Algorithms, however, don't understand context."
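One way practitioners try to pin down what "bias" means in machine learning terms is with formal fairness metrics. The sketch below is a hypothetical illustration, not anything from the article: it computes the demographic parity difference, the gap in positive-outcome rates between two groups, using an invented function name and invented data.

    # A hypothetical sketch: demographic parity difference, the gap in
    # positive-prediction rates between two groups. All data is invented.
    import numpy as np

    def demographic_parity_difference(predictions, group):
        """Absolute gap in positive-prediction rates between groups 0 and 1."""
        predictions = np.asarray(predictions)
        group = np.asarray(group)
        return abs(predictions[group == 0].mean() - predictions[group == 1].mean())

    # Hypothetical model outputs (1 = favorable decision) and a protected
    # attribute (0 or 1) for ten people.
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    group = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
    print(f"{demographic_parity_difference(preds, group):.2f}")  # prints 0.20

A metric like this can flag a disparity, but deciding whether the disparity is justified, as in Lin's casting example, still requires the context that algorithms lack.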

The EU's General Data Protection Regulation (GDPR) is a good example to emulate, even though it is aimed not at AI specifically but at underlying data practices. "GDPR was fiercely resisted at first, but it's now generally regarded as a very beneficial regulation for individual, business, and societal interests," Lin said. There is also the coercive effect of other countries signing an international law, making a country think twice or three times before it acts against the treaty and elicits international condemnation. "Even if the U.S. is too laissez-faire in its general approach to embrace guidelines [like the EU's], they still will want to consider regulations in other major markets."
