Ethical AI: What can the world learn from California?

Posted: October 20, 2019 at 10:32 pm

Amid growing concern that AI-enabled systems may perpetuate discrimination and bias and infringe upon privacy, California has introduced several bills intended to curb these harms. Chief among them are bills targeting specific AI-enabled technologies such as facial recognition systems. On May 14, 2019, San Francisco became the first major US city to ban the use of facial recognition technology by city agencies and law enforcement. Two months later, the neighbouring city of Oakland implemented similar restrictions.

These may be city-level laws, but their passing has influenced state and federal legislation. In California, a bill called the Body Camera Accountability Act seeks to prohibit the use of facial recognition in police body cameras, while another would require businesses to publicly disclose their use of facial recognition technology. At the federal level, four pieces of legislation are currently being proposed to limit the use of this technology, especially in law enforcement.

In the wake of the EU's transformative General Data Protection Regulation, California passed the country's first domestic data privacy law. The California Consumer Privacy Act (CCPA) became law in 2018 and is set to go into effect in January 2020. The CCPA gives consumers the right to ask businesses to disclose the data they hold on them, request deletion of that data, restrict the sale of their data to third parties, and sue for data breaches. The Act has made its influence felt at the federal level too, prompting the development of a federal data privacy law. These data privacy laws are particularly relevant to data-dependent fields like AI.

In response to the serious threat that AI-enabled bots and deepfakes pose to election integrity, the California government has pushed forward progressive pieces of legislation that have influenced federal and international efforts. Passed in 2018, the Bots Disclosure Act makes it unlawful in California to use a bot to influence a commercial transaction or a vote in an election without disclosure. This includes bots deployed by companies in other states and countries, which must either develop bespoke standards for Californian residents or harmonize their strategies across jurisdictions to maintain efficiency. At the federal level, the Bots Disclosure and Accountability Act includes many of the same strategies proposed in California. The California Anti-Deepfakes Bill seeks to mitigate the spread and impact of malicious political deepfakes before an election, and the federal Deepfakes Accountability Act seeks to do the same.

While California may be leading the implementation of responsible AI governance strategies, ill-conceived laws, especially those that shape similar strategies at the federal and international levels, can cause more harm than good. Take, for example, the Bots Disclosure Act: some commentators have decried a lack of clarity in the Act around what is and is not deemed a bot, and around the roles and responsibilities of parties, especially platforms, in identifying and stemming the influence of malicious bots. This weakens its implementability and impact. Federal initiatives modeled after California's law would only further erode accountability and public trust.

There is also the risk that beneficial legislation could become unhelpfully politicized. We are seeing increasing federal pushback against the "California effect", as exemplified by recent efforts to revoke California's ability to implement stricter emission standards than federal guidelines. Federal initiatives may seek to curtail the state's influence on national and international standards for responsible AI governance. This is already being witnessed in federal efforts to preempt the CCPA.

California is quickly pushing forward AI legislation, ranging from oversight of discrimination and bias to protecting privacy and election integrity. California's progressive AI legislation has already had a marked influence on federal efforts, and will likely have global reach if California-based AI companies, including Google, Facebook, and OpenAI, alter their practices. The state has an opportunity and an obligation to lead the way in establishing effective standards and oversight that ensure AI systems are developed and deployed in a safe and responsible manner. California can provide guidance on responsible AI governance for the rest of the country and the world, but due diligence must be taken to identify and mitigate any negative impacts before it's too late.

License and Republishing

World Economic Forum articles may be republished in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
