US/EU Initiative Spotlights Cooperation, Differing Approaches To Regulation Of Artificial Intelligence Systems


In late September 2021, representatives from the U.S. and the European Union met to coordinate objectives related to the U.S.-EU Trade and Technology Council, and high on the Council's agenda were the societal implications of the use of artificial intelligence systems and technologies ("AI Systems"). The Council's public statements on AI Systems affirmed its "willingness and intention to develop and implement trustworthy AI" and a "commitment to a human-centric approach that reinforces shared democratic values," while acknowledging concerns that authoritarian regimes may develop and use AI Systems to curtail human rights, suppress free speech, and enforce surveillance systems. Given the increasing focus on the development and use of AI Systems from both users and investors, it is becoming imperative for companies to track policy and regulatory developments regarding AI on both sides of the Atlantic.

At the heart of the debate over the appropriate regulatory strategy is a growing concern over algorithmic bias: the notion that the algorithm powering the AI Systems in question has bias "baked in" that will manifest in its results. Examples of this issue abound: job applicant systems favoring certain candidates over others, or facial recognition systems treating African Americans differently than Caucasians. These concerns have been amplified over the last 18 months as social justice movements have highlighted the real-world implications of algorithmic bias.

In response, some prominent tech industry players have posted position statements on their public-facing websites regarding their use of AI Systems and other machine learning practices. These statements typically address issues such as bias, fairness, and disparate impact stemming from the use of AI Systems, but often are not binding or enforceable in any way. As a result, these public statements have not quelled the debate around regulating AI Systems; rather, they highlight the disparate regulatory regimes and business needs that these companies must navigate.

When the EU's General Data Protection Regulation ("GDPR") came into force in 2018, it provided prescriptive guidance regarding the treatment of automated decision-making practices or profiling. Specifically, Article 22 is generally understood to implicate technology involving AI Systems. Under that provision, EU data subjects have the right not to be subject to decisions based solely on automated processing (and without human intervention) which may produce legal effects for the individual. In addition to Article 22, data processing principles in the GDPR, such as data minimization and purpose limitation practices, are applicable to the expansive data collection practices inherent in many AI Systems.

Consistent with the approach enacted in GDPR, recently proposed EU legislation regarding AI Systems favors tasking businesses, rather than users, with compliance responsibilities. The EU's Artificial Intelligence Act (the "Draft AI Regulation"), released by the EU Commission in April 2021, would require companies (and users) who use AI Systems as part of their business practices in the EU to limit the harmful impact of AI. If enacted, the Draft AI Regulation would be one of the first legal frameworks for AI designed to "guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU." The Draft AI Regulation adopts a risk-based approach, categorizing AI Systems as unacceptable risk, high risk, and minimal risk. Much of the focus and discussion with respect to the Draft AI Regulation has concerned (i) what types of AI Systems are considered high-risk, and (ii) the resulting obligations on such systems. Under the current version of the proposal, activities that would be considered "high-risk" include employee recruiting and credit scoring, and the obligations for high-risk AI Systems would include maintaining technical documentation and logs, establishing a risk management system and appropriate human oversight measures, and requiring incident reporting with respect to AI System malfunctioning.

While AI Systems have previously been subject to guidelines from governmental entities and industry groups, the Draft AI Regulation will be the most comprehensive AI Systems law in Europe, if not the world. In addition to the substantive requirements previewed above, it proposes establishing an EU AI board to facilitate implementation of the law, allowing Member State regulators to enforce the law, and authorizing fines up to 6% of a company's annual worldwide turnover. The draft law will likely be subject to a period of discussion and revision with the potential for a transition period, meaning that companies that do business in Europe or target EU data subjects will have a few years to prepare.

Unlike the EU, the U.S. lacks comprehensive federal privacy legislation and has no law or regulation as specifically tailored to AI activities. Enforcement of violations of privacy practices, including data collection and processing practices through AI Systems, primarily originates from Section 5 of the Federal Trade Commission ("FTC") Act, which prohibits unfair or deceptive acts or practices. In April 2020, the FTC issued guidance regarding the use of AI Systems designed to promote fairness and equity. Specifically, the guidance directed that the use of AI tools should be "transparent, explainable, fair, and empirically sound, while fostering accountability." The change in administration has not changed the FTC's focus on AI systems. First, public statements from then-FTC Acting Chair Rebecca Slaughter in February 2021 cited algorithms that result in bias or discrimination, or AI-generated consumer harms, as a key focus of the agency. Then, the FTC addressed potential bias in AI Systems on its website in April 2021 and signaled that unless businesses adopt a transparency approach, test for discriminatory outcomes, and are truthful about data use, FTC enforcement actions may result.

At the state level, recently enacted privacy laws in California, Colorado and Virginia will enable consumers in those states to opt out of the use of their personal information in the context of "profiling," defined as a form of automated processing performed on personal information to evaluate, analyze, or predict aspects related to individuals. While AI Systems are not specifically addressed, the three new state laws require data controllers (or equivalent) to conduct data protection impact assessments to determine whether processing risks associated with profiling may result in unfair or disparate impact to consumers. In all three cases, yet-to-be promulgated implementing regulations may provide businesses (and consumers) with additional guidance regarding operationalizing automated decision-making requests up until the laws' effective dates (January 2023 for Virginia and California, July 2023 for Colorado).

Proliferating use of AI Systems has dramatically increased the scale, scope, and frequency of processing of personal information, which has led to an accompanying increase in regulatory scrutiny to ensure that harms to individuals are minimized. Businesses that utilize AI Systems should adopt a comprehensive governance approach to comply with both the complementary and divergent aspects of the U.S. and EU approaches to the protection of individual rights. Although laws governing the use of AI Systems remain in flux on both sides of the Atlantic, businesses that utilize AI in their business practices should consider asking themselves the following questions:

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

