The Problem with AI Licensing & an FDA for Algorithms

Posted: June 10, 2023 at 8:22 pm

Last year, we released a study for the Federalist Society predicting "The Coming Onslaught of Algorithmic Fairness Regulations." That onslaught has now arrived. Interest in artificial intelligence (AI) and its regulation has exploded at all levels of government, and now some policymakers are floating the idea of licensing powerful AI systems and perhaps creating a new FDA for algorithms, complete with a pre-market approval regime for new AI applications. Other proposals are on the table, including transparency mandates requiring government-approved AI impact statements or audits, nutrition labels for algorithmic applications, expanded liability for AI developers, and perhaps even a new global regulatory body to oversee AI development.

It's a dangerous regulatory recipe for technological stagnation, one that threatens to derail America's ability to lead the Computational Revolution and to build on the success the nation has enjoyed in the digital economy over the past quarter century.

The Coming Avalanche of AI Regulation

The Biden Administration set a dour tone for AI policy with the release last October of its 73-page Blueprint for an AI Bill of Rights. Although touted as a voluntary framework, this Bill of Rights is more like a bill of regulations. The document mostly focused on worst-case scenarios that might flow from the expanded development of AI, machine learning, and robotics. On May 23, the White House announced a new Request for Information on national priorities for mitigating AI risks.

The Department of Commerce also recently launched a proceeding on AI accountability policy, teasing the idea of algorithmic impact assessments and AI audits as a new governance solution. Meanwhile, in a series of recent blog posts, the Federal Trade Commission has been hinting that it might take some sort of action on AI issues, and the Equal Employment Opportunity Commission last week announced new guidance on AI and employment issues. At the state and local level, over 80 bills are pending or have been enacted to regulate or study AI issues in some fashion.

In Congress, Senate Majority Leader Chuck Schumer (D-N.Y.) is readying a new law requiring responsible AI, which is likely to include some sort of AI transparency or explainability mandate. In the last session of Congress, the Algorithmic Accountability Act of 2022 was proposed, which would have required that AI developers perform impact assessments and file them with a new Bureau of Technology inside the FTC.

On May 16, the U.S. Senate Judiciary Committee held a hearing on "Oversight of A.I.: Rules for Artificial Intelligence." Senators and witnesses expressed a variety of fears about how AI could lead to disinformation, discrimination, job loss, safety issues, intellectual property problems, and so-called existential risks.

The hearing was memorable for how chummy OpenAI CEO Sam Altman was with members of the committee. Many members openly gushed about how much they appreciated the willingness of Altman and other witnesses to preemptively call for AI regulation. In fact, Sen. Dick Durbin used the term "historic" to describe the way tech firms were coming in and asking for regulation. Durbin said AI firms were telling him and other members, "Stop me before I innovate again!", which gave him great joy, and he said that the only thing that mattered now was how to achieve this.

Many regulatory ideas were floated by Senators and embraced at least in part by the witnesses, including a formal licensing regime for powerful AI systems and a new federal bureaucracy to enforce it.

The Problem with a New AI Regulator

Is another regulatory agency the answer? It's not as though America lacks the capacity to address artificial intelligence developments. The federal government has 2.1 million civilian workers, 15 cabinet agencies, 50 independent federal commissions, and over 430 federal departments altogether. Many of these bodies are already contemplating how AI touches their field. Regulatory agencies like the National Highway Traffic Safety Administration, the Food and Drug Administration, and the Consumer Product Safety Commission also have broad oversight and recall authority, allowing them to remove defective or unsafe products from the market. Consumer protection agencies like the Federal Trade Commission and comparable state offices will also police markets for unfair and deceptive algorithmic practices.

But now some policymakers and advocates want to add yet another federal bureaucracy. The idea of a new digital technology regulator has been proposed before. In fact, the idea was something of a fad in 2019 and 2020, at the peak of political outrage over social media. One of us wrote a report chapter analyzing and addressing the most prominent of the digital tech regulation proposals. That same analysis applies to more recent calls for an AI regulator, especially since at least one of the recent legislative proposals is practically identical to those earlier proposals.

Creating a new regulatory agency for AI would be a dramatic change in the U.S. approach to technology regulation. The U.S. has never had a regulator for general purpose technologies such as software, computers, or consumer electronics. Instead, governance over these technologies has been through a mix of common law, consumer protection standards, application-specific regulation (such as health care devices and transportation), and market competition.

There are good reasons why we haven't established a general purpose technology regulator, and those reasons extend to an AI regulator. Any proposal for a new regulatory agency for AI faces two substantial challenges: identifying the area of expertise that would justify a separate agency, and avoiding regulatory capture.

What Expertise? The generally accepted reason for creating a new agency is division of labor by expertise: in a word, specialization. To justify a new agency, then, one must identify an unsatisfied need for unique expertise. An agency has no comparative advantage over Congress if the knowledge to solve the problem is widely available or easily accessible. On the other hand, assigning unrelated problems requiring different expertise to the same agency is inefficient; it would be better to delegate such issues to different agencies that already possess the relevant expertise.

When it comes to AI, there is a common core of technical knowledge. But AI is a general purpose form of computation. The applications span every industry. The risk profiles of applications in, say, transportation or policing are quite different from the risk profiles in, say, music or gaming. While there may be some advantage in collecting the technical expertise in one place, the policy expertise to judge whether and how different uses of AI should be regulated gains little or nothing from being consolidated. In fact, the relevant policy expertise on various applications already resides in dozens of existing agencies.

Another way to say this is that an agency with jurisdiction over all uses of AI would be an economy-wide regulator. The result would not be a specialized agency to supplement Congress, but a shadow legislator that would replace Congress (as well as parts of dozens of other agencies).

Risk of Regulatory Capture. All agencies tend toward regulatory capture, where the agency serves the interests of the regulated parties instead of the public. But industry-specific rulemaking regulators have the highest risk of regulatory capture, in part because the agency and the industry have a shared interest in not being disrupted by new developments. In the fast-paced and highly innovative field of AI, incumbents who help develop the initial regulatory approach would benefit from raising rivals' regulatory costs. This could stifle competition and innovation, potentially leaving the public worse off than if there were no dedicated AI regulatory agency at all.

A new AI-specific regulatory body would not be justified by specific expertise, and the risk of regulatory capture would be high. There is no specific policy expertise that could be concentrated in a single agency without the agency becoming a miniature version of all of government. And doing so would most likely favor today's leading AI companies and constrain other models, such as open source.

The Transparency Trap

For these and other reasons, devising and funding a new federal AI agency would be contentious once Congress started negotiating details. In the short term, therefore, it is more likely that policymakers will push for some sort of transparency regulatory regime for AI. The goal would be to make algorithms more explainable by requiring the revelation of information about the data powering them or the specific developer preferences regarding how tools and applications are tailored. This would be accomplished through nutrition labels for AI, mandated impact assessments prior to product release, or audits after the fact.

But explainability is easier in theory than in practice. Practically speaking, we know that transparency mandates around privacy and even traditional food nutrition labels have little impact on consumer behavior. And AI has the additional difficulty of figuring out what exactly can be disclosed accurately. "Even the humans who train deep networks generally cannot look under the hood and provide explanations for the decisions their networks make," notes Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans. This confusion would be magnified if policymakers enforce AI transparency through mandated AI audits and impact assessments from those who develop or deploy algorithmic systems.

Companies are motivated to produce useful and safe services that their users desire. Industry best practices, audits, and impact assessments can play a useful role in the market process for AI companies, as they already do for financial practices, workplace safety, supply chain issues, and more.

What we ought to avoid is a convoluted, European-style top-down regulatory compliance regime, the kind already enshrined in the E.U.'s forthcoming AI Act, which includes costly requirements for prior conformity assessments for many algorithmic services. Such approaches fail for a number of reasons:

Algorithmic auditing is inherently subjective. Auditing algorithms is not like auditing an accounting ledger, where the numbers either do or do not add up. Companies, regulators, and users can have differing value preferences. Algorithms have to make express or implied tradeoffs between privacy, safety, security, objectivity, accuracy, and other values in a given system. There is no scientifically correct answer to the question of how to rank these values.

Rapid iteration and evolution. AI systems are being shipped and updated on a weekly or even daily basis. Requiring formal signoff on audits or assessments, many of which would be obsolete before they were completed, would slow the iteration cycle. And converting audits into a formal regulatory process would create several veto points that opponents of AI could use to slow progress in the field. AI developers would likely look to innovate in other jurisdictions if auditing or impact assessments became a bureaucratic and highly convoluted compliance nightmare.

Finally, legislatively mandated algorithmic auditing could also give rise to the problem of significant political meddling in speech platforms powered by algorithms, which could have serious free speech implications. If code is speech, then algorithms are speech too.

More Constructive Approaches

Rather than licensing AI development through a new federal agency, there is a better way.

First, politicians and regulators ought to drill down. Policymakers should understand that AI isn't a singular, overarching technology, but a diverse range of technologies with different applications. Each specific area of application of AI should be assessed for potential benefits and risks. This should involve a detailed examination of how AI is used, who is affected by these uses, and what outcomes might be expected. A balance should be sought to maximize benefits while minimizing risks.

An important part of evaluating a specific use is understanding the role markets, reputation, and consumer demand play in aligning each use with the public interest. Each area of AI application could have unique market pressures and mechanisms for dealing with that pressure, such as user education, private codes of conduct, and other soft law mechanisms. These established practices could obviate the need for regulation or help identify where gaps remain.

After assessing the various AI applications and market conditions, regulators should prioritize areas where high risks are not effectively addressed by norms or by existing regulatory bodies such as the Department of Transportation or Food and Drug Administration. This prioritization would ensure that the most urgent and potentially harmful areas receive adequate regulatory attention. In addressing these gaps, policymakers should look first to how to supplement existing agencies with experience in the industry area where AI is being applied.

We do not need a new agency to govern AI. We need a better, more detailed understanding of the opportunities and risks of specific applications of AI. Policymakers should take the time to develop this understanding before jumping to create a whole new agency. There is much to be done to ensure the benefits and minimize the risks of AI, and there is no silver bullet. Instead, policymakers should gird themselves for a long process of investigating and addressing the issues raised by specific applications of AI. It's not as flashy as a new agency, but it's far more likely to address the concerns without killing the beneficial uses.

Note from the Editor: The Federalist Society takes no positions on particular legal and public policy matters. Any expressions of opinion are those of the author. To join the debate, please email us at info@fedsoc.org.
