U.S. Senate panel weighs free speech and deep fakes in AI … – Minnesota Reformer

Posted: September 28, 2023 at 5:20 am

Artificial intelligence could be used to disrupt U.S. election campaigns, members of the U.S. Senate Committee on Rules and Administration said during a Wednesday hearing.

But the hearing showed that imposing laws and regulations on campaign content without violating constitutional rights to political speech will be difficult.

Elections pose a particular challenge for AI, an emerging technology with potential to affect many industries and issues, committee Chair Amy Klobuchar, a Minnesota Democrat, said. AI can make it easier to doctor photos and videos, creating fictional content that appears real to viewers.

Klobuchar called that untenable for democracy.

Klobuchar said the hearing underscored the need for Congress to impose guardrails for the use of AI in elections. Klobuchar is the lead sponsor of a bipartisan bill, with Republicans Josh Hawley of Missouri and Susan Collins of Maine and Democrat Chris Coons of Delaware, that would ban the use of AI to make deceptive campaign materials.

"With AI, the rampant disinformation we have seen in recent years will quickly grow in quantity and quality," she said. "We need guardrails to protect our elections."

But some Republicans on the panel, and two expert witnesses, also warned that regulating AI's use in elections would be difficult and perhaps unwise because of the potential impact on First Amendment-protected political speech.

"A law prohibiting AI-generated political speech would also sweep an enormous amount of protected and even valuable political discourse under its ambit," said Ari Cohn, free speech counsel for TechFreedom, a technology think tank.

Klobuchar twice cited the example of AI-generated deep-fake images, meaning wholly false images meant to look real, that appeared to show former President Donald Trump hugging Anthony Fauci. Fauci is the former director of the National Institute of Allergy and Infectious Diseases who is deeply unpopular with some sections of the Republican electorate because of his positions on COVID-19.

The images were used in a campaign ad by Florida Gov. Ron DeSantis, who, along with Trump, is running for the 2024 Republican nomination for president.

Trevor Potter, the former chair of the Federal Election Commission, testified that election laws are intended to help voters by requiring transparency about who pays for political speech and who is speaking. AI could upend those goals and make interference by foreign or domestic adversaries easier, he said.

"Unchecked, the deceptive use of AI could make it virtually impossible to determine who is truly speaking in a political communication, whether the message being communicated is authentic or even whether something being depicted actually happened," Potter said. "This could leave voters unable to meaningfully evaluate candidates and candidates unable to convey their desired message to voters, undermining our democracy."

Klobuchar asked the panel of five witnesses whether they agreed AI posed at least some risk to elections, and they appeared to affirm that it does.

Misinformation and disinformation in elections are a particular concern for communities of color, said Maya Wiley, CEO of the Leadership Conference on Civil and Human Rights.

Black communities and those whose first language is not English have been disproportionately targeted in recent elections, including material generated by Russian agents in 2016, she said.

Misinformation has been spread in campaigns without AI, said Neil Chilson, a researcher at the Center for Growth and Opportunity at Utah State University. Deception, not the technology, is the problem, he said.

"If the concern is with a certain type of outcome, let's focus on the outcome and not the tools used to create it," Chilson said in response to questioning from ranking Republican Deb Fischer of Nebraska.

Writing legislation narrowly enough to target deceptive uses of AI without interfering with common campaign practices would be difficult, Chilson said.

"I know we all use the term deep fake, but the line between deep fake and tweaks to make somebody look slightly younger in their ad is pretty blurry," Chilson said. "And drawing that line in legislation is very difficult."

If a federal law existed, especially with heavy penalties, the result would be to chill a lot of speech, he added.

U.S. Sen. Bill Hagerty, a Tennessee Republican, said he didn't trust the Biden administration and Congress to properly balance concerns about fraudulent material with speech rights and with fostering the emergence of AI, which has the potential for many positive uses in addition to possible nefarious ones.

While he saw issues with AI, he said, Congress should be careful in its approach.

"Congress and the Biden administration should not engage in heavy-handed regulation with uncertain impacts that I believe pose a great risk to limiting political speech," he said. "We shouldn't immediately indulge the impulse for government to just do something, as they say, before we fully understand the impacts of the emerging technology, especially when that something encroaches on political speech."

Responding to Hagerty, Klobuchar promoted her bill, which would ban outright fraud created by AI.

"That is untenable in a democracy," she said.

Minnesota Secretary of State Steve Simon testified that while it may be difficult for courts and lawmakers to determine what content crosses a line into fraud, there are ways to navigate those challenges.

"Sen. Hagerty is correct and right to point out that this is difficult and that Congress and any legislative body needs to get it right," he said. "But though the line-drawing exercise might be difficult, courts are equipped to draw that line."

Congress should require disclaimers for political ads that use AI, but such a requirement shouldn't replace the power to have content removed from television, radio and the internet if it is fraudulent, Klobuchar said.

She added that in addition to banning the most extreme fraud, her priorities in legislation that could see action this year would be to give the FEC more authority to regulate AI-generated content and to require disclaimers from platforms that carry political ads.
