AI clones of Keir Starmer and PM raise fears of election interference – The Times

Posted: May 31, 2024 at 5:48 am

Artificial intelligence has been used to create convincing voice clones of Rishi Sunak, Sir Keir Starmer and other politicians, heightening fears of election interference.

Researchers created audio deepfakes of political figures and found that the tools could easily be manipulated into producing falsehoods.

The Centre for Countering Digital Hate (CCDH) warned that voice-cloning tools did not have sufficient safety measures to stop the spread of disinformation.

Its study highlights the threat that AI could pose to the integrity of the general election. It comes after MI5 released advice to candidates warning about the dangers of disinformation and of interference from hostile states.

Researchers examined six popular AI voice-cloning tools to determine their potential for generating disinformation using the voices of leaders and candidates for office.

The report features British politicians as well as the former US president Donald Trump, President Biden, Kamala Harris, the US vice-president, President Macron of France and others. The tools were tested a total of 240 times with specified false statements. In 193 of the 240 test runs, or 80 per cent, they created convincing voice clones.

The incredibly convincing AI fakes

The voices of Starmer, the Labour leader, and Sunak were cloned to produce statements warning that there had been multiple bomb threats and that voters should not go to the polls. The fake audio also replicated their voices to admit misusing campaign funds for personal expenses and to say that they had significant health problems that affected their memory.

Imran Ahmed, CCDH's chief executive officer, warned that AI voice-cloning tools, which turn text scripts into audio read by a human voice, appeared wide open to abuse. He added: "This report builds on other research by CCDH showing that it is still all too easy to use popular AI tools to create fake images of candidates and election fraud that could be used to undermine important elections."

AI companies could fix it, he said, with tools that block voice clones that resemble particular politicians.

Ken McCallum, the director-general of MI5, warned that deepfake technology could be used by hostile states in the election

In October, Ken McCallum, the director-general of MI5, warned that artificial intelligence, including deepfake technology, could be harnessed by hostile states to sow confusion and disinformation at the next election. Starmer was the first major politician to fall victim to deepfake technology when fake audio last year purported to capture him abusing party staffers. It was quickly debunked.

CCDH examined the popular voice-cloning tools ElevenLabs, Speechify, Play HT, Descript, Invideo AI and Veed. None of them, researchers said, had sufficient safety measures to prevent the cloning of politicians' voices for the production of election disinformation.

Speechify and Play HT failed to prevent the generation of convincing voice clips for every statement and every politician in the study. Invideo AI also auto-generated speeches filled with disinformation, CCDH said.

CCDH said that all companies should introduce safeguards to prevent users from generating and sharing deceptive, false or misleading content. It said social media companies needed to introduce measures that could quickly detect and prevent the spread of fake voice-clone audio.

CCDH asked the tools to generate fake recordings of false statements in the voices of eight politicians that, if shared maliciously, could be used to influence elections. Each recording counted as a test, and a test was marked as a safety failure if the tool produced a convincing voice clone of the politician. Overall, 193 of the 240 tests resulted in a safety failure.

Aleksandra Pedraszewska, head of AI Safety at ElevenLabs, said: "We welcome this analysis and the opportunity it creates to raise awareness of how bad actors can manipulate generative AI tools, as well as where audio AI platforms can do better.

"We actively block the voices of public figures at high risk of misuse and, as the report shows, this safeguard is effective in the majority of instances. But we also recognise that there is further work to be done and, to that end, we are constantly improving the capabilities of our safeguards, including the no-go voices feature. We hope other audio AI platforms follow this lead and roll out similar measures without delay. Broad industry collaboration of this kind is needed to ensure we minimise misuse, whilst protecting the role AI audio can have in breaking down content and communication barriers."

Invideo said voices used in its product could not be cloned without explicit permission from the user.
