The disinformation threat from text-generating AI – Axios

Posted: May 20, 2021 at 4:42 am

A new report lays out the ways that cutting-edge text-generating AI models could be used to aid disinformation campaigns.

Why it matters: In the wrong hands, text-generating systems could be used to scale up state-sponsored disinformation efforts, and humans would struggle to know when they're being lied to.

How it works: Text-generating models like OpenAI's leading GPT-3 are trained on vast volumes of internet data and learn to write eerily lifelike text from human prompts.

What they found: While "no currently existing autonomous system could replace the entirety of the IRA" (Russia's Internet Research Agency), algorithm-based tech paired with experienced human operators produces results that are nothing less than frightening.

What to watch: While OpenAI has tightly restricted access to GPT-3, Buchanan notes that it's "likely that open source versions of GPT-3 will eventually emerge, greatly complicating any efforts to lock the technology down."

The bottom line: The report's authors write that, like much of social media more broadly, systems like GPT-3 seem "more adept as fabulists than as staid truth-tellers."
