Professor Warns of "Nightmare" Bots That Prey on Vulnerable People – Futurism

Posted: May 4, 2021 at 8:15 pm

Imagine that you make a new friend on Twitter. Their pithy statements stop you mid-scroll, and pretty soon you find yourself sliding into their DMs.

You exchange a few messages. You favorite each other's tweets. If they need a hand on GoFundMe, you help out.

Now imagine how you'd feel if you found out your friend didn't really exist. Their profile turns out to be a Frankensteinian mashup of verbiage dreamed up by the powerful language generator GPT-3 and a face born from a generative adversarial network, perhaps with a deepfaked video clip thrown in here and there. How would it affect you to learn that you had become emotionally attached to an algorithm? And what if that persona was designed to manipulate you, influencing your personal, financial, or political decisions like a garden-variety scammer or grifter?

It might sound far-fetched, but people have been fooled by computers masquerading as human since as far back as 1966, when MIT computer scientist Joseph Weizenbaum created the ELIZA program. ELIZA was built to simulate a psychotherapist by parroting people's statements back to them in the form of questions. Weizenbaum was unsettled by how seriously users reacted to it; famously, his secretary asked him to leave the room while she was talking to it. He ultimately became a critic of artificial intelligence.
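To see how simple the underlying trick was, here is a minimal, hypothetical sketch of an ELIZA-style responder in Python. The rules below are illustrative, not Weizenbaum's: his program was written in MAD-SLIP, and its DOCTOR script was far more elaborate. The idea is just to match a keyword pattern and reflect the user's own words back as a question.

```python
import re

# A few illustrative pattern -> response templates in the spirit of ELIZA.
# (Hypothetical rules for demonstration only.)
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

# Swap first person for second person so reflections read naturally.
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement: str) -> str:
    text = statement.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the classic deflection when nothing matches

print(respond("I feel ignored by my friends."))
# -> Why do you feel ignored by your friends?
```

A few dozen rules like these were enough to convince some 1960s users they were talking to a therapist, which is what unsettled Weizenbaum.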

Now, the tech is closer than ever to creating believable ersatz online people. Simon DeDeo, an assistant professor at Carnegie Mellon University and external faculty at the Santa Fe Institute, tweeted last summer that his "current nightmare application" is "blending GPT-3, facial GANs, and voice synthesis to make synthetic ELIZAs that drive vulnerable people (literally) crazy."

I recently asked DeDeo how far he thought we were from his nightmare becoming a technological reality.

"I think it's already happened," he said.

We spoke shortly after the internet had gone into a frenzy over a series of viral TikTok videos that appeared to show Tom Cruise doing random activities like magic tricks and golfing, but were really deepfakes made by a VFX specialist and a Cruise impersonator. The videos are impressive. But, like their creator, DeDeo doesn't believe the Cruise fakes are a cause for alarm.

Preventing the next fake celebrity TikTok video, he said, is not the concern we should focus on. Rather, he said, the Cruise deepfake is "revealing something that's been sitting there for a long time."

He likened the coverage of the TikTok fakes to the way the media reacts to a plane crash, when driving is statistically much more dangerous. In the case of artificial intelligence, as with transportation, "certain dramatic events draw our attention, but we're actually surrounded by these much smaller-scale, less salient events constantly."

In fact, he's not terribly worried about videos at all. In his nightmare scenario, he's much more concerned with how GPT-3 can be used to generate language that sounds realistic.

"I can con you without needing to fake a video," he said. "And the way I con you is not by tricking your visual system, which the video deepfakes do. I con you by tricking your rational system. I con you at a much higher level in your social cognition."

"Social" is the key word here, because the omnipresent, small-scale fakes DeDeo is talking about thrive on social media. In a way, that's all social media is; DeDeo frequently referred to the interactions we have on sites like Facebook as "cyborgian."

Think about conspiracy theories like QAnon, which thrive on social media in a way they probably couldn't in person. People who get sucked into these kinds of communities are constantly bolstered by likes and comments on their social media posts, even as they alienate their real-life friends and family, as in the case of Valerie Gilbert, a self-described QAnon "meme queen."

The internet users who form the QAnon community are, as far as we know, actual humans. But their actions are filtered through social media's algorithms to make a community that isn't quite organic, and which could allow dark new entities like DeDeo's nightmare to thrive.

"It's like Soylent Green: QAnon is made of people," DeDeo said. "But those bizarrely artificial societies are sustained by algorithms that fake sociality, that fake friendship, that fake prestige, that fake all of these things that are just kind of basic to our cognition."

And algorithms that reinforce potentially toxic or dangerous posts, like QAnon memes, aren't likely to be changed by Facebook any time soon, because the increased engagement means people spend more time on the website.

This artificial interaction isn't limited to conspiracy theorists. Another example DeDeo pointed to is the odd behavior that can be observed among the users of small Twitter accounts when a tweet suddenly goes viral. "You can watch a person be literally driven temporarily insane by what's happening to them," he said, comparing it to a deepfake experience of becoming "emperor of the world."

These are some of the more visible examples, but the cyborgian nature of social media is something that affects every user. Scrolling through your timeline is engaging with an algorithm; the updates you see are a constructed reality. Sure, there are humans behind the accounts, but the nature of your interaction with them has been intrinsically changed by a computer. "It's like everybody's voices coming out slightly distorted and at slightly the wrong volume," said DeDeo.

"The Facebook account attached to your friend from high school?" he asked. "In one sense, your friend from high school's operating that account, but in another really important sense he's not. Because what Facebook chooses to show you of him, the interactions it chooses to show you that he has, and the ones it chooses to magnify or diminish, create a totally different person."

This may sound relatively harmless. So what if Facebook is highlighting and promoting some detail of your friend's life that you might never have noticed or discussed in a personal interaction? But it's the phenomenon's very innocuousness, according to DeDeo, that makes it dangerous.

Marketers talk about the phenomenon of "social proof," in which humans are more likely to do something or buy a product if they see that their social circle also does that thing or uses that product. The curation of social media content, possibly with a few small tweaks or additions here and there, could enable organizations to easily prey on that part of our psychology and influence our behavior. A person's feed could be altered to make them think that doing something is normal, good, or smart.

"The person who's trying to profit doesn't have to make this truly pure thing where Tom Cruise says something he would never possibly say," explained DeDeo. "The profitable version is cyborgian in the sense that, huh, it kind of looks like what my friend would say. In fact, it's pretty close. And in fact most of what that thing just fed me is what he said."

This kind of cyborgian deepfake isn't necessarily limited to text posts. What if Facebook, for example, put in fake likes or hearts?

"Almost certainly they've tried it," DeDeo said. "There's no guarantee that if somebody hits the like button they'll see it, and there's no guarantee that if the like button is activated by a person that that person really truly activated it."

In addition to enabling manipulation of social media users, it's easy to see how these kinds of fakes could lead people to doubt reality. In fact, this has already happened, and it has been a concern of people following AI development for years.

Another major malicious use of this technology, one that's vastly underreported compared to the hypothetical political impact of deepfakes, is harassing and demeaning women. Fake pornography of celebrities is prolific, but deepfake porn of regular people is also a massive problem.

A visual threat intelligence company called Sensity discovered a porn bot embedded in the messaging app Telegram that allowed users to create deepfake nudes of women from just one profile picture; by the time it was discovered in late 2020, the bot had been used to generate fake porn of 680,000 women. Vox reported on a 2019 study that showed a stunning 96 percent of existing deepfake videos were pornographic and nonconsensual. An AI app that has since been taken offline allowed users to undress women. And in March, the mother of a teenage cheerleader was charged with harassment for creating incriminating deepfake images of other girls in the cheerleading program, including nude photos, and telling them to die by suicide.

So what now? How do we learn to recognize and discount deepfakes but keep a grasp on reality?

"This is a big question in social science," said DeDeo, who believes it boils down to a notion of civics. He recalls being taught in school which institutions to trust and how to be a good consumer of news media. "But we have no modern-day civics that helps, say, a 15- or 16- or 17-year-old reason about this crazy world," he said. "The landscape is totally different."

"What kind of curriculum would you develop to help people be adults?" DeDeo wondered. He mentioned the cyborgian deepfakes again, lamenting that people are vulnerable to them because they hijack parts of our reasoning that are actually good in many situations. Trusting a person to tell you the truth, or saying, "Oh, I don't quite understand this; I should figure this out," are great instincts in certain contexts. But those instincts could lead you to treat a GPT-3 online date as if they're a human being.

I asked if he was advocating for a kind of new civics education in our K-12 curriculum, and while he said that would be ideal, DeDeo added that we as adults "don't even know what the hell is going on, either."

"There are no adults in the room," he said. "So I think the conversation is partly, 'What do we teach a 12-year-old?' But it's also a philosophical problem for 21-year-olds and 31-year-olds and 41-year-olds."

But DeDeo is optimistic about solving that problem, pointing to the success of the university system and comparing the internet to "a gigantic, anarchic community college."

"Universities work; they really help people think better and enable science and philosophy to thrive," he argued. One thing that enables their success is the idea of membership and belonging, and that being a member of the system requires you to submit to certain obligations: for example, epistemic humility, which requires you to go through the peer review process even if you really, really think you're right.

"The thing we know doesn't work is Facebook," he said, describing it as "this kind of massively authoritarian state where you have no sort of freedom of assembly within it. So it's probably too large, and the whole thing is designed to prevent people from self-governing."

Twitter, he said, is "a little bit better." But for some of the best examples, DeDeo suggests looking at Reddit and Wikipedia.

"Reddit enables a great deal of institution building within a subreddit. There's very creative stuff that happens." And, he added, Reddit is surviving, even thriving; its growth hasn't leveled off. Wikipedia, at least early on, was also a great example of a community creating institutional structures that enabled it to thrive.

These institutions help detect fake content because gaining social status on Wikipedia or a subreddit is "like the world's greatest CAPTCHA," explained DeDeo. "You have to do things that are so fundamentally human."

Something like this played out on Twitter recently, when a flood of apparently fake Amazon employee accounts were created and began tweeting praise of Amazon and making anti-union comments. Twitter users quickly caught on to telltale signs of computer-generated faces, such as strange effects around a person's hair or glasses whose frames had an inconsistent style. Many at first assumed the accounts were created by Amazon, but further investigation cast doubt on that, and Amazon told a New York Times reporter that it was not affiliated with the accounts. Twitter has since suspended the accounts for misleading others about the identity of the account holders.

Amid all the concerns about the problems that this technology could cause, DeDeo said he sees at least one upside: a more critical examination of human-generated content that, like bot-generated content, has no real depth.

"There's a lot of not-thinking that human beings do," said DeDeo, who, as a professor, has read his share of formulaic essays. "There's a lot of things people say that sound smart but actually have zero content. There's a lot of articles that people write that are meaningless. GPT-3 can imitate those to perfection."

By exposing that, he said, clever algorithms like GPT-3 "make us aware of the extent to which things that look like there's a brain behind them have no brain. It disenchants us."

As machines learn to do more things, in other words, it forces us to do some deep thinking about what makes us different: what makes us human.

"We might think, 'Well, this is this awful dystopia where machines can write ad copy,'" said DeDeo. "Well, maybe it turns out that writing ad copy is not what it means to be human."

More on AI: You Have No Idea What Artificial Intelligence Really Does

