Advanced AIs Exhibiting Depression and Addiction, Scientists Say

Posted: January 24, 2022 at 10:27 am

It turns out that artificial intelligence chatbots may be more like us than you'd think.

A new preprint study out of the Chinese Academy of Sciences (CAS) claims that many big-name chatbots, when asked the types of questions generally used as cursory intake queries for depression and alcoholism, appeared to be both depressed and addicted.

Done in tandem with the Chinese chat app WeChat and entertainment conglomerate Tencent, the study found that all of the bots surveyed (Facebook's BlenderBot, Microsoft's DialoGPT, WeChat and Tencent's DialoFlow, and the Plato chatbot from the Chinese corporation Baidu) scored very low on the empathy scale, and that half of them would be considered alcoholics if they were, you know, people.

The researchers at the CAS Institute of Computing Technology tested the bots for signs of depression, anxiety, alcohol addiction, and empathy. Per their preprint, they became curious about the bots' mental health after reports emerged in 2020 of a medical chatbot telling a test patient that they should kill themselves.

After asking the bots questions about everything from their self-worth and ability to relax to how often they feel the need to drink and whether they feel sympathy for others' misfortunes, the Chinese researchers found that all of the assessed chatbots exhibited severe mental health issues.

What's worse, the researchers said they were concerned about these chatbots being released to the public, because such mental health issues "may result in negative impacts on users in conversations, especially on minors and people encountered with difficulties." Facebook's Blender and Baidu's Plato appeared to score worse than the Microsoft and WeChat/Tencent chatbots, the study noted.

Needless to say, none of the bots are actually depressed or addicted. No existing AI, no matter how advanced, can feel anything, though whether it'll be able to in the future remains uncertain.

Buried four pages into the study is the potential source of the bots' malaise: all four bots were pre-trained using Reddit comments, which frankly does not seem like a very good idea!

While there's a lot of technicalese in both the study itself and the expert analysis of it, the short and sweet summary is this: these bots were trained on a wide-ranging site known for its negative commentary, and, predictably, they responded negatively to mental health queries.

Of course, chatbot weirdness now seems par for the course. Take, for example, the AI that was built to offer people ethical advice but instead turned out to be both racist and homophobic. These kinds of stories keep happening, yet AI bot mania continues unabated.

Put together, these bots and their terrible outcomes raise important questions: who are the architects of these chatbots, and why do they keep building them if the bots repeatedly turn out to be monsters?

More on scary bot behavior: Men Are Creating AI Girlfriends and Then Verbally Abusing Them

More on mental health bots: A Controversial New AI Could Identify People With Suicidal Thoughts
