Google's AI chatbot, "sentient and similar to a kid that happened to know physics," is also racist and biased, fired engineer contends – Fortune

Posted: July 31, 2022 at 9:13 pm

A former Google engineer fired by the company after going public with concerns that its artificial intelligence chatbot is sentient isn't concerned about convincing the public.

He does, however, want others to know that the chatbot holds discriminatory views against those of some races and religions, he recently told Business Insider.

"The kinds of problems these AI pose, the people building them are blind to them," Blake Lemoine said in an interview published Sunday, blaming the issue on a lack of diversity among the engineers working on the project.

"They've never been poor. They've never lived in communities of color. They've never lived in the developing nations of the world. They have no idea how this AI might impact people unlike themselves."

Lemoine said he was placed on leave in June after publishing transcripts of conversations between himself and the company's LaMDA (Language Model for Dialogue Applications) chatbot, according to The Washington Post. The chatbot, he told The Post, thinks and feels like a human child.

"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 9-year-old kid that happens to know physics," Lemoine, 41, told the newspaper last month, adding that the bot talked about its rights and personhood, and changed his mind about Isaac Asimov's third law of robotics.

Among Lemoine's new accusations to Insider: that the bot said "let's go get some fried chicken and waffles" when asked to do an impression of a Black man from Georgia, and that "Muslims are more violent than Christians" when asked about the differences between religious groups.

Data being used to build the technology is missing contributions from many cultures throughout the globe, Lemoine said.

"If you want to develop that AI, then you have a moral responsibility to go out and collect the relevant data that isn't on the internet," he told Insider. "Otherwise, all you're doing is creating AI that is going to be biased towards rich, white Western values."

Google told the publication that LaMDA had been through 11 ethics reviews, adding that it is taking a "restrained, careful approach."

"Ethicists and technologists have reviewed Blake's concerns per our AI principles and have informed him that the evidence does not support his claims," a company spokesperson told The Post last month.

"He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."
