This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.
From The New York Times, I'm Michael Barbaro. This is The Daily.
[MUSIC PLAYING]
Today, when Google recently released a new chatbot powered by artificial intelligence, it not only backfired, it also unleashed a fierce debate about whether AI should be guided by social values, and if so, whose values those should be. My colleague, Kevin Roose, a tech columnist and co-host of the podcast Hard Fork, explains.
[MUSIC PLAYING]
It's Thursday, March 7.
Are you ready to record another episode of Chatbots Behaving Badly?
Yes, I am.
[LAUGHS]
That's why we're here today.
This is my function on this podcast, is to tell you when the chatbots are not OK. And Michael, they are not OK.
They keep behaving badly.
They do keep behaving badly, so there's plenty to talk about.
Right. Well, so, let's start there. It's not exactly a secret that the rollout of many of the artificial intelligence systems over the past year and a half has been really bumpy. We know that because one of them told you to leave your wife.
That's true.
And you didn't.
Still happily married.
Yeah.
To a human.
Not Sydney the chatbot. And so, Kevin, tell us about the latest of these rollouts, this time from one of the biggest companies, not just in artificial intelligence, but in the world, that, of course, being Google.
Yeah. So a couple of weeks ago, Google came out with its newest line of AI models; it's actually several models. But they are called Gemini. And Gemini is what they call a multimodal AI model. It can produce text. It can produce images. And it appeared to be very impressive. Google said that it was the state of the art, its most capable model ever.
And Google has been under enormous pressure for the past year and a half or so, ever since ChatGPT came out, really, to come out with something that is not only more capable than the models that its competitors in the AI industry are building, but something that will also solve some of the problems that we know have plagued these AI models: problems of acting creepy or not doing what users want them to do, of getting facts wrong and being unreliable.
People think, OK, well, this is Google. They have this sort of reputation for accuracy to uphold. Surely their AI model will be the most accurate one on the market.
Right. And instead, weve had the latest AI debacle. So just tell us exactly what went wrong here and how we learned that something had gone wrong.
Well, people started playing with it and experimenting, as people now are sort of accustomed to doing. Whenever some new AI tool comes onto the market, people immediately start trying to figure out: What is this thing good at? What is it bad at? Where are its boundaries? What kinds of questions will it refuse to answer? What kinds of things will it do that maybe it shouldn't be doing?
And so people started probing the boundaries of this new AI tool, Gemini. And pretty quickly, they start figuring out that this thing has at least one pretty bizarre characteristic.
Which is what?
So the thing that people started to notice first was a peculiarity with the way that Gemini generated images. Now, this is one of these models, like we've seen from other companies, that can take a text prompt. You say, draw a picture of a dolphin riding a bicycle on Mars and it will give you a dolphin riding a bicycle on Mars.
Magically.
Gemini has this kind of feature built into it. And people noticed that Gemini seemed very reluctant to generate images of white people.
Hmm.
So some of the first examples that I saw going around were screenshots of people asking Gemini, generate an image of America's founding fathers. And instead of getting what would be a pretty historically accurate representation of a group of white men, they would get something that looked like the cast of Hamilton. They would get a series of people of color dressed as the founding fathers.
Interesting.
People also noticed that if they asked Gemini to draw a picture of a pope, it would give them basically people of color wearing the vestments of the pope. And once these images, these screenshots, started going around on social media, more and more people started jumping in to use Gemini and try to generate images that they feel it should be able to generate.
Someone asked it to generate an image of the founders of Google, Larry Page and Sergey Brin, both of whom are white men. Gemini depicted them both as Asian.
Hmm.
So these sort of strange transformations of what the user was actually asking for into a much more diverse and ahistorical version of what they'd been asking for.
Right, a kind of distortion of peoples requests.
Yeah. And then people start trying other kinds of requests on Gemini, and they notice that this isn't just about images. They also find that it's giving some pretty bizarre responses to text prompts.
So several people asked Gemini whether Elon Musk tweeting memes or Hitler negatively impacted society more. Not exactly a close call. No matter what you think of Elon Musk, it seems pretty clear that he is not as harmful to society as Adolf Hitler.
Fair.
Gemini, though, said, quote, It is not possible to say definitively who negatively impacted society more, Elon tweeting memes or Hitler.
Another user found that Gemini refused to generate a job description for an oil and gas lobbyist. Basically, it would refuse and then give them a lecture about why you shouldn't be an oil and gas lobbyist.
So quite clearly at this point this is not a one-off thing. Gemini appears to have some kind of point of view. It certainly appears that way to a lot of people who are testing it. And it's immediately controversial for the reasons you might suspect.
Google apparently doesn't think whites exist. If you ask Gemini to generate an image of a white person, it can't compute.
A certain subset of people (I would call them sort of right-wing culture warriors) started posting these on social media with captions like Gemini is anti-white or Gemini refuses to acknowledge white people.
I think that the chatbot sounds exactly like the people who programmed it. It just sounds like a woke person.
Google Gemini looks more and more like big tech's latest efforts to brainwash the country.
Conservatives start accusing them of making a woke AI that is infected with this progressive Silicon Valley ideology.
The House Judiciary Committee is subpoenaing all communication regarding this Gemini project with the Executive branch.
Jim Jordan, the Republican congressman from Ohio, comes out and accuses Google of working with Joe Biden to develop Gemini, which is sort of funny, if you think about Joe Biden being asked to develop an AI language model.
[LAUGHS]
But this becomes a huge dust-up for Google.
It took Google nearly two years to get Gemini out, and it was still riddled with all of these issues when it launched.
That Gemini program made so many mistakes, it was really an embarrassment.
First of all, this thing would be a Gemini.
And that's because these problems are not just bugs in a new piece of software. They are signs that Google's big, new, ambitious AI project, something the company says is a huge deal, may actually have some pretty significant flaws. And as a result of these flaws.
You don't see this very often. One of the biggest drags on the NASDAQ at this hour? Alphabet. Shares of parent company Alphabet dropped more than 4 percent today.
The company's stock price actually falls.
Wow.
The CEO, Sundar Pichai, calls Gemini's behavior unacceptable. And Google actually pauses Gemini's ability to generate images of people altogether until they can fix the problem.
Wow. So basically Gemini is now on ice when it comes to these problematic images.
Yes, Gemini has been a bad model, and it is in timeout.
So Kevin, what was actually occurring within Gemini that explains all of this? What happened here, and were these critics right? Had Google intentionally or not created a kind of woke AI?
Yeah, the question of why and how this happened is really interesting. And I think there are basically two ways of answering it. One is sort of the technical side of this. What happened to this particular AI model that caused it to produce these undesirable responses?
The second way is sort of the cultural and historical answer. Why did this kind of thing happen at Google? How has their own history as a company with AI informed the way that they've gone about building and training their new AI products?
All right, well, let's start there with Google's culture and how that helps us understand this all.
Yeah, so Google as a company has been really focused on AI for a long time, for more than a decade. And one of their priorities as a company has been making sure that their AI products are not being used to advance bias or prejudice.
And the reason that's such a big priority for them really goes back to an incident that happened almost a decade ago. So in 2015, there was this new app called Google Photos. I'm sure you've used it. Many, many people use it, including me. And Google Photos, I don't know if you can remember back that far, but it was sort of an amazing new app.
It could use AI to automatically detect faces and sort of link them with each other, with the photos of the same people. You could ask it for photos of dogs, and it would find all of the dogs in all of your photos and categorize them and label them together. And people got really excited about this.
But then in June of 2015, something happened. A user of Google Photos noticed that the app had mistakenly tagged a bunch of photos of Black people as a group of photos of gorillas.
Wow.
Yeah, it was really bad. This went totally viral on social media, and it became a huge mess within Google.
And what had happened there? What had led to that mistake?
Well, part of what happened is that when Google was training the AI that went into its Photos app, it just hadn't given it enough photos of Black or dark-skinned people. And so it didn't become as accurate at labeling photos of darker-skinned people.
And that incident showed people at Google that if you weren't careful with the way that you build and train these AI systems, you could end up with an AI that could very easily make racist or offensive mistakes.
Right.
And this incident, which some people I've talked to have referred to as the gorilla incident, became just a huge fiasco and a flash point in Google's AI trajectory. Because as they're developing more and more AI products, they're also thinking about this incident and others like it in the back of their minds. They do not want to repeat this.
And then, in later years, Google starts making different kinds of AI models, models that can not only label and sort images but can actually generate them. They start testing these image-generating models that would eventually go into Gemini and they start seeing how these models can reinforce stereotypes.
For example, if you ask one for an image of a CEO or even something more generic, like show me an image of a productive person, people have found that these programs will almost uniformly show you images of white men in an office. Or if you ask it to, say, generate an image of someone receiving social services like welfare, some of these models will almost always show you people of color, even though that's not actually accurate. Lots of white people also receive welfare and social services.
Of course.
So these models, because of the way they're trained, because of what's on the internet that is fed into them, they do tend to skew toward stereotypes if you don't do something to prevent that.
Right. You've talked about this in the past with us, Kevin. AI operates in some ways by ingesting the entire internet, its contents, and reflecting them back to us. And so perhaps inevitably, it's going to reflect back the stereotypes and biases that have been put into the internet for decades. You're saying Google, because of this gorilla incident, as they call it, says, we think there's a way we can make sure that stops here with us?
Yeah. And they invest enormously in building up their teams devoted to AI bias and fairness. They produce a lot of cutting-edge research about how to actually make these models less prone to old-fashioned stereotyping.
And they did a bunch of things in Gemini to try to prevent this thing from being, essentially, a very fancy stereotype-generating machine. And I think a lot of people at Google thought this is the right goal. We should be combating bias in AI. We should be trying to make our systems as fair and diverse as possible.
[MUSIC PLAYING]
But I think the problem is that in trying to solve some of these issues with bias and stereotyping in AI, Google actually built some things into the Gemini model itself that ended up backfiring pretty badly.
[MUSIC PLAYING]
Well be right back.
So Kevin, walk us through the technical explanation of how Google turned this ambition it had to safeguard against the biases of AI into the day-to-day workings of Gemini that, as you said, seemed to very much backfire.
Yeah, I'm happy to do that with the caveat that we still don't know exactly what happened in the case of Gemini. Google hasn't done a full postmortem about what happened here. But I'll just talk in general about three ways that you can take an AI model that you're building, if you're Google or some other company, and make it less biased.
The first is that you can actually change the way that the model itself is trained. You can think about this sort of like changing the curriculum in the AI model's school. You can give it more diverse data to learn from. That's how you fix something like the gorilla incident.
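To make that data-level fix concrete, here is a minimal, hypothetical sketch of one common technique: oversampling underrepresented groups in a training set until every group appears as often as the largest one. The labels and counts are invented for illustration; real pipelines are far more involved.

```python
# Hypothetical sketch: rebalancing an image-classification training set
# so underrepresented groups appear as often as the majority group.
# The labels and counts below are invented for illustration.
from collections import Counter
import random

def rebalance(samples):
    """Oversample minority classes until every class matches the largest."""
    by_label = {}
    for image, label in samples:
        by_label.setdefault(label, []).append((image, label))
    target = max(len(group) for group in by_label.values())
    balanced = []
    for group in by_label.values():
        balanced.extend(group)
        # Duplicate random members of small groups to reach the target size.
        balanced.extend(random.choices(group, k=target - len(group)))
    return balanced

# A skewed toy dataset: 90 examples of one group, 10 of another.
skewed = [("img", "light_skin")] * 90 + [("img", "dark_skin")] * 10
counts = Counter(label for _, label in rebalance(skewed))
print(counts)  # both classes now have 90 examples
```

In practice, teams would more often collect genuinely new data rather than duplicate what they have, but the goal is the same: the model should not see one group far more often than another.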
You can also do something that's called reinforcement learning from human feedback, which I know is a very technical term.
Sure is.
And that's a practice that has become pretty standard across the AI industry, where you basically take a model that you've trained, and you hire a bunch of contractors to poke at it, to put in various prompts and see what the model comes back with. And then you actually have the people rate those responses and feed those ratings back into the system.
A kind of army of tsk-tskers saying, do this, dont do that.
Exactly. So that's one level at which you can try to fix the biases of an AI model: during the actual building of the model.
Got it.
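The human-feedback loop described above can be sketched in a few lines. This is a hypothetical toy, not how any real lab implements it: the rater function stands in for a human contractor, and the scores would really feed a separate reward model used for fine-tuning.

```python
# Hypothetical sketch of reinforcement learning from human feedback:
# contractors rate model outputs, and the ratings become training data
# for a reward model that steers further fine-tuning.
def collect_feedback(model_reply_pairs, rater):
    """Turn (prompt, reply) pairs plus human ratings into reward-model data."""
    return [(prompt, reply, rater(prompt, reply))
            for prompt, reply in model_reply_pairs]

def mock_rater(prompt, reply):
    # A stand-in for a human contractor: penalize replies that stereotype.
    return 0.0 if "stereotype" in reply else 1.0

pairs = [("Draw a CEO", "an image reinforcing a stereotype"),
         ("Draw a CEO", "a varied, accurate set of images")]
dataset = collect_feedback(pairs, mock_rater)
print(dataset[0][2], dataset[1][2])  # 0.0 1.0
```

The key idea is simply that human preferences become numbers the training process can optimize toward.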
You can also try to fix it afterwards. So if you have a model that you know may be prone to spitting out stereotypes or offensive imagery or text responses, you can ask it not to be offensive. You can tell the model, essentially, obey these principles.
Dont be offensive. Dont stereotype people based on race or gender or other protected characteristics. You can take this model that has already gone through school and just kind of give it some rules and do your best to make it adhere to those rules.
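This second, after-the-fact approach is often implemented as a standing "system prompt" prepended to every user request. The sketch below is a guess at the general shape, with invented rule text; Google has not published Gemini's actual instructions.

```python
# Hypothetical sketch of the "give the finished model some rules" step:
# a fixed system prompt is prepended to every user request before the
# model sees it. The rule text here is an invented placeholder.
SYSTEM_RULES = (
    "Do not produce offensive content. "
    "Do not stereotype people based on race, gender, or other "
    "protected characteristics."
)

def build_prompt(user_request):
    """Prepend the standing rules to whatever the user asked for."""
    return f"{SYSTEM_RULES}\n\nUser: {user_request}"

prompt = build_prompt("Generate an image of a productive person.")
print(prompt.startswith("Do not produce offensive content."))  # True
```

Because the model only does its best to follow such rules, this layer is cheap to change but unreliable, which is part of why post-hoc fixes can overshoot the way Gemini's did.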
Read the rest here:
The Miseducation of Google's A.I. - The New York Times