Unpacking AI: "an exponential disruption" with Kate Crawford: podcast and transcript – MSNBC

You might be feeling that artificial intelligence is starting to seem a bit like magic. Our guest this week points out that AI, once the subject of science fiction, has seen the biggest rise of any consumer technology in history and has outpaced the uptake of TikTok, Instagram and Facebook. As we see AI becoming more of an everyday tool, students are even using chatbots like ChatGPT to write papers. While automating certain tasks can help with productivity, we're starting to see more examples of the dark side of the technology. How close are we to genuine artificial intelligence? Kate Crawford is an AI expert, research professor at USC Annenberg, honorary professor at the University of Sydney and senior principal researcher at Microsoft Research Lab in New York City. She's also the author of "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence." Crawford joins WITHpod to discuss the social and political implications of AI, the exploited labor behind its growth, why she says it's neither artificial nor intelligent, climate change concerns, the need for regulation and more.

Note: This is a rough transcript; please excuse any typos.

Kate Crawford: We could turn to OpenAI's own prediction here, which is that they say 80 percent of jobs are going to be automated in some way by these systems. That is a staggering prediction.

Goldman Sachs just released a report this month saying 300 million jobs in the U.S. are looking at, you know, very serious forms of automation impacting what they do from day to day. So, I mean, it's staggering when you start to look at these numbers, right?

So, the thing that I think is interesting is to think about this historically, right? We could think about the Industrial Revolution. It takes a while to build factory machinery and train people on how things work.

We could think about the transformations that happened in the sort of early days of the personal computer. Again, a slow and gradual rollout as people began to incorporate this technology. The opposite is happening here.

Chris Hayes: Hello and welcome to "Why Is This Happening?" with me, your host, Chris Hayes. There's a famous Arthur C. Clarke quote that I think about all the time. He was a science fiction writer and futurist and he wrote a book called "Profiles of the Future: An Inquiry into the Limits of the Possible," and this quote, which you've probably caught at one point or another, is that, "Any sufficiently advanced technology is indistinguishable from magic."

And there's something profound about that. I remember the first time that, like, I saw Steve Jobs do the iPhone presentation. And then, the first one I held in my hand, it really did feel like magic. It felt like a thing that formerly wasn't possible, that I knew what the sort of laws of physics and technology were and this thing came along and it seemed to break them, so it felt like magic.

I remember feeling that way the first time that I really started to get on the graphical version of the internet. Even before that when I got on the first version of the internet. Like, oh, I have a question about a thing. You know, this baseball player Rod Carew, what did he hit in his rookie season? Right away, right? Magic. Magically, it appears in front of me.

And I think a lot of people have been having the feeling about AI recently. There's a bunch of new, sort of public-facing, machine learning, large language model pieces of software. One is ChatGPT, which I've been messing around with.

There are others for images. One called Midjourney and a whole bunch of others. And you've probably seen the coverage of this because it seems like in the last two months it's just gone from, you know, nowhere to everywhere, and people talk about AI and these algorithmic machine learning tools, like, holy smokes.

And I got to say, like, we're going to get into the ins and outs of this today. But at the sort of does it feel like magic level, like, it definitely feels like magic to me.

I went to ChatGPT. I was messing around with it. I told it to write a standup comedy routine in the first person of Ulysses S. Grant about the Siege of Vicksburg using, like, specific details from the battle and it came back with, like, you know, "I had to hide my soldiers the way I hide the whiskey from my wife," which is, you know, a reference to the fact that he notoriously had a drinking problem, although he tended not to drink around his wife. So, it was, like, slightly off that way.

But it was like a perfectly good standup routine about the Siege of Vicksburg in the first person of Ulysses S. Grant, and it was done in five seconds. Obviously, we're going to get into all sorts of, you know, I don't think it's going to be like taking over for us, but the reason it felt like magic to me is I know enough about computers and the way they work that I can think through like when my iPhone's doing something, when I'm swiping, I can model what's happening.

Like, there's a bunch of sensors in the actual phone. Those sensors have a set of programming instructions to receive the information of a swipe and then compare it against a set of actions and figure out which one it's closest to and then do whatever the command is.

And, you know, I've programmed before, and I can reason out what it's doing. I can reason out what, like, my car is doing. I understand basically how an internal combustion engine works and, you know, the pistons. And I just have no idea what the hell is happening inside this thing that when I told it to do this, it came back with something that seemed like the product of human intelligence. I know it's not. We're going to get into all of it, but it's like it does seem to me like a real step change.

You know, a lot of people feel that way. Now, it so happens that this is something that I studied as an undergraduate and thought a lot about. And there's a long literature about artificial intelligence and human intelligence and we're going to get into all that today.

But because this is so front-of-mind, because this is such an area of interest for me, I'm really delighted to have on today's program Kate Crawford. This is Kate Crawford's life's work. She's an artificial intelligence expert. She studies the social and political implications of AI.

She's a Research Professor at USC Annenberg, Honorary Professor at University of Sydney, Senior Principal Researcher at Microsoft Research Lab in New York City.

She's the author of "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence." A lot of the things that I think have exploded onto public consciousness in the last few months have been the subject of work that she's been thinking about and doing for a very long time.

So, Kate, it's great to have you on the program.

Kate Crawford: Thanks for having me, Chris.

Chris Hayes: Does it feel like magic to you?

Kate Crawford: I'll be honest. There is definitely the patina of magic. There's that feeling of how is this happening. And to some degree, you know, I've been taken aback at the speed by which we've gotten here. I think anybody who's been working in this field for a long time will tell you the same thing.

Chris Hayes: Oh, really? This feels like a step change to you --

Kate Crawford: Oh, yeah.

Chris Hayes: -- like we're in a new --

Kate Crawford: Yeah. This feels like an inflection point, I would say, even bigger than a step function change. We're looking at --

Chris Hayes: Right.

Kate Crawford: -- a shift that I think is pretty profound and, you know, a lot of people use the iPhone example or the internet example. I like to go even further back. I like to think about the invention of artificial perspective, so we can go back into the 1400s where you had Alberti outline a completely different way of visualizing space, which completely transformed art and architecture and how we understood the world that we lived in.

You know, it's been described as a technology that shifted the mental and material worlds of what it is to be alive. And this is one of those moments where it feels like a perspectival shift that can feel like magic. But I can assure you, it is not magic, and that's --

Chris Hayes: No, I know --

Kate Crawford: -- where it gets interesting.

Chris Hayes: OK. I know it's not. I'm just being clear. Obviously, I know it's not magic. And also, I actually think the Arthur C. Clarke quote is interesting because there's two different meanings, right?

So, it feels like magic in the sense of, like, things that are genuine magic, right, that in a fantastical universe, they're miracles, right? Or it feels like magic in that, like, when you're around an incredible magician, you know that the laws of physics haven't been suspended but it sure as heck feels like it, right?

Kate Crawford: Oh, yeah.

Chris Hayes: And that's how this feels to me. Like, I understand that this is just, you know, a probabilistic large language model, and we'll get into how it's working. So, I get that.

But it sure as heck on the outcome line, you know, feels like something new. The perspectival shift is a really interesting idea. What draws you to that analogy?

Kate Crawford: Well, let's think about these moments of seeming magic, right? So, there is just decades of examples of this experience. And in fact, we could go all the way back to the man who invented the first chatbot. This is Joseph Weizenbaum. And in the 1960s when he's at MIT, he creates a system called ELIZA. And if you're a person of a certain age, you may remember when ELIZA came out. It's a really simple, kind of bare-bones set of scripts that will ask you questions and elicit responses and essentially have a conversation with you.

So, writing in the 1970s, Weizenbaum was shocked that people were so easily taken in by this system. In fact, he uses a fantastic phrase around this idea that there is this powerful delusional thinking that is induced in otherwise normal people the minute you put them in front of a chatbot.

We assume that this is a form of intelligence. We assume that the system knows more than it does. And, you know, the fact that he captured that in this fantastic book called "Computer Power and Human Reason" back in 1976, I think, shows that that phenomenon hasn't changed: when we open up ChatGPT, you really can get that sense of, OK, this is a system that really feels like I'm talking to, at least if not a person, a highly evolved form of computational intelligence.

And I think what's interesting about this perspectival shift is that, honestly, this is a set of technologies that have been pretty well known and understood for some time. The moment of change was the minute that OpenAI put it into a chat box and said, hey, you can have a conversation with a large language model.

That's the moment people started to say this could change every workplace, particularly white-collar workplaces. This could change the whole way that we get information. This could change the way we understand the world because this system is giving you confident answers that can feel extremely plausible even when they make mistakes, which they--

Chris Hayes: Yes.

Kate Crawford: -- frequently do.

Chris Hayes: So, I mean, part of that, too, is like, you know, humans see faces in all kinds of places where there aren't faces, right? We project inner lives onto our pets. You know, we have this drive to mentally model other consciousnesses, partly because of the intensely social, inescapable conditions in which we evolved.

So, part of it is that, in the same way that magicians take advantage of certain parts of our perceptual apparatus, right, like we're easily distracted by, like, loud motions, right? It's doing that here with our desire to impute consciousness, in the same way that, like, we have a whole story about what's going on in a dog's mind when it gets out into the park.

Kate Crawford: Exactly.

Chris Hayes: But, like, I'm not sure it's correct.

Kate Crawford: That is it. And I actually think the magician's trick analogy is the right one here because it operates on two levels. First, we're contributing half of the magic by bringing those, you know, anthropomorphic assumptions into the room and by playing along.

We are literally training the AI model with our responses. So, when it says something and we say, oh, that's great. Thanks. Could I have some more? That's a signal to the system this was the correct answer.

If you say, oh, that doesn't seem to match up, then it takes that as a negative --

Chris Hayes: Right.

Kate Crawford: -- signal. So, we are literally training these systems with our own intelligence. But there's another way we could think about this magician's trick because while this is happening and while our focus is on, oh, exciting LLMs, there's a whole other set of political and social questions that I think we need to be asking that often get deemphasized.
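
To picture what that training signal looks like as data, here's a minimal Python sketch. The names and structure are hypothetical, just to show the shape of the idea; real systems fold feedback like this into far more elaborate fine-tuning pipelines:

    # Illustrative only: user reactions become labeled examples that can
    # later be used to nudge the model toward preferred responses.
    feedback_log = []

    def record_feedback(prompt, response, thumbs_up):
        """Store an exchange with a reward: +1 for approval, -1 for a correction."""
        reward = 1.0 if thumbs_up else -1.0
        feedback_log.append({"prompt": prompt, "response": response, "reward": reward})

    # "Oh, that's great. Thanks!" acts as a positive signal ...
    record_feedback("Write me a joke", "Why did the general cross the road...", True)
    # ... "that doesn't seem to match up" acts as a negative one.
    record_feedback("Summarize this battle", "It began in the wrong year...", False)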

Chris Hayes: There's a few things here. There's the tech, there's the kind of philosophy, and then there's the, like, political and social implication.

So, just start on the tech. Let's go back to the chatbot you're talking about before, ELIZA. So, there's a bunch of things happening here in a chatbot like ChatGPT that are worth breaking down.

The first is just understanding natural language and, you know, I did computer science and philosophy, philosophy of mind and some linguistics when I was an undergraduate 25 years ago. And at that time, like, natural language processing was a huge unsolved problem.

You know, we all watched "Star Trek". Computer, give me this. And it's like, getting a computer to understand a simple sentence is actually, like, wildly complex as a computational problem. We all take it for granted, but it seems like even before you get into what it's giving you back, I mean, now, it's embedded in our lives, Siri, all this stuff.

Like how did we crack that? Is there a layperson's way to explain how we cracked natural language processing?

Kate Crawford: I love the story of the history of how we got here because it gives you a real sense of how that problem has been, if not cracked, certainly seriously advanced. So, we could go back to the sort of prehistory of AI. So, I think sort of 1950s, 1960s.

The idea of artificial intelligence then was something called knowledge-based AI or an expert systems approach. The idea of that was that to get a computer to understand language, you had to teach it to understand linguistic principles, high-level concepts to effectively understand English like the way you might teach a child to understand English by thinking about the principles and thinking about, you know, here's why we use this sort of phrasing, et cetera.
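
As a crude illustration of that expert-systems approach, here's a toy Python sketch: hand-written grammar rules and a hand-written lexicon, no statistics and no training data. The rules here are made up to show the flavor; real systems of that era encoded thousands of them:

    # A toy rule-based "parser": language as explicit, hand-coded knowledge.
    LEXICON = {
        "the": "DET", "a": "DET",
        "dog": "NOUN", "ball": "NOUN",
        "chases": "VERB", "sees": "VERB",
    }

    def is_grammatical(sentence):
        """Accept only sentences matching one hard-coded rule: DET NOUN VERB DET NOUN."""
        tags = [LEXICON.get(word) for word in sentence.lower().split()]
        return tags == ["DET", "NOUN", "VERB", "DET", "NOUN"]

    print(is_grammatical("The dog chases a ball"))  # True: fits the rule
    print(is_grammatical("Ball the chases dog a"))  # False: same words, wrong structure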

Then something happens in around the 1970s and early 1980s, a new lab is created at IBM, the continuous-speech recognition lab, the CSR lab. And this lab is fascinating because a lot of key figures in AI are there, including Robert Mercer who would later become famous as the, shall we say, very backroom-operator billionaire who funded people like Bannon and the Trump campaign.

Chris Hayes: Yup.

Kate Crawford: Yes, and certainly, the Brexit campaign.

Chris Hayes: Yup.

Kate Crawford: So, he was one of the members of this lab that was headed by Professor Frederick Jelinek, and they had this idea. They said instead of teaching computers to understand, let's just teach them to do pattern recognition at scale.

Essentially, we could think about this as the statistical turn, the moment where it was less about principles and more about patterns. So, how do you do it? To teach that kind of probabilistic pattern recognition, you just need data. You need lots and lots and lots of linguistic data, just examples.

And back then, even in the, you know, 1980s, it was hard to get a corpus of data big enough to train a model. They tried everything. They tried patents. They tried, you know, IBM technical manuals, which, funnily enough, didn't sound like human speech. They tried children's books.

And they didn't get a corpus that was big enough until IBM was actually taken to court. This was, like, a big antitrust case that went on for years. They had, like, a thousand witnesses called. And the transcripts from this case produced the corpus that they used to train their model. Like, honestly, you couldn't make this stuff up. It's wild.

Chris Hayes: Is that right?

Kate Crawford: Oh, absolutely. So, they have a breakthrough which is that it is all about scale. And so interestingly --

Chris Hayes: Right.

Kate Crawford: -- Mercer has this line, you know, which is fantastic. There's a historian of science, Tsao-Cheng Lee (ph) who's written about this moment. But, you know, Mercer says, it was one of the rare moments of government being useful despite itself. That was how --

Chris Hayes: Boo.

Kate Crawford: -- he justified this case, right?

So, we see this changed towards basically it's all about data. So, then we have the years of the internet. Think about, you know, the early 2000s. Everyone's doing blogs, social media appears, and this is just grist to the mill. You can scrape and scrape and scrape and create larger and larger training data sets.

So, that's basically what they call these foundational data sets, which are used to find these patterns. So, effectively, LLMs are advanced pattern recognizers that do not understand language, but they are looking for, essentially, patterns and relationships in the text that they've been trained on, and they use this to essentially predict the next word in a sentence. So, that's what they're designed to do.

Chris Hayes: This statistical turn is such an important conceptual point. I just want to stay on it because I think this, like, really helped. And this turn happened before I was sort of interested in natural language processing. But when we were talking about natural language processing, we're still talking in this old model, right?

Well, you teach kids these rules, right, and you teach them or if you learn a second language, like, you learn verb conjugation, right? And you're running them through these rules, like, OK, there's this category called first person. There's a category called verb, and then you conjugate. There's a category of conjugation. One plus one plus one equals three. That gives me, you know, yo voy (ph). OK.

So, that's this sort of principled, rule-based way of understanding language and natural language processing. So, the statistical turn says throw all that out. Let's just say if someone says thanks, what's likely to be the next word?

And you see this in the Gmail auto complete.

Kate Crawford: Yup.

Chris Hayes: When you type thanks, it will light up "so much." It's just that "thanks so much" goes together a lot. So, when you put in thanks, it's, like, a pretty good chance it's going to be "so much."

And that general principle of if you run enough data and you get enough probabilistic connections between this and that word at scale is how you get Ulysses S. Grant doing a joke about Vicksburg and hiding his troops the way he hides whiskey from his wife.
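
Here's a minimal Python sketch of that statistical turn: count which word follows which in a corpus, then predict the most likely next word. The three-sentence corpus is made up, and real models train on billions of words with far richer machinery than bigram counts, but the principle is the same:

    from collections import Counter, defaultdict

    # A tiny illustrative corpus; scale is what makes the real thing work.
    corpus = (
        "thanks so much for the help . "
        "thanks so much for everything . "
        "thanks for the note ."
    ).split()

    # Count how often each word follows each other word (bigram counts).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Return the most likely next word and its estimated probability."""
        counts = following[word]
        best, n = counts.most_common(1)[0]
        return best, n / sum(counts.values())

    print(predict_next("thanks"))  # ('so', 0.67): "so" follows "thanks" 2 times in 3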

Kate Crawford: Exactly. And you could think about all of the words in that joke as being in a kind of big vector space or word cloud where you'd have Ulysses S. Grant, you'd have whiskey, you'd have soldiers, and you can kind of think about the ways in which they would be related.

And the funny thing is trying to write jokes with GPT, some of the time, it's really good and some of the time, it's just not funny at all because it's not --

Chris Hayes: Right. Sure.

Kate Crawford: -- coming from a basis of understanding humor or language.

Chris Hayes: No.

Kate Crawford: It's essentially doing this very large word association game.
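
To picture that vector space, here's a tiny Python sketch with made-up word vectors. Real embeddings have hundreds of dimensions learned from training text, but cosine similarity, used below, is a standard way to measure how close two words sit in that space:

    import math

    # Hand-picked 3-dimensional vectors, purely illustrative.
    vectors = {
        "grant":    [0.9, 0.8, 0.1],
        "whiskey":  [0.7, 0.9, 0.2],
        "soldiers": [0.8, 0.6, 0.1],
        "banana":   [0.1, 0.2, 0.9],
    }

    def cosine(a, b):
        """Cosine similarity: near 1.0 means the words point the same way."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    print(cosine(vectors["grant"], vectors["whiskey"]))  # ~0.98: closely associated
    print(cosine(vectors["grant"], vectors["banana"]))   # ~0.30: rarely co-occur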

Chris Hayes: Right. OK. So, I understand this principle. Like, I get it. It's a probabilistic model that is trained on a ton of data, and because it's trained on so much data and because it's using a, like, crazy amount of processing power.

Kate Crawford: Oh, yes.

Chris Hayes: Like, a genuinely crazy and, like, expensive and carbon-intensive amount. So, like, it's like running a car, like, a huge Mack truck, right?

Kate Crawford: Oh, yeah.

Chris Hayes: It's working its butt off to give me this, my dumb little Vicksburg joke. So, like, I get that intuitively, but maybe, like, if we could just go to the philosophy place, it's like, OK, it doesn't understand. But then we're at this question of, like, all right, well, what does understanding mean, right?

Kate Crawford: Right.

Chris Hayes: And this is where we start to get into this sort of philosophical AI question. And there's a long line here. There's Alan Turing's Turing test, which we should explain for folks who don't know it. There's John Searle's Chinese room example, which we should also probably take a second on.

But basically, for a long time, this question of, like, what does understanding mean? And if you encountered an intelligence that acted as if it were intelligent, at what point would you get to say it's intelligent without peering into what it's doing on the inside to produce the thing that makes it seem intelligent?

And the Turing test is Alan Turing, the brilliant British mathematician, basically saying, if you can interact with a chatbot that fools you, that's intelligence. And it just feels like, OK, well, ChatGPT, I think, is passing it. It feels like it passes the Turing test at least in some circumstances, yes?
