This Philly Journalist’s Podcast About Artificial Intelligence Will Scare the Hell Out of You

Posted: May 11, 2021 at 10:35 pm

Q&A

Slaughterbots. Predictive policing. Your computer suddenly showing you ads for something you were speaking about hours before. Malcolm Burnley tells us all about it.

Malcolm Burnley, co-host of the new AI Nation podcast about artificial intelligence and its dangers and benefits. Photograph by Erin O'Brien

There are lots of (read: way too many) podcasts in this world. But one that has really caught our attention of late is the new podcast about artificial intelligence, AI Nation, a project of WHYY and Princeton University. (Listen here.) On the show, co-host Malcolm Burnley, a former Philly Mag staff writer, breaks the controversial and ubiquitous technology down in a way anybody can understand. We tracked down the 31-year-old Society Hill resident to learn more.

How many times in the course of a day is the average American encountering artificial intelligence?
The entire experience of the Internet is so informed by AI. Autocorrect on our phones is one of the biggest examples. Our web searches. E-commerce. Things that pop up in our feeds. Since the Capitol breach, there has been a lot of conversation about how AI and certain algorithms point certain people in different political directions and down various rabbit holes. And then in the physical world, there's a lot more facial recognition going on than we're aware of.

Give me an example that you've dissected on AI Nation.
Predictive policing. One story on the show is about Nijeer Parks. He wound up in jail for 10 days based on a bad facial-recognition match. He didn't even match the physical description of the person they were looking for. There were so many red flags that warranted further consideration. But instead, he went to jail. There is no transparency with police departments that are using this technology, and we only find out about it if they mess up.

Frightening.
For sure. The other facial-recognition stuff that's frightening is these companies like Clearview. Supposedly, close to a quarter of all Americans are in their database, and what they can do is run a photo through their system and tell you who the person is, what their online profiles are. Only clients of Clearview can get access. This can just be a photo somebody takes with an iPhone.

My sister still won't get EZPass because she's worried about the government tracking her. Yet she has an iPhone. The government is already onto her, right? She should just get the EZPass?
[Laughs] I think it's fair to say that companies and the government have her information. With the government, the pandemic response in South Korea was a glaring example. They required everyone to download an app and share data with the government. As dystopian as that sounds, they were able to do contact tracing at this hyper level. They didn't have to shut down their whole economy. They knew locations, financial transactions. They had closed-circuit footage. I talked to people in South Korea who said that when they were told to quarantine, if their cell phone didn't move for a few hours, someone would knock on their door to see if they had just left their phone at home so they could go out.

Okay, so my wife and I were speaking about a type of product while we were sitting on the couch. Nobody was searching for anything online. The next day, I'm getting ads for the thing. No, we don't have Alexa. Please explain. Is this just a coincidence?
It's definitely not a coincidence. It's probably your phone. Siri. It's happened to me, and it's jarring and shocking. You probably gave some app on your phone access to your microphone. Who knows?

Right. Could be the phone. Could be the laptop. Maybe the cable remote that you can talk to. It's scary. Is there any regulatory body dealing with all of this? Some agency whose mission is to prevent us from suffering the fate of the astronauts in 2001 or humankind in Dr. Strangelove?
No, but I believe there will be soon. There have already been hearings in Washington with U.S. Senators and Representatives asking deeply skeptical questions. There have been data breaches. Then things like the Capitol breach. I think some big thing will happen that will trigger a call for more regulation, the way that 9/11 triggered the creation of Homeland Security.

Was there some point when you were researching for AI Nation where you were just like, OMG, I can't believe they're doing that? Like some drone that can scan my face and then shoot me with a laser?
There's a video online called Slaughterbots that is essentially what you just described. It's a fictional video put out there in 2017 by this group trying to regulate autonomous weapons. The capability exists now to have a tiny drone identify somebody by facial recognition and then kill them. There's no evidence that it's been used yet. The world of autonomous weapons is so, so scary to dive into.

But with drones, you actually have a human behind the trigger, right?
Most examples are some mixture of AI and humans having to decide to pull the trigger. But one guest we had on AI Nation was Laura Nolan, an engineer who quit Google over this controversial project with the DoD called Project Maven, a kill-chain project that would get computers a lot closer to pulling the trigger themselves.

Just to be clear, though, there aren't any drones doing this now. But there will be in the future.
There's a Turkish military company that makes something called the Kargu. It's much bigger than the slaughterbot you saw in that video. It can go out on its own and ID someone and kill them. The technology has been confirmed. We don't know that it's ever been used.

But it's not all bad, right? Artificial intelligence can help me, too.
Absolutely. AI will really help us so much in a future pandemic. We'll be able to end a pandemic years from now in a matter of weeks using AI.


Anything more practical, for right now?
The most impressive example of AI that I've found is regarding what's known as the protein-folding problem. It was something that scientists were trying to solve for 75 years. Some scientists spent their entire careers trying to solve this problem. Well, AI has solved the problem. It was announced last year. How AI solved it was just incredible. The scientists had been using the laws of physics. The computer invented its own form of physics on the fly. This has major impacts on fighting a future pandemic and finding cures for diseases.

So a computer figured out a problem that countless scientists had worked on for the better part of a century, which raises an important question: What do we do with all of the people whose jobs have been taken by AI and whose jobs will be taken in the future by AI?
I'm so happy you brought this up. AI is already causing and will inevitably cause a lot more inequity. The jobs AI tends to take are low-wage jobs. AI will tend to replace women and people of color. The rich get richer. The poor get poorer. Just like in the Industrial Revolution, when inequity skyrocketed. I don't have the answer. And it's scary to think of the economic impact and the mental-health impact on people who lose their jobs or can't find work. We need to make sure we design systems with those people in mind.

Okay, last thing. Some predictions. Drone deliveries en masse. A robotic car driving me to work. How far off is all this stuff in reality?
AI is really good at looking at data, finding patterns, and predicting things, and less good at moving through the physical world. I've talked with autonomous-car experts, and they said the marketing around those vehicles has greatly outpaced the technology. Most people think we're still at least a decade away from seeing self-driving cars regularly. And you're going to see drones repairing bridges and doing other tasks away from people more than you'll see them coming through residential neighborhoods. What we're going to see a lot more of are things like smart earrings and smart necklaces and other devices tracking our biometric data and sharing it with a lot more people. Your doctor. The government. We're going to see our data increasingly getting sucked up. AI can help improve things, and to do that, you have to let it run loose. But what we don't know yet is how much it will mess things up.

