5 real AI threats that make The Terminator look like Kindergarten Cop – The Next Web

Posted: September 16, 2021 at 6:46 am

It. Never. Fails. Every time an AI article finds its way to social media, there are hundreds of people invoking the terrifying specter of SKYNET.

SKYNET is the fictional artificial general intelligence responsible for creating the killer robots of the Terminator film franchise. It was a scary vision of AI's future until deep learning came along and big tech decided to take off its metaphorical belt and really give us something to cry about.

At least the people fighting the robots in the Terminator film franchise get to face a villain they can see and shoot at. In real life, you can't punch an algorithm.

And that makes it difficult to explain why, based on what's happening now, the real future might be even scarier than the one from those killer robot movies.

Luckily, we have experts such as Kai-Fu Lee and Chen Qiufan, whose new book, AI 2041: Ten Visions for Our Future, takes a stab at predicting what the machines will do over the next two decades. And, based on this interview, there's some scary shit headed our way.

According to Lee and Qiufan, the biggest threats humans face from AI involve its influence, its lack of accountability and explainability, its inherent and explicit bias, its use as a bludgeon against privacy, and, yes, killer robots (though not the kind you're thinking of).

If were going to prioritize a list of existential threats to the human race, we should probably start with the worst of them all: social media.

Facebook's very existence is a danger to humanity. It represents a business entity with more power than the governing body of the nation in which it's incorporated.

The US government has taken no meaningful steps to regulate Facebook's use of AI. And, for that reason, billions of humans across the planet are exposed to demonstrably harmful recommendation algorithms every day.

Facebook's AI has more influence over humankind than any other force in history. The social network has more monthly active users than Christianity has adherents.

Hundreds of studies have warned us about the real harms of social networks; it would be shortsighted to think decades of exposure to them won't have a major impact on our species.

Whether in 10, 20, or 50 years, the evidence seems to indicate we'll live to regret turning our attention spans over to a mathematical entity that's dumber than a snail.

The next threat on our tour-de-AI-horrors is the fascinating world of anti-privacy technology and the nightmare dystopia we're headed for as a species.

Amazon's Ring is the perfect reminder that, for whatever reason, humankind is deeply invested in shooting itself in the foot at every possible opportunity.

If there's one thing almost every free nation on the planet agrees on, it's that human beings deserve a modicum of privacy.

Ring doorbell cameras destroy that privacy, effectively giving both the government and a trillion-dollar corporation a neighbor's-eye view of everything that's happening in every neighborhood around the country.

The only thing stopping Amazon or the US government from exploiting the data in the buckets where all that Ring video footage is stored is their word.

If it ever becomes lucrative to use or sell our data, or if a political shift gives the US government powers to invade our privacy that it didn't previously have, our data is no longer safe.

But it's not just Amazon. Our cars will soon be equipped with cloud-connected cameras that purportedly watch drivers for safety reasons. And we already have active microphones listening in on all of our smart devices.

And we're on the very cusp of mainstreaming brain-computer interfaces. The path to wearables that send data directly from your brain to big tech's servers is paved with good intentions and horrible AI.

The next generation of surveillance tech, wearables, and AI companions might eradicate the idea of personal privacy altogether.

The difference between being the first result in a Google search and ending up at the bottom of the page can cost businesses millions of dollars. Search engines and social media feed aggregators can kill a business or sink a news story.

And nobody voted to give Google or any other company's search algorithms that kind of power; it just happened.

Now, Google's bias is our bias. Amazon's bias determines which products we buy. Microsoft's and Apple's biases determine what news we read.

Our doctors, politicians, judges, and teachers use Google, Apple, and Microsoft search engines to conduct personal and professional business. And the inherent biases of each product dictate what they do and do not see.

Social media feeds often determine not just which news articles we read, but which news publishers we're exposed to. Almost every facet of modern life is somehow mediated by algorithmic bias.

In another 20 years, information could become so stratified that "alternative facts" no longer refers to facts that diverge from reality, but to those that don't reflect the collective truth our algorithms have decided on for us.

AI doesn't have to actually do anything to harm humans. All it has to do is exist and continue to be confusing to the mainstream. As long as developers can get away with passing off black-box AI as a way to automate human decision-making, bigotry and discrimination will have a home in which to thrive.

There are certain situations where we don't need AI to explain itself. But when an AI is tasked with making a subjective decision, especially one that affects humans, it's important that we know why it makes the choices it does.

It's a big problem when, for example, YouTube's algorithm surfaces adult content to children's accounts and the developers responsible for creating and maintaining that algorithm have no clue why it happens.

But what if there isn't a better alternative to black-box AI? We've painted ourselves into a corner: almost every public-facing big tech enterprise is powered by black-box AI, and almost all of it is harmful. But getting rid of it may prove even harder than extricating humanity from its dependence on fossil fuels, and for the same reasons.

In the next 20 years, we can expect the lack of explainability intrinsic to black-box AI to lie at the center of any number of potential catastrophes involving artificial intelligence and loss of human life.

The final, and perhaps least dangerous (but most obvious), threat to our species as a whole is that of killer drones. Note: that's not the same thing as killer robots.

There's a reason why even the US military, with its vast budget, doesn't have killer robots: they're pointless when you can just automate a tank or mount a rifle on a drone.

The real killer robot threat is that of terrorists gaining access to simple algorithms, simple drones, simple guns, and advanced drone-swarm control technology.

Perhaps the best perspective comes from Lee, who, in a recent interview with Andy Serwer, said:

It changes the future of warfare because, between country and country, this can create havoc and damage, but perhaps anonymously, and people don't know who did the attack.

So it's also quite different from a nuclear arms race, where [the] nuclear arms race at least has deterrence built in. That you don't attack someone for the fear of retaliation and annihilation.

But autonomous weapons might be doable as a surprise attack. And people might not even know who did it. So I think that is, from my perspective, the ultimate greatest danger that I can be a part of. And we need to be cautious and figure out how to ban or regulate it.
