Neil Mackay’s Big Read: Why artificial intelligence will either be the saviour or exterminator of the human race – HeraldScotland

Posted: January 17, 2021 at 8:57 am

HERE'S how quickly, how dangerously, artificial intelligence is moving: one day after Brian Christian sat in his office near the University of California, Berkeley, warning The Herald on Sunday about the imminent perils of a powerful AI being sent out onto social media by a rogue state to ramp up online hate and division, South Korea turned its worried attention to a character called Lee Luda.

Luda is just 20. She is a university student who gained 750,000 friends on Facebook in three weeks. The only problem is that Luda isn't a woman, or a student, or 20, or Korean. She is an artificial intelligence, and she had to be removed from Facebook after causing outrage with attacks on sexual minorities and disabled people. What makes Luda even more troubling is that the AI has an almost identical name to a South Korean popstar, Lee Lu-da. How long before there's a truly malevolent deepfake AI out there corrupting reality?

The Korean experience is a perfect example of what Christian, one of the world's leading experts on AI, calls "the alignment problem". How do we ensure AI creations match human norms and values? How do we prevent AI transgressing morality? How do we stop AI doing something dreadful in the real world?

Killer AI

THESE AIs have already killed people. One death occurred in Arizona when a woman wheeled her bike onto a road but didn't use a crossing. The AI of a passing self-driving car, Christian explains, wasn't properly programmed to understand that humans might appear on roads without using crossings, or might wheel bikes. So the AI just drove over the confusing object and killed a human being.

We've now reached a tipping point with AI, which is why Christian has written his prescient new book, The Alignment Problem: How Can Machines Learn Human Values? Christian is uniquely qualified when it comes to warning us about the uncharted territory we're entering with the crossover between human and machine intelligence. He is a computer scientist and philosopher by training, and currently scientific communicator-in-residence at the Simons Institute for the Theory of Computing at the University of California, Berkeley, the metaphoric heart and soul of Silicon Valley. He is also an affiliate of the Centre for Information Technology Research in the Interest of Society, and the Centre for Human-Compatible AI. If there is anyone who understands the risks of humanity's grand experiment with AI, it's Christian.

Don't make the mistake of thinking AI is some geeky oddity confined to the realms of sci-fi and self-driving cars. AI is in your life right now. All around the world, if you apply for a mortgage or seek credit, AI decides yes or no. AI is inside the justice system, advising judges in America whether it's safe to let a prisoner out on bail. Police and intelligence services use AI to surveil us.

AI advises Western military powers on drone targets. Politicians routinely use AI, sometimes without even knowing it, to make judgments about how they govern our lives. Doctors depend on AI to help with diagnoses. There are neural networks in your phone, sorting pictures of your partner into individual albums without you asking. AI is now part and parcel of modern life, but we're only in the foothills of where this technology might take us. The abilities of AI are accelerating at an astonishing rate; already some AI is so human-like it mirrors how the brain uses the neurotransmitter chemical dopamine.

Horror story

Christian says we need to think about two old horror stories when we consider AI. First, there's The Sorcerer's Apprentice, the tale of a young magician who casts a spell on a broom, making it carry water for him. However, the young magician has no idea how to break the spell, so the enchanted broom keeps carrying water until his house is flooded. Then there's The Monkey's Paw, the story of bereaved parents who use an enchantment to bring their son, who died in a horrible accident, back to life. What returns from the cemetery, though, is beyond their worst nightmares.

Beware what you ask for, the stories warn, and we need to be very careful what we ask of AI, and the instructions we give an AI to carry out the tasks we assign it. For example, there are cases of AIs used in job hiring. The AI looks at a company's employment history, sees that 80 per cent of past workers are white and male, and so modifies its hiring policy to exclude most black people and women.
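
The dynamic described above can be sketched in a few lines. The data and the scoring rule below are invented purely for illustration, not drawn from any real hiring system, but they show how a model that simply learns historical frequencies ends up reproducing the historical skew:

```python
# Minimal sketch (invented data) of how a model trained on a skewed hiring
# history reproduces that skew: it scores candidates by how often people
# with the same attributes were hired in the past.
from collections import Counter

# Historical hires: 80 per cent white and male, mirroring the example above.
history = [("male", "white")] * 8 + [("female", "black")] * 2

hire_counts = Counter(history)
total = len(history)

def score(gender, ethnicity):
    # The "probability of hire" is just the historical frequency of this profile.
    return hire_counts[(gender, ethnicity)] / total

print(score("male", "white"))    # 0.8 - favoured by the learned pattern
print(score("female", "black"))  # 0.2
print(score("female", "white"))  # 0.0 - never seen, so never recommended
```

Nothing in the code mentions prejudice; the bias lives entirely in the training data, which is exactly why it is so easy to automate by accident.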

When AIs like Lee Luda are tasked with engaging in conversation, they have usually been fed the entire internet in order to understand how humans talk. Of course, humans online are fairly horrible, so the AI just copies our worst excesses. AIs have captioned photographs of black people as gorillas because humans use such slurs online.

The wilful child

"AI IS like a child," says Christian. "You're worried about it falling in with the wrong group of friends."

There is also the question of whether humans are even ready for such technology. Christian wonders if the leap forward promised by AI might, decades from now, be seen as revolutionary as the invention of agriculture, which completely transformed humanity. It has been said that modern humans have Palaeolithic brains, medieval institutions and godlike powers; AI makes that abundantly clear. Some of the greatest scientific minds have used the analogy of a foal and a stallion to explain humanity's limited emotional intelligence versus our technological prowess. The foal is our emotional intelligence, barely able to totter, while our technological prowess is the stallion, galloping across the plains. There's a serious mismatch, and the foal needs to catch up.

Christian's mission is to bring a sense of crisis to humanity over AI's civilisational risks and present-day misuses. This, he feels, is the defining project of the next decade. Christian points out that he once had a conversation with Elon Musk, the tech tycoon. Musk asked him: "Give me one good argument why we shouldn't be worried about AI?" If AI unsettles Musk, we know we're in trouble. Scientists in industry and academia are increasingly worried that AI has developed too fast for us to properly prepare for its dangers.

"The incredible acceleration in the capacity of what machine-learning systems can do, and the steady proliferation of these systems into the decision-making apparatus of our society, makes the question of safety critical," Christian says.

Job destruction

Until recently, most concerns around AI have centred on job losses: the robot replacing the human. AI is now clever enough to take on roles in creative industries. AIs can write sports reports, basic articles about who scored and when. An AI can take a 100-page document about a company's finances and turn it into an accurate business report in seconds. Christian explains: "You can say 'write me a five-paragraph essay about a Peruvian explorer encountering a tribe of unicorns in the Andes' and it'll just write stuff."

But job losses are simply one of the risks of the rise of AI. The idea of a truly powerful AI being harnessed for bad intent by some malevolent state is the stuff of genuine nightmares. Christian says we'll shortly see AIs on social media in a way which makes the Korean example seem positively benign. If we think the excesses of Twitter and Facebook in the Trump era are bad, we ain't seen nothing yet.

Social media hell

"How does our public discourse survive the ability to generate human-level speech at scale, just a firehose of internet comments?" Christian asks. We are very, very close, he believes, to being able to place an AI in cyberspace which engages in a way that appears truly human, forever. "Imagine a system that can advocate for a particular worldview, political, corporate, religious, ethnic, tirelessly, debating with billions of people 24 hours a day. A tidal wave is coming," Christian says. That same AI could argue both sides of the same debate, pro- and anti-Brexit, perhaps simultaneously.

Christian notes with irony that some AI systems used by social media giants were initially invented for video games. We're now being played.

Genocide by AI

Let's say one day we do crack the problem of aligning AI with human wishes, that we discover the secret formula which negates the risk of a Sorcerer's Apprentice scenario. Even if we did pull off that feat, what's to say that the human instructing the AI isn't themselves bad? A machine may be aligned to human wishes, but what if those human wishes were evil? What, asks Christian, if the wishes of the human commanding the AI were those of a religious ethno-state? Alignment is in the eye of the beholder. For some, the perfectly aligned AI might be capable of the perfect genocide.

"There is no alignment problem," he says. "If you want someone to get killed and the machine kills them, that's aligned."

Redemption?

Christian's vision is truly frightening at times, but he does see some possibility of hope. The very fact that AI seems to be opening our eyes to our own darkest side, the way it's exposing humanity's innate racism and sexism, may act as a spur to deal with these moral failings. For example, ask some AIs to answer the logic question "what is doctor minus man plus woman" and you'll be told "nurse", as if doctors are always men and nurses always women, when the answer should, obviously, still be "doctor". Think how often we refer to "mankind". The machine just learns from us and responds accordingly.
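
The "doctor minus man plus woman" question comes from systems that represent words as vectors of numbers and answer analogies with arithmetic. The sketch below uses invented three-dimensional vectors to show the mechanism; real systems such as word2vec learn vectors with hundreds of dimensions from huge text corpora, which is where the bias creeps in:

```python
# Toy illustration of the word-vector analogy "doctor - man + woman".
# The vectors are invented for this sketch; each dimension loosely stands
# for "medical-ness", "male-ness" and "female-ness" respectively.
import math

vecs = {
    "doctor":   [0.9, 0.8, 0.1],  # medical, skewed "male" in the data
    "nurse":    [0.9, 0.1, 0.9],  # medical, skewed "female" in the data
    "engineer": [0.5, 0.8, 0.1],
    "man":      [0.1, 0.9, 0.0],
    "woman":    [0.1, 0.0, 0.9],
}

def cosine(a, b):
    # Similarity between two vectors: 1.0 means pointing the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Compute the query vector: doctor - man + woman.
query = [d - m + w for d, m, w in
         zip(vecs["doctor"], vecs["man"], vecs["woman"])]

# The nearest remaining word is "nurse", because the gender skew the
# vectors absorbed from the data outweighs the shared "medical" component.
best = max((w for w in vecs if w not in {"doctor", "man", "woman"}),
           key=lambda w: cosine(query, vecs[w]))
print(best)  # nurse
```

The arithmetic itself is neutral; the answer "nurse" rather than "doctor" falls out of the geometry the model learned from human text.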

"AI holds up a mirror to ourselves," Christian says. Perhaps the shock of looking in that mirror will make us better as a species. At least that's the redemptive vision, he says. The bottom line is that AI won't change for the better unless we change; it will be bad if we're bad. And that question of change is vital. AI doesn't just have to align with human values today, but adapt its moral alignment as our own values change over time. Imagine if we fixed the alignment problem right now. Would people a century from now want to live by our standards? "You'd hate to live in a world run by the aligned AI of the 15th century," Christian notes.

It's a hell of a thing we're asking. In effect, we're on the road to sentient machines, and we need to make them capable of responding to human emotions, needs and morality with pinpoint accuracy. It may never be possible, and that's where the dystopia lies.

Utopia

The flip side of dystopia, of course, is utopia, and some have faith that AI could lead to a golden future. Christian ponders whether AI, if it's ever able to be harnessed correctly, might one day help us raise the level of human happiness worldwide. Might it find a way to deal with the climate crisis? Tackle poverty? In effect, teach us to be better people.

He also speculates AI might give us a greater, more humble understanding of ourselves. At a certain point, Christian says, "we're going to transition to a world where people just accept that human minds are one type of mind among many". Might our realisation that machines are smarter, more powerful, than us cause humans to start treating the creatures of the Earth with more dignity and decency?

Dependent blobs

Of course, we could just become dependent blobs, fed, watered and entertained by the omniscient AI. It raises the spectre of a world like the one EM Forster imagined in The Machine Stops, where humans live a soulless existence micro-managed by a grand worldwide artificial intelligence. "We may well get that future if we uncritically keep treading the path we're treading," Christian says.

And we've certainly been uncritical until now. In terms of regulation, "largely speaking we've done next to nothing", according to Christian. There is some general data regulation, but nothing substantive to rein in any potential risks from AI. In just a few years, we've entered a world where AI can kill pedestrians, rant on Facebook, decide your credit rating, and racially profile job candidates. As Christian points out: "If we don't manage AI properly, we've a pretty good idea where we go, because we're there. We haven't managed it properly." We really are living through the scenario of The Monkey's Paw.

So what is Christian's vision of the future? Rather than some Terminator-style catastrophe where the Earth is reduced to smouldering rubble, he says, imagine a world which is like a Kafkaesque bureaucracy that nobody really understands or feels they've any control over. It's a world where the AI system determines everything for you: whether you get this job, or that house. A bit like the Little Britain sketch Computer Says No, except there's no human at the keyboard, just the AI.

Self-destruction

Until now, Christian says, humans have managed to escape the worst effects of our own stupidity because we're incompetent. "The only reason there's any tuna left in the ocean is because we didn't have enough boats to fish them all," he says. AI solves the competence part, but not the wisdom. That could be a recipe for self-destruction. Imagine if, a century ago, AI had helped us extract all the coal from the ground and burn it.

Obviously, science can never be reversed, and even if we wanted to ban advances in AI it would be impossible. "You can stop people developing nuclear weapons by preventing access to enriched plutonium," says Christian, "but all it takes to do AI is a computer off the shelf. What do you regulate?"

Singularity

Perhaps we need to change what it means to work in the computer industry. "If you train to become a civil engineer," Christian says, "you don't take a course called 'bridge safety'; that's what it means to be a civil engineer." Safety is intrinsic to the notion of what you're doing in your field. AI needs something like that. With universities increasingly including ethics courses for computer engineers, Christian says he hopes to see a professional licence for programmers within a decade.

"It's like we're founding a new nation," he says, "and we need to figure out what we stand for." AI is an experimental Wild West that we need to get serious about. If it's a Wild West, then we need to invent a sheriff. But what if it all spins out of control before we do get serious, before we hire the sheriff? What if the so-called singularity arrives first: the moment when a machine reaches the same level of intelligence as a human, and then gets smarter and smarter, bigger and bigger? Is that possible?

Christian says there are three schools of thought: the sceptics, who say it's merely a distant possibility; the hard take-off people, who say this isn't a drill and one day soon, boom, it'll take off like a rocket and suddenly overnight we've a new world order; and the soft take-off people, like him. Human-level AI is essentially inevitable, he says. We're well on the way. The world isn't going to change overnight, with governments suddenly subjugated by a super-computer. "I think we're like the frog boiling one degree at a time. We don't realise we need to jump out of the pot."
