7 Risks Of Artificial Intelligence You Should Know | Built In

Last March, at the South by Southwest tech conference in Austin, Texas, Tesla and SpaceX founder Elon Musk issued a friendly warning. "Mark my words," said the billionaire, casual in a furry-collared bomber jacket and days-old scruff, "AI is far more dangerous than nukes."

No shrinking violet when it comes to opining about technology, Musk has repeated a version of these artificial intelligence premonitions in other settings as well.

"I am really quite close to the cutting edge in AI, and it scares the hell out of me," he told his SXSW audience. "It's capable of vastly more than almost anyone knows, and the rate of improvement is exponential."

Musk, though, is far from alone in his exceedingly skeptical (some might say bleakly alarmist) views. A year prior, the late physicist Stephen Hawking was similarly forthright when he told an audience in Portugal that AI's impact could be cataclysmic unless its rapid development is strictly and ethically controlled.

"Unless we learn how to prepare for, and avoid, the potential risks," he explained, "AI could be the worst event in the history of our civilization."

Considering the number and scope of unfathomably horrible events in world history, that's really saying something.

And in case we haven't driven home the point quite firmly enough, research fellow Stuart Armstrong from the Future of Life Institute has spoken of AI as an extinction risk were it to go rogue. Even nuclear war, he said, is on a different level destruction-wise because it would kill only a relatively small proportion of the planet. Ditto pandemics, even at their most virulent.

"If AI went bad, and 95 percent of humans were killed," he said, "then the remaining five percent would be extinguished soon after. So despite its uncertainty, it has certain features of very bad risks."

How, exactly, would AI arrive at such a perilous point? Cognitive scientist and author Gary Marcus offered some details in an illuminating 2013 New Yorker essay. The smarter machines become, he wrote, the more their goals could shift.

"Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called technological singularity or intelligence explosion, the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed."

As AI grows more sophisticated and ubiquitous, the voices warning against its current and future pitfalls grow louder. Whether it's the increasing automation of certain jobs, gender and racial bias stemming from outdated information sources or autonomous weapons that operate without human oversight (to name just a few), unease abounds on a number of fronts. And we're still in the very early stages.

The tech community has long debated the threats posed by artificial intelligence. Automation of jobs, the spread of fake news and a dangerous arms race of AI-powered weaponry have been cited as a few of the biggest dangers.

Destructive superintelligence, meaning artificial general intelligence that's created by humans and escapes our control to wreak havoc, is in a category of its own. It's also something that might or might not come to fruition (theories vary), so at this point it's less a risk than a hypothetical threat and an ever-looming source of existential dread.

Job automation is generally viewed as the most immediate concern. It's no longer a matter of whether AI will replace certain types of jobs, but to what degree. In many industries, particularly but not exclusively those whose workers perform predictable and repetitive tasks, disruption is well underway. According to a 2019 Brookings Institution study, 36 million people work in jobs with high exposure to automation, meaning that before long at least 70 percent of their tasks, ranging from retail sales and market analysis to hospitality and warehouse labor, will be done using AI. An even newer Brookings report concludes that white-collar jobs may actually be most at risk. And per a 2018 report from McKinsey & Company, the African American workforce will be hardest hit.

"The reason we have a low unemployment rate, which doesn't actually capture people that aren't looking for work, is largely that lower-wage service-sector jobs have been pretty robustly created by this economy," renowned futurist Martin Ford told Built In. "I don't think that's going to continue."

As AI robots become smarter and more dexterous, he added, the same tasks will require fewer humans. And while it's true that AI will create jobs, an as-yet-unspecified number of them, many will be inaccessible to less educationally advanced members of the displaced workforce.

"If you're flipping burgers at McDonald's and more automation comes in, is one of these new jobs going to be a good match for you?" Ford said. "Or is it likely that the new job requires lots of education or training or maybe even intrinsic talents, really strong interpersonal skills or creativity, that you might not have? Because those are the things that, at least so far, computers are not very good at."

John C. Havens, author of Heartificial Intelligence: Embracing Humanity and Maximizing Machines, calls bull on the theory that AI will create as many or more jobs than it replaces.

About four years ago, Havens said, he interviewed the head of a law firm about machine learning. The man wanted to hire more people, but he was also obliged to achieve a certain level of returns for his shareholders. A $200,000 piece of software, he discovered, could take the place of ten people drawing salaries of $100,000 each. That meant he'd save $800,000. The software would also increase productivity by 70 percent and eradicate roughly 95 percent of errors. "From a purely shareholder-centric, single bottom-line perspective," Havens said, "there is no legal reason that he shouldn't fire all the humans. Would he feel bad about it? Of course. But that's beside the point."
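
For a sense of the single-bottom-line calculus Havens describes, here is a minimal sketch that simply restates the anecdote's figures. The numbers come from the story above and are purely illustrative, not data from any real firm.

```python
# Back-of-the-envelope math from the anecdote above; the figures are the
# illustrative ones Havens relays, not real pricing or payroll data.
software_cost = 200_000            # one-time cost of the software
payroll_replaced = 10 * 100_000    # ten employees at $100,000 each

net_savings = payroll_replaced - software_cost
print(f"First-year savings: ${net_savings:,}")  # $800,000

# The anecdote also cites a claimed 70 percent productivity gain and a
# roughly 95 percent reduction in errors on top of the direct savings.
```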

Even professions that require graduate degrees and additional post-college training aren't immune to AI displacement. In fact, technology strategist Chris Messina said, some of them may well be decimated. AI is already having a significant impact on medicine. Law and accounting are next, Messina said, the former being poised for a massive shakeup.

"Think about the complexity of contracts, and really diving in and understanding what it takes to create a perfect deal structure," he said. "It's a lot of attorneys reading through a lot of information, hundreds or thousands of pages of data and documents. It's really easy to miss things. So AI that has the ability to comb through and comprehensively deliver the best possible contract for the outcome you're trying to achieve is probably going to replace a lot of corporate attorneys."

Accountants should also prepare for a big shift, Messina warned. Once AI is able to quickly comb through reams of data to make automatic decisions based on computational interpretations, human auditors may well be unnecessary.

While job loss is currently the most pressing issue related to AI disruption, it's merely one among many potential risks. In a February 2018 paper titled "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," 26 researchers from 14 institutions (academic, civil and industry) enumerated a host of other dangers that could cause serious harm, or, at minimum, sow minor chaos, in less than five years.

"Malicious use of AI," they wrote in their 100-page report, "could threaten digital security (e.g. through criminals training machines to hack or socially engineer victims at human or superhuman levels of performance), physical security (e.g. non-state actors weaponizing consumer drones), and political security (e.g. through privacy-eliminating surveillance, profiling, and repression, or through automated and targeted disinformation campaigns)."

In addition to its more existential threat, Ford is focused on the way AI will adversely affect privacy and security. A prime example, he said, is China's Orwellian use of facial recognition technology in offices, schools and other venues. But that's just one country. A whole ecosphere of companies specializes in similar tech and sells it around the world.

What we can so far only guess at is whether that tech will ever become normalized. As with the internet, where we blithely sacrifice our digital data at the altar of convenience, will round-the-clock, AI-analyzed monitoring someday seem like a fair trade-off for increased safety and security, despite its nefarious exploitation by bad actors?

"Authoritarian regimes use or are going to use it," Ford said. "The question is, how much does it invade Western countries, democracies, and what constraints do we put on it?"

AI will also give rise to hyper-real-seeming social media personalities that are very difficult to differentiate from real ones, Ford said. Deployed cheaply and at scale on Twitter, Facebook or Instagram, they could conceivably influence an election.

The same goes for so-called audio and video deepfakes, created by manipulating voices and likenesses. The latter is already making waves. But the former, Ford thinks, will prove immensely troublesome. Using machine learning, a subset of AI that's used in natural language processing, an audio clip of any given politician could be manipulated to make it seem as if that person spouted racist or sexist views when in fact they uttered nothing of the sort. If the clip's quality is high enough to fool the general public and avoid detection, Ford added, it could completely derail a political campaign.

And all it takes is one success.

From that point on, he noted, no one knows what's real and what's not. "So it really leads to a situation where you literally cannot believe your own eyes and ears; you can't rely on what, historically, we've considered to be the best possible evidence ... That's going to be a huge issue."

Lawmakers, though frequently less than tech-savvy, are acutely aware and pressing for solutions.

Widening socioeconomic inequality sparked by AI-driven job loss is another cause for concern. Along with education, work has long been a driver of social mobility. However, when it's a certain kind of work, the predictable, repetitive kind that's prone to AI takeover, research has shown that those who find themselves out in the cold are much less apt to get or seek retraining than those in higher-level positions who have more money. (Then again, not everyone believes that.)

Various forms of AI bias are detrimental, too. Speaking recently to the New York Times, Princeton computer science professor Olga Russakovsky said it goes well beyond gender and race. In addition to data and algorithmic bias (the latter of which can amplify the former), AI is developed by humans, and humans are inherently biased.

"A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities," Russakovsky said. "We're a fairly homogeneous population, so it's a challenge to think broadly about world issues."

In the same article, Google researcher Timnit Gebru said the root of bias is social rather than technological, and called scientists like herself "some of the most dangerous people in the world, because we have this illusion of objectivity." The scientific field, she noted, has to be situated in trying to understand the social dynamics of the world, because most of the radical change happens at the social level.

And technologists aren't alone in sounding the alarm about AI's potential socio-economic pitfalls. Along with journalists and political figures, Pope Francis is also speaking up, and he's not just whistling Sanctus. At a late-September Vatican meeting titled "The Common Good in the Digital Age," Francis warned that AI has the ability to circulate tendentious opinions and false data that could poison public debates and even manipulate the opinions of millions of people, to the point of endangering the very institutions that guarantee peaceful civil coexistence.

"If mankind's so-called technological progress were to become an enemy of the common good," he added, "this would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest."

A big part of the problem, Messina said, is the private sector's pursuit of profit above all else. "Because that's what they're supposed to do," he said. "And so they're not thinking of, 'What's the best thing here? What's going to have the best possible outcome?'"

"The mentality is, 'If we can do it, we should try it; let's see what happens,'" he added. "And if we can make money off it, we'll do a whole bunch of it. But that's not unique to technology. That's been happening forever."

Not everyone agrees with Musk that AI is more dangerous than nukes, including Ford. But what if AI decides to launch nukes, or, say, biological weapons, sans human intervention? Or what if an enemy manipulates data to return AI-guided missiles whence they came? Both are possibilities. And both would be disastrous. The more than 30,000 AI and robotics researchers and others who signed an open letter on the subject in 2015 certainly think so.

"The key question for humanity today is whether to start a global AI arms race or to prevent it from starting," they wrote. "If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people."

(The U.S. military's proposed budget for 2020 is $718 billion. Of that amount, nearly $1 billion would support AI and machine learning for things like logistics, intelligence analysis and, yes, weaponry.)

Earlier this year, a story in Vox detailed a frightening scenario involving the development of a sophisticated AI system with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the world's computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.

That's jarring, sure. But rest easy. In 2012 the Obama administration's Department of Defense issued a directive regarding "Autonomy in Weapon Systems" that included this line: "Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."

And in early November of this year, a Pentagon group called the Defense Innovation Board published ethical guidelines regarding the design and deployment of AI-enabled weapons. According to the Washington Post, however, the board's recommendations are in no way legally binding. It now falls to the Pentagon to determine how, and whether, to proceed with them.

Well, that's a relief. Or not.

Have you ever considered that algorithms could bring down our entire financial system? That's right, Wall Street. You might want to take notice. Algorithmic trading could be responsible for our next major financial crisis.

What is algorithmic trading? This type of trading occurs when a computer, unencumbered by the instincts or emotions that could cloud a human's judgment, executes trades based on pre-programmed instructions. These computers can make extremely high-volume, high-frequency and high-value trades that can lead to big losses and extreme market volatility. Algorithmic high-frequency trading (HFT) is proving to be a huge risk factor in our markets. HFT is essentially a computer placing thousands of trades at blistering speeds with the goal of selling a few seconds later for small profits. Thousands of these trades every second can add up to a pretty hefty chunk of change. The issue with HFT is that it doesn't take into account how interconnected the markets are, or the fact that human emotion and logic still play a massive role in our markets.
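
To make that buy-then-flip logic a little more concrete, here is a deliberately simplified Python sketch: a pre-programmed rule buys, lets the price tick, and sells whenever a tiny margin appears. The profit target, lot size, random-walk price model and loop count are all invented for illustration; this is a toy, not a real trading strategy or market model.

```python
import random

random.seed(42)

price = 100.0          # hypothetical starting share price
profit = 0.0
TARGET_MARGIN = 0.01   # assumed one-cent profit target per share
LOT_SIZE = 1_000       # assumed shares traded per round trip

# Each loop iteration is one buy-then-sell round trip; real HFT systems
# attempt thousands of these per second across many instruments.
for _ in range(10_000):
    buy_price = price
    price += random.gauss(0, 0.02)          # random walk stands in for market movement
    if price >= buy_price + TARGET_MARGIN:  # flip the position only if the tiny margin appears
        profit += (price - buy_price) * LOT_SIZE

print(f"Toy cumulative profit over 10,000 round trips: ${profit:,.2f}")
```

Even in this toy version, the point is volume: each individual trade earns pennies, but the approach only pays off because the pre-programmed rule fires thousands of times with no human in the loop.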

A sell-off of millions of shares in the airline market could potentially scare humans into selling off their shares in the hotel industry, which in turn could snowball into people selling off their shares in other travel-related companies, which could then affect logistics companies, food supply companies, and so on.
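
That kind of chain reaction can be sketched as a simple feedback loop: an initial sell-off nudges a price down, and each automated (or panicked) seller whose trigger level is breached sells, pushing the price lower and tripping the next. The thresholds and price moves below are invented purely to illustrate the dynamic.

```python
# Toy illustration of a cascading sell-off: an initial shock drops the price,
# and each seller whose trigger level is breached sells, pushing the price
# lower and tripping the next seller. All numbers are hypothetical.
price = 100.0
initial_shock = 2.0                             # stand-in for the initial airline-sector sell-off
sell_triggers = [99.0, 98.0, 97.0, 96.0, 95.0]  # assumed stop-loss-style trigger prices

price -= initial_shock
print(f"After initial sell-off: {price:.2f}")

for trigger in sorted(sell_triggers, reverse=True):
    if price <= trigger:               # this seller is tripped...
        price -= 1.5                   # ...and its selling pushes the price lower still
        print(f"Trigger at {trigger:.2f} hit, price now {price:.2f}")

# A two-point shock ends up costing far more than two points once the
# interconnected reactions pile on.
```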

Take the Flash Crash of May 2010 as an example. Towards the end of the trading day, the Dow Jones plunged 1,000 points (more than $1 trillion in value) before rebounding toward normal levels just 36 minutes later. What caused the crash? A London-based trader named Navinder Singh Sarao triggered it, and HFT computers then exacerbated it. Sarao reportedly used a spoofing algorithm that placed an order for thousands of stock index futures contracts betting that the market would fall. Instead of going through with the bet, Sarao planned to cancel the order at the last second and buy the lower-priced stocks being sold off in response to his original order. Other humans and HFT computers saw this $200 million bet and took it as a sign that the market was going to tank. In turn, HFT computers began one of the biggest stock sell-offs in history, causing a brief loss of more than $1 trillion globally.

Financial HFT algorithms aren't always correct, either. We view computers as the end-all, be-all when it comes to being correct, but AI is still only as smart as the humans who programmed it. In 2012, Knight Capital Group experienced a glitch that put it on the verge of bankruptcy. Knight's computers mistakenly streamed thousands of orders per second into the NYSE, causing mass chaos for the company. The HFT algorithms executed an astounding 4 million trades of 397 million shares in only 45 minutes. The volatility created by this computer error led to Knight losing $460 million overnight and having to be acquired by another firm. Errant algorithms obviously have massive implications for shareholders and the markets themselves, and nobody learned this lesson harder than Knight.

Many believe the only way to prevent or at least temper the most malicious AI from wreaking havoc is some sort of regulation.

"I am not normally an advocate of regulation and oversight (I think one should generally err on the side of minimizing those things), but this is a case where you have a very serious danger to the public," Musk said at SXSW.

"It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important."

Ford agrees, with a caveat: regulation of AI implementation is fine, he said, but not of the research itself.

"You regulate the way AI is used," he said, "but you don't hold back progress in basic technology. I think that would be wrong-headed and potentially dangerous."

That's because any country that lags in AI development is at a distinct disadvantage militarily, socially and economically. The solution, Ford continued, is selective application:

"We decide where we want AI and where we don't; where it's acceptable and where it's not. And different countries are going to make different choices. So China might have it everywhere, but that doesn't mean we can afford to fall behind them in the state of the art."

Speaking about autonomous weapons at Princeton University in October, American General John R. Allen emphasized the need for a robust international conversation that can embrace what this technology is. If necessary, he went on, there should also be a conversation about how best to control it, be that a treaty that fully bans AI weapons or one that permits only certain applications of the technology.

For Havens, safer AI starts and ends with humans. His chief focus, upon which he expounds in his 2016 book, is this: How will machines know what we value if we don't know ourselves? In creating AI tools, he said, it's vitally important to honor end-user values with a human-centric focus rather than fixating on short-term gains.

"Technology has been capable of helping us with tasks since humanity began," Havens wrote in Heartificial Intelligence. "But as a race we've never faced the strong possibility that machines may become smarter than we are or be imbued with consciousness. This technological pinnacle is an important distinction to recognize, both to elevate the quest to honor humanity and to best define how AI can evolve it. That's why we need to be aware of which tasks we want to train machines to do in an informed manner. This involves individual as well as societal choice."

AI researchers Fei-Fei Li and John Etchemendy, of Stanford University's Institute for Human-Centered Artificial Intelligence, feel likewise. In a recent blog post, they proposed involving people from an array of fields to make sure AI fulfills its huge potential and strengthens society instead of weakening it:

"Our future depends on the ability of social- and computer scientists to work side by side with people from multiple backgrounds, a significant shift from today's computer-science-centric model," they wrote. "The creators of AI must seek the insights, experiences and concerns of people across ethnicities, genders, cultures and socio-economic groups, as well as those from other fields, such as economics, law, medicine, philosophy, history, sociology, communications, human-computer interaction, psychology, and Science and Technology Studies (STS). This collaboration should run throughout an application's lifecycle, from the earliest stages of inception through to market introduction and as its usage scales."

Messina is somewhat idealistic about what should happen to help avoid AI chaos, though he's skeptical that it will actually come to pass. Government regulation, he said, isn't a given, especially in light of failures on that front in the social media sphere, whose technological complexities pale in comparison to those of AI. It will take a very strong effort on the part of major tech companies to slow progress in the name of greater sustainability and fewer unintended consequences, especially massively damaging ones.

"At the moment," he said, "I don't think the onus is there for that to happen."

As Messina sees things, it's going to take some sort of catalyst to arrive at that point, more specifically a catastrophic catalyst like war or economic collapse. Though whether such an event would prove big enough to actually effect meaningful long-term change is probably open for debate.

For his part, Ford remains a long-run optimist despite being very un-bullish on AI.

"I think we can talk about all these risks, and they're very real, but AI is also going to be the most important tool in our toolbox for solving the biggest challenges we face, including climate change."

When it comes to the near term, however, his doubts are more pronounced.

"We really need to be smarter," he said. "Over the next decade or two, I do worry about these challenges and our ability to adapt to them."
