"The deployment of full artificial intelligence could well mean the end of the human race." - Stephen Hawking
"He can know his heart, but he don't want to. Rightly so. Best not to look in there. It ain't the heart of a creature that is bound in the way that God has set for it. You can find meanness in the least of creatures, but when God made man the devil was at his elbow. A creature that can do anything. Make a machine. And a machine to make the machine. And evil that can run itself a thousand years, no need to tend it." - Cormac McCarthy, Blood Meridian: Or the Evening Redness in the West
Let me declare at the outset that this article has been tough to write. I am by birthright an American, an optimist and a true believer in our innovative genius and its power to drive better lives for us and the world around us. I've grown up in the mellow sunshine of Moore's Law, and lived first-hand in a world of unfettered innovation and creativity. That is why it is so difficult to write the following sentence:
It's time for federal regulation of AI and IoT technologies.
I say that reluctantly but with growing certainty. I have come to believe that we share a moral obligation to act now to protect our children and grandchildren. We need to take this moment, wake up, and listen to the voices warning us that the technologies powering the AI revolution are converging and advancing so rapidly that they pose a clear and present danger to our lives and well-being.
So this article is about why I have come to feel that way and why I think you should join me in that feeling. Obviously, this has financial implications. Since you are a tech investor, you almost certainly invested in one or more of the companies - like Nvidia (NASDAQ:NVDA), Google (NASDAQ:GOOG) (NASDAQ:GOOGL), and Baidu (NASDAQ:BIDU) - that are profiting from driving the breakneck advances we are seeing in AI base technologies and the myriad of embedded use cases that make the technology so seductive. Indeed, if we look at the entire tech industry ecosystem, from chips through applications and beyond them to their customers that are transforming their business through their use, we can hardly ignore the implications of this present circumstance.
So why? How did we get to this moment? Like me, you've probably been aware of the warnings of well-known luminaries like Elon Musk, Bill Gates, Stephen Hawking and many others, and, like me, you have probably noted their commentary but moved on to consider the next investment opportunity. Personally, being the optimist that I am, I certainly respected those arguments but believed even more strongly that we would innovate ourselves out of the danger zone. So why the change? Two words - one name - Bruce Schneier.
If you have been interested in the fields of cryptology and computer security, you have no doubt heard his name. Now with IBM (NYSE:IBM) as its chief spokesperson on security, he is a noted author and contributor to current thinking on the entire gamut of issues that confront us in this new era of the cloud, IoT, and Internet-based threats to personal privacy and computer system integrity. Mr. Schneier's seminal talk at the recent RSA conference brought it all into focus for me, and I encourage you to watch it. I will briefly recap his argument and then work out some of the consequences that flow from it. So here goes.
Schneier's case begins by identifying the problem - the rise of the cyber-physical system. He points out how our day-to-day reality is being subverted as IoT literally stands the world on its head, dematerializing and virtualizing our physical environment. What used to be dumb is now smart. Things that used to be discrete and disconnected are now networked and interconnected in subtle and powerful ways. This is the conceptual linkage that really connected the dots for me. As he puts it in his security blog:
We're building a world-size robot, and we don't even realize it. [...] The world-size robot is distributed. It doesn't have a singular body, and parts of it are controlled in different ways by different people. It doesn't have a central brain, and it has nothing even remotely resembling a consciousness. It doesn't have a single goal or focus. It's not even something we deliberately designed. It's something we have inadvertently built out of the everyday objects we live with and take for granted. It is the extension of our computers and networks into the real world. This world-size robot is actually more than the Internet of Things. [...] And while it's still not very smart, it'll get smarter. It'll get more powerful and more capable through all the interconnections we're building. It'll also get much more dangerous.
More powerful, indeed. It is at this point that AI and related technologies enter the equation, building a host of managers, agents, bots, natural language interfaces, and other facilities that let us leverage the immense scale and reach of our IoT devices - devices that, summed altogether, encompass our physical world and exert enormous power for good and, in the wrong hands, for evil.
Surely, we can manage this? Well, no, says Schneier - not the way we are going about it now. The problem, as he cogently points out, is that our business model for building software and systems is notoriously callous when it comes to security. Our "fail fast, fix fast," minimum-market-requirements-for-version-1 shipment protocol is famous for delivering product that comes with a "hack me first" invitation that is all too often accepted. So what's the difference, you may ask? We've been muddling along with this problem for years. We dig ourselves into trouble, we dig ourselves out. Fail fast, fix fast. Life goes on. Let's go make some money.
Or maybe it doesn't. The IoT phenomenon is leading us headlong into deploying literally billions of sensors embedded deep in our most personal physical surroundings, connecting us to system entities and actors, nefarious and benign, that now have access to intimate data about our lives. Bad as that is, it's not the worst thing. The same access gives these bad actors the potential to control the machines that provide life-sustaining services to us. It's one thing to have your credit card data hacked; it's entirely another to have a bad actor in control of, say, the power grid, an operating-theater robot, your car, or the engine of the airplane you're riding in. Our very lives depend on the integrity of these machines. Do we need to emphasize this point? Fail fast, fix fast does not belong in this world.
So if the prospect of a body-count stat on the after-action report from some future hack doesn't alarm you, how about this scenario: What if it wasn't a hack? What if it was an unforeseen interaction of otherwise benign AIs that we rely on to run the system in question? Can we be sure we fully understand the entire capability of an AI that is, say, balancing the second-to-second demands of the power grid?
One thing we can count on: the AI we are building now will be smarter and more capable tomorrow. How smart is the AI we're building? How good is it? Scary good. So let's let Musk answer the question: "[They'll be] smarter than us. They'll do everything better than us," he says. So what's the problem? You're not going to like the answer.
We won't know that the AI has a problem until the AI breaks, and even then we may not know why it broke. The intrinsic nature of the cognitive software we are building with deep neural nets is that any decision is the product of interactions among thousands, possibly millions, of lower-level decisions learned from the training data, and those decision criteria may well have already changed as feedback loops propagate learning upstream and down. The system very possibly can't tell us "why." Indeed, the smarter the AI, the less likely it may be able to answer the why question.
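To make that opacity concrete, here is a minimal, hypothetical sketch - a toy stand-in, not any production system - of why a deep net's answer to "why" is elusive: the output is a composition of hundreds of learned weights, none of which maps to a human-readable reason.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny stand-in for a deep network: three dense layers of "learned" weights.
# A real system has millions of such parameters, updated continuously.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 16))
W3 = rng.normal(size=16)

def decide(sensor_readings):
    """Return a yes/no decision from raw inputs via opaque intermediate layers."""
    h1 = np.tanh(sensor_readings @ W1)   # first hidden layer
    h2 = np.tanh(h1 @ W2)                # second hidden layer
    return float(h2 @ W3) > 0.0          # final decision threshold

x = rng.normal(size=8)  # e.g., eight grid-load sensor readings
print(decide(x))        # True or False - but "why"?
# The "reason" is distributed across 8*16 + 16*16 + 16 = 400 weights;
# no single weight corresponds to an articulable rule.
```

The point of the sketch is that even with full access to every weight, there is no single term you can point to as the reason for the decision.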
Hard as it is, we really need to understand the scale of the systems we are building. Think about autonomous cars as one rather small example. Worldwide, the industry built 88 million cars and light trucks in 2016, plus another 26 million medium and heavy trucks. Sometime in the 2025-to-2030 time frame, all of them will be autonomous. With the rise of the driving-as-a-service model, there may not be as many new vehicles produced, but the numbers will still be huge, and fleet sizes will grow every year as older, human-driven vehicles are replaced. What are the odds that the AI that runs these vehicles performs flawlessly? Can we expect perfection? Our very lives depend on it. God forbid a successful hack into this platform!
Beyond that, what if even perfection can kill us? Ultimately, these machines may require our guidance to make moral decisions. Consider: You and your spouse are in a car in the center lane of a three-lane freeway, traveling at the 70 mph speed limit. A motorcyclist is directly to your left; to your right, a family of five rides in an autonomous minivan. Enter a drunk, driving an old pickup the wrong way at high speed, weaving through the three lanes directly into your path. Should your car evade to the left lane and risk the life of the motorcyclist? One would hope our vehicle wouldn't move right and put the family of five at risk. Should it be programmed to follow a "first, do no harm" policy that avoids a swerve into either lane, simply brakes as hard as possible in the center lane, and hopes for the best?
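To see what "programming a policy" means in software terms, here is a hypothetical decision rule. The option names and risk flags are invented for illustration and come from no real autonomous-driving stack; the point is that someone has to encode a moral choice as an explicit rule.

```python
# Hypothetical evasion options for the scenario above. The risk flags are
# invented for illustration - a real system would estimate them from sensors.
OPTIONS = {
    "swerve_left":  {"endangers_others": True},   # motorcyclist in left lane
    "swerve_right": {"endangers_others": True},   # family of five in right lane
    "brake_hard":   {"endangers_others": False},  # stay in lane, maximum braking
}

def first_do_no_harm(options):
    """Prefer any action that shifts no risk onto bystanders; brake as fallback."""
    safe = [name for name, opt in options.items() if not opt["endangers_others"]]
    return safe[0] if safe else "brake_hard"

print(first_do_no_harm(OPTIONS))  # -> brake_hard
```

Note that the rule itself is trivial; the hard question the article raises is who gets to write it - the developer, the owner, or a regulator.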
Whatever the scenario, the AIs we develop and deploy, however rich and deep the learning data they have been exposed to, will confront situations they haven't encountered before. In the dire example above, and in more mundane conundrums, who ultimately sets the policy that must be adhered to? Should it be the developer? The user (in cases where this is practical)? Or should we have a common policy binding on all? To be sure, almost any reasonable policy in our driving scenario above will likely save lives and perform better than a human driver. Even so, in vehicles, airplanes, SCADA systems, chemical plants, and the myriad other AIs inhabiting devices that operate in innately hazardous regimes, will it be sufficient to let their in extremis actions be opaque and unknowable? Surely not - but will the AI as developed always give us the control to change it?
Finally, we must consider a factor that is certainly related to scale but is uniquely and qualitatively different - the network. How freely and ubiquitously should these AIs interconnect? Taken on its face, the decision seems to have been made. The very term, Internet of Things, seems to imply an interconnection policy that is as freewheeling and chaotic as our Internet of people. Is this what We, the People want? Should some AIs - say our nuclear reactors or more generally our SCADA systems - operate with limited or no network connection? Seems likely, but how much further should we go? Who makes the decision?
Beyond such basic questions come the larger issues brought on by the reality of network power. Let's consider the issue of learning, and add to it the power of vast network scale in our new cyber-physical world. The word "learning" seems so simple, so innocuous. How could learning be a bad thing? AI-powered IoT systems must be connected to deliver the value we need from them. Our autonomous vehicles, terrestrial and airborne, for example, will be in constant communication with nearby traffic, improving our safety by step-function leaps.
So how does the fleet learn? Let's take the example from above. Whatever the outcome, the incident forensics will be sent to the cloud, where developers will presumably incorporate the new data into the master learning set. How will the new master be tested? For how long? How rigorously? What will the re-deployment model be? Will the new, improved version of the AI be proprietary and not shared with other vehicle manufacturers, leaving their customers at a safety disadvantage? These are questions that demand government purview.
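Those questions map onto concrete stages in a pipeline. A hypothetical sketch of the fleet-learning loop the paragraph describes - every function name, threshold, and step here is invented for illustration, and each stage is a point a regulator could scrutinize:

```python
# Hypothetical fleet-learning loop. Names, thresholds, and steps are
# illustrative only, not any manufacturer's actual process.

def train(dataset):                       # placeholder for a real training job
    return {"trained_on": len(dataset)}

def validate(model, required_pass_rate):  # placeholder for a real test harness
    return True

def deploy(model, fleets):                # placeholder for a staged rollout
    pass

def fleet_learning_cycle(master_dataset, incident_report):
    # 1. Incident forensics are uploaded and merged into the master set.
    updated = master_dataset + [incident_report]

    # 2. Retrain - for how long, and against what benchmark?
    model = train(updated)

    # 3. Validate - how rigorously, and who certifies the result?
    if not validate(model, required_pass_rate=0.999):
        raise RuntimeError("candidate model failed validation; not deployed")

    # 4. Redeploy - to one manufacturer's fleet, or shared with all?
    deploy(model, fleets=["all"])
    return updated

print(fleet_learning_cycle([], {"type": "wrong-way vehicle"}))
```

Each numbered comment corresponds to one of the open questions above: who sets the pass rate, who audits the test, and whether step 4 is proprietary or shared.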
Certainly, there is no consensus here regarding the threat of AI. Andrew Ng of Baidu/Stanford disagrees that AI will be a threat to us in the foreseeable future. So does Mark Zuckerberg. But these disagreements concern only the overt existential threat - i.e., that a future AI may kill us. More broadly, there is very little disagreement that our AI/IoT-powered future poses broad economic and sociopolitical issues that could rip our societies apart. What issues? How about the massive loss of jobs and livelihoods for perhaps the majority of our population over the next 20 years? As is nicely summarized in this recent NY Times article, AI will almost certainly exacerbate the already difficult problem we have with income disparities. Beyond that, the global consequences of the AI revolution could create a dangerous dependency dynamic for countries other than the US and China that do not own AI IP.
We could go on and on, but hopefully the issue is clear. Through the development and deployment of increasingly capable AI-powered IoT systems, we are embarking on a voyage into an exciting but dangerous future state that we can barely imagine from our current vantage point. Now is the time to step back and assess where we are and what we need to do going forward. Schneier's prescription is that the tech industry must get in front of this issue and drive a workable consensus among industry stakeholders, governmental authorities, and regulatory bodies about the problem, its causes and potential effects, and, most importantly, a reasonable solution that protects the public while leaving the industry room to innovate and build.
There is no turning back, but we owe it to ourselves and our posterity to do our utmost to get it right. As technologists we are inherently self-interested in protecting and nurturing the opportunity we all have in this exciting new realm. This is natural and understandable. Our singular focus on agility and innovation has brought the world many benefits and will bring many more. But we are not alone and it would be completely irresponsible to insist that we are the only stakeholder in the outcomes we are engineering.
This decision - to engage and attempt to manage the design of the new and evolving regulatory regime - has enormous implications. There is undoubtedly risk. Poor or heavy-handed regulation could well exact a tremendous opportunity cost. One could well imagine a world in which Nvidia's GPU business is severely affected by regulatory inspection and delay, for example. But that is the very reason we need to engage now. The economic leverage that AI provides in every sector of our economy leads us inescapably to economic and wealth-building scenarios beyond anything the world has seen before. As participants and investors, we must do what we can to protect this opportunity to build unprecedented levels of wealth for our country and ourselves. Schneier argues that we are best serving our self-interest by engaging government now rather than burying our heads in the sand waiting for the inevitable backlash that will come when (not if!) these massive systems fail catastrophically in the future.
Schneier has got the right idea. We need to broaden the conversation, lead the search for solutions, and communicate the message to the many non-tech constituencies - including all levels of government - that there is an exciting future ahead but that future must include appropriate regulations that protect the American people and indeed the entire human race.
We won't get a second chance to get this right.
Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.
I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
See the original post here:
The 'Skynet' Gambit - AI At The Brink - Seeking Alpha