The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard de Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Superintelligence
Tech Leaders Raise Concern About the Dangers of AI – iDrop News
Posted: March 1, 2017 at 9:24 pm
In the midst of great strides being made in artificial intelligence, there's a growing group of people who have expressed concern about the potential repercussions of AI technology.
Members of that group include Tesla and SpaceX CEO Elon Musk, theoretical physicist and cosmologist Stephen Hawking, and Microsoft co-founder Bill Gates. "I am in the camp that is concerned about super intelligence," Gates said in a 2015 Reddit AMA, adding that he doesn't understand why some people are not concerned. Additionally, Gates has even proposed taxing robots that take jobs away from human workers.
Musk, for his part, was a bit more dramatic in painting AI as a potential existential threat to humanity: "We need to be super careful with AI. Potentially more dangerous than nukes," Musk tweeted, adding that Superintelligence: Paths, Dangers, Strategies by philosopher Nick Bostrom was worth reading.
Hawking was similarly foreboding in an interview with the BBC, stating that he thinks "the development of full artificial intelligence could spell the end of the human race." Specifically, he said that advanced AI could become self-reliant and redesign itself at an ever-increasing rate. Human beings, limited by biological evolution, wouldn't be able to keep up, he added.
Indeed, advances in artificial intelligence, once seen as something purely in the realm of science fiction, now look more like an inevitability than a possibility. Tech companies everywhere are seemingly in a race to develop more advanced artificial intelligence and machine learning systems. Apple, for example, is reportedly doubling down on its Seattle-based AI research hub, and also recently joined the Partnership on AI, a research group dominated by other tech giants such as Amazon, Facebook and Google.
Like every advance in technology, AI has the potential to make amazing things possible and our lives easier. But ever since humanity first began exploring the concept of advanced machine learning, the idea has also been closely linked to the trope of AI being a potential threat or menace. SkyNet from the Terminator series comes to mind. Even less apocalyptic fiction, like 2001: A Space Odyssey, paints AI as something potentially dangerous.
As Forbes contributor R.L. Adams writes, there's little that could be done to stop a malevolent AI once it's unleashed. True autonomy, as he points out, is like free will, and someone, man or machine, will eventually have to determine right from wrong. Perhaps even more worryingly, Adams also brings up the fact that AI could even be weaponized to wreak untold havoc.
But even without resorting to fear-mongering, it might be smart to at least be concerned. If some of the greatest minds in tech are worried about AI's potential as a threat, then why aren't the rest of us? The development of advanced artificial intelligences definitely brings about some complicated moral and philosophical issues, even beyond humanity's eventual end. In any case, whether or not AI will cause humankind's extinction, it doesn't seem likely that humanity's endeavors in the area will slow down anytime soon.
Read more:
Tech Leaders Raise Concern About the Dangers of AI - iDrop News
Superintelligent AI explains Softbank’s push to raise a $100BN Vision Fund – TechCrunch
Posted: February 28, 2017 at 8:17 pm
Anyone who's seen Softbank CEO Masayoshi Son give a keynote speech will know he rarely sticks to the standard industry conference playbook.
And his turn on the stage at Mobile World Congress this morning was no different, with Son making like Eldon Tyrell and telling delegates about his personal belief in a looming computing Singularity that he's convinced will see superintelligent robots arriving en masse within the next 30 years, surpassing the human population in number and brainpower.
"I totally believe this concept," he said of the Singularity. "In next 30 years this will become a reality."
"If superintelligence goes inside the moving device then the world, our lifestyle, dramatically changes," he continued, pointing out that autonomous vehicles containing a superintelligent AI would become smart robots.
"There will be many kinds. Flying, swimming, big, micro, run, two legs, four legs, 100 legs," he added, further fleshing out his vision of a robot-infested future.
Son said his personal conviction in the looming rise of billions of superintelligent robots both explains his acquisition of UK chipmaker ARM last year and his subsequent plan to establish the world's biggest VC fund.
"I truly believe it's coming, that's why I'm in a hurry to aggregate the cash, to invest," he noted.
Son's intent to raise $100BN for a new fund, called the Softbank Vision Fund, was announced last October, getting early backing from Saudi Arabia's public investment fund as one of the partners.
The fund has since pulled in additional contributors including Foxconn, Apple, Qualcomm and Oracle co-founder Larry Ellison's family office.
But it has evidently not yet hit Son's target of $100BN, as he used his MWC keynote as a sales pitch for additional partners. "I'm looking for a partner because we alone cannot do it," he told delegates, smiling and opening his arms in a wide gesture of appeal. "We have to do it quickly and here are all kinds of candidates for my partner."
Son said his haste is partly down to a belief that superintelligent AIs can be used for the goodness of humanity, going on to suggest that only AI has the potential to address some of the greatest threats to humankind's continued existence, be it climate change or nuclear annihilation.
Though he also said it's important to consider whether such a technology will be good or bad.
"It will be so much more capable than us - what will be our job? What will be our life? We have to ask philosophical questions," he said. "Is it good or bad?"
"I think this superintelligence is going to be our partner. If we misuse it it's a risk. If we use it in good spirits it will be our partner for a better life. So the future can be better predicted, people will live healthier, and so on," he added.
Given this vision of billions of superintelligent connected devices fast coming down the pipe, Son is unsurprisingly very concerned about security. He said he discusses this weekly with ARM engineers, and described how one of his engineers had played a game to see how many security cameras he could hack during a lunchtime while waiting for his wife. The result? 1.2M cameras potentially compromised during an idle half hour or so.
"This is how it is dangerous, this is how we should start thinking of protection of ourself," said Son. "We have to be very very careful."
"We are shipping a lot of ARM chips but in the past those were not secure. We are enhancing very quickly the security. We need to secure all of the things in our society."
Son also risked something of a Gerald Ratner moment when he said that all the chips ARM is currently shipping for use in connected cars are not, in fact, secure, going so far as to show a video of a connected car being hacked and the driver being unable to control the brakes or steering.
"There are 500 ARM chips [in one car] today and none of them are secure today btw!" said Son. (Though clearly he's working hard with his team at ARM to change that.)
He also discussed a plan to launch 800 satellites in the next three years, positioned in a nearer Earth orbit to reduce latency and support faster connectivity, as part of a plan to help plug connectivity gaps for connected cars, describing the planned configuration of satellites as "like a cell tower" and "like fiber coming straight to the Earth from space."
"We're going to provide connectivity to billions of drivers from the satellites," he said.
For carriers hungry for their next billions of subscribers as smartphone markets saturate across the world, Son painted a picture of vast subscriber growth via the proliferation of connected objects, which handily of course also helps his bottom line as the new parent of ARM.
"If I say number of subscribers will not grow it's not true," he told the conference. "Smartphones no, but IoT chips will grow to a trillion chips so we will have 1TR subscribers in the next 20 years. And they will all be smart."
"One of the chips in our shoes in the next 30 years will be smarter than our brain. We will be less than our shoes! And we are stepping on them!" he joked. "It's an interesting society that comes."
"All of the cities, social ecosystem infrastructure will be connected," he added. "All those things will be connected. All connected securely and managed from the cloud."
Here is the original post:
Superintelligent AI explains Softbank's push to raise a $100BN Vision Fund - TechCrunch
Building A ‘Collective Superintelligence’ For Doctors And Patients Around The World – Forbes
Posted: at 6:31 am
One thing about The Human Diagnosis Project: it's not thinking small. Its goal is to build an open diagnostic system for patients, doctors and caregivers using clinical experience and other information contributed by physicians and researchers around ...
Originally posted here:
Building A 'Collective Superintelligence' For Doctors And Patients Around The World - Forbes
Don't Fear Superintelligent AI – CCT News
Posted: February 26, 2017 at 11:29 pm
We have all had founded and unfounded fears when we were growing up. On the other hand, more often than not we have been in denial of accepting the limits of our bodies and our minds. According to Grady Booch, the art and science of computing have come a long way into the lives of human beings. There are millions of devices that carry hundreds of pages of data streams.
However, having been a systems engineer, Booch points to the possibility of building a system that can converse with humans in natural language. He further argues that there are systems that can also set goals, or better still, execute the plans set against those goals.
Booch has been there, done it and experienced it. Every sort of technology creates some apprehension. Take, for example, when telephones were introduced: there was a feeling that they would destroy all civil conversation. The written word was once feared as invasive, lest people lose their ability to remember.
However, there is still artificial intelligence that we ought to think about, given that many people will tend to trust it more than a human being. Many are the times that we have forgotten that these systems require substantial training. But how many people will run away from this, citing fear that the training of systems will threaten humanity?
Booch advises that worrying about the rise of superintelligence is dangerous. What we fail to understand is that the rise of computing also brings an increase in societal issues, which we must attend to. Remember, the AIs we build are neither for controlling the weather nor for directing the tides. Hence there is no competition with human economies.
Nonetheless, it is important to engage with computing because it will help us advance our human experience. Otherwise, it will not be long before AI takes dominion over human beings' brilliant minds.
Elon Musk – 2 Things Humans Need to Do to Have a Good Future – Big Think
Posted: at 11:29 pm
A fascinating conference on artificial intelligence was recently hosted by the Future of Life Institute, an organization aimed at promoting optimistic visions of the future while anticipating existential risks from artificial intelligence and other directions.
The conference Superintelligence: Science or Fiction? featured a panel of Elon Musk from Tesla Motors and SpaceX, futurist Ray Kurzweil, Demis Hassabis of Google's DeepMind, neuroscientist and author Sam Harris, philosopher Nick Bostrom, philosopher and cognitive scientist David Chalmers, Skype co-founder Jaan Tallinn, as well as computer scientists Stuart Russell and Bart Selman. The discussion was led by MIT cosmologist Max Tegmark.
The conference participants offered a number of prognostications and warnings about the coming superintelligence, an artificial intelligence that will far surpass the brightest human.
Most agreed that such an AI (or AGI, for Artificial General Intelligence) will come into existence. It is just a matter of when. The predictions ranged from days to years, with Elon Musk saying that one day an AI will reach "a threshold where it's as smart as the smartest most inventive human," which it will then surpass in a matter of days, becoming smarter than all of humanity.
Ray Kurzweil's view is that however long it takes, AI will be here before we know it:
"Every time there is an advance in AI, we dismiss it as 'oh, well that's not really AI:' chess, go, self-driving cars. AI, as you know, is the field of things we haven't done yet. That will continue when we actually reach AGI. There will be lots of controversy. By the time the controversy settles down, we will realize that it's been around for a few years," says Kurzweil [5:00].
Neuroscientist and author Sam Harris acknowledges that his perspective comes from outside the AI field, but sees that there are valid concerns about how to control AI. He thinks that people don't really take the potential issues with AI seriously yet. Many think it's something that is not going to affect them in their lifetime - what he calls the illusion that the time horizon matters.
"If you feel that this is 50 or a 100 years away that is totally consoling, but there is an implicit assumption there, the assumption is that you know how long it will take to build this safely. And that 50 or a 100 years is enough time," he says [16:25].
On the other hand, Harris points out that at stake here is how much intelligence humans actually need. If we had more intelligence, would we not be able to solve more of our problems, like cancer? In fact, if AI helped us get rid of diseases, then humanity is currently in pain from not having enough intelligence.
Elon Musk's point of view is to be looking for the best possible future - the "good future," as he calls it. He thinks we are headed either for superintelligence or the end of civilization, and it's up to us to envision the world we want to live in.
"We have to figure out, what is a world that we would like to be in where there is this digital superintelligence?" says Musk [at 33:15].
He also brings up an interesting perspective that we are already cyborgs because we utilize machine extensions of ourselves like phones and computers.
Musk expands on his vision of the future by saying it will require two things - solving the machine-brain bandwidth constraint and democratization of AI. If these are achieved, the future will be good according to the SpaceX and Tesla Motors magnate [51:30].
By the bandwidth constraint, he means that as we become more cyborg-like, in order for humans to achieve a true symbiosis with machines, they need a high-bandwidth neural interface to the cortex so that the digital tertiary layer would send and receive information quickly.
At the same time, it's important for the AI to be available equally to everyone, or a smaller group with such powers could become dictators.
He brings up an illuminating quote about how he sees the future going:
"There was a great quote by Lord Acton which is that 'freedom consists of the distribution of power and despotism in its concentration.' And I think as long as we have - as long as AI powers, like anyone can get it if they want it, and we've got something faster than meat sticks to communicate with, then I think the future will be good," says Musk [51:47].
You can see the whole great conversation here:
View post:
Elon Musk - 2 Things Humans Need to Do to Have a Good Future - Big Think
Artificial Intelligence Is Not a Threat – Yet – Scientific American
Posted: February 14, 2017 at 11:37 am
In 2014 SpaceX CEO Elon Musk tweeted: "Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes." That same year University of Cambridge cosmologist Stephen Hawking told the BBC: "The development of full artificial intelligence could spell the end of the human race." Microsoft co-founder Bill Gates also cautioned: "I am in the camp that is concerned about super intelligence."
How the AI apocalypse might unfold was outlined by computer scientist Eliezer Yudkowsky in a paper in the 2008 book Global Catastrophic Risks: "How likely is it that AI will cross the entire vast gap from amoeba to village idiot, and then stop at the level of human genius?" His answer: "It would be physically possible to build a brain that computed a million times as fast as a human brain.... If a human mind were thus accelerated, a subjective year of thinking would be accomplished for every 31 physical seconds in the outside world, and a millennium would fly by in eight-and-a-half hours." Yudkowsky thinks that if we don't get on top of this now it will be too late: "The AI runs on a different timescale than you do; by the time your neurons finish thinking the words 'I should do something' you have already lost."
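The arithmetic behind that quote is easy to check. Here is a minimal sketch in Python; the million-fold speedup is taken from the quote above, and everything else is ordinary unit conversion.

# Rough check of the subjective-time arithmetic in Yudkowsky's quote.
# Assumption: the machine thinks 1,000,000 times faster than a human.
SPEEDUP = 1_000_000
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # about 31.6 million seconds

subjective_year = SECONDS_PER_YEAR / SPEEDUP            # wall-clock seconds per subjective year
subjective_millennium = 1000 * subjective_year / 3600   # wall-clock hours per subjective millennium

print(f"One subjective year passes in {subjective_year:.1f} seconds")          # ~31.6 s
print(f"A subjective millennium passes in {subjective_millennium:.1f} hours")  # ~8.8 h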
The paradigmatic example is University of Oxford philosopher Nick Bostrom's thought experiment of the so-called paperclip maximizer presented in his Superintelligence book: An AI is designed to make paperclips, and after running through its initial supply of raw materials, it utilizes any available atoms that happen to be within its reach, including humans. As he described in a 2003 paper, from there it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities. Before long, the entire universe is made up of paperclips and paperclip makers.
I'm skeptical. First, all such doomsday scenarios involve a long sequence of if-then contingencies, a failure of which at any point would negate the apocalypse. University of the West of England, Bristol professor of electrical engineering Alan Winfield put it this way in a 2014 article: "If we succeed in building human equivalent AI and if that AI acquires a full understanding of how it works, and if it then succeeds in improving itself to produce super-intelligent AI, and if that super-AI, accidentally or maliciously, starts to consume resources, and if we fail to pull the plug, then, yes, we may well have a problem. The risk, while not impossible, is improbable."
Second, the development of AI has been much slower than predicted, allowing time to build in checks at each stage. As Google executive chairman Eric Schmidt said in response to Musk and Hawking: "Don't you think humans would notice this happening? And don't you think humans would then go about turning these computers off?" Google's own DeepMind has developed the concept of an AI off switch, playfully described as a "big red button" to be pushed in the event of an attempted AI takeover. As Baidu vice president Andrew Ng put it (in a jab at Musk), it would be "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."
Third, AI doomsday scenarios are often predicated on a false analogy between natural intelligence and artificial intelligence. As Harvard University experimental psychologist Steven Pinker elucidated in his answer to the 2015 Edge.org Annual Question "What Do You Think about Machines That Think?": "AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world." It is equally possible, Pinker suggests, that artificial intelligence will naturally develop along female lines: "fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization."
Fourth, the implication that computers will "want" to do something (like convert the world into paperclips) means AI has emotions, but as science writer Michael Chorost notes, "the minute an A.I. wants anything, it will live in a universe with rewards and punishments - including punishments from us for behaving badly."
Given the zero percent historical success rate of apocalyptic predictions, coupled with the incrementally gradual development of AI over the decades, we have plenty of time to build in fail-safe systems to prevent any such AI apocalypse.
See the rest here:
Artificial Intelligence Is Not a Threat – Yet - Scientific American
Another Expert Joins Stephen Hawking and Elon Musk in Warning About the Dangers of AI – Futurism
Posted: at 11:37 am
In 2012, Michael Vassar became the chief science officer of MetaMed Research, which he co-founded, and prior to that, he served as the president of the Machine Intelligence Research Institute. Clearly, he knows a thing or two about artificial intelligence (AI), and now, he has come out with a stark warning for humanity when it comes to the development of artificial super-intelligence.
In a video posted by Big Think, Vassar states, "If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order." Essentially, he is warning that an unchecked AI could eradicate humanity in the future.
Vassar's views are based on the writings of Nick Bostrom, most specifically, those found in his book Superintelligence. Bostrom's ideas have been around for decades, but they are only now gaining traction given his association with prestigious institutions. Vassar sees this lack of early attention, and not AI itself, as the biggest threat to humanity. He argues that we need to find a way to promote analytically sound discoveries from those who lack the prestige currently necessary for ideas to be heard.
Many tech giants have spoken extensively about their fears regarding the development of AI. Elon Musk believes that an AI attack on the internet is only a matter of time. Meanwhile, Stephen Hawking cites the creation of AI as the best or worst thing to happen to humanity.
Bryan Johnson's company Kernel is currently working on a neuroprosthesis that can mimic, repair, and improve human cognition. If it comes to fruition, that tech could be a solid defense against the worst-case scenario of AI going completely rogue. If we are able to upgrade our brains to a level equal to that expected of AI, we may be able to at least stay on par with the machines.
Excerpt from:
Another Expert Joins Stephen Hawking and Elon Musk in Warning About the Dangers of AI - Futurism
Simulation hypothesis: The smart person’s guide – TechRepublic
Posted: February 11, 2017 at 8:38 am
The simulation hypothesis is the idea that reality is a digital simulation. Technological advances will inevitably produce automated artificial superintelligence that will, in turn, create simulations to better understand the universe. This opens the door for the idea that superintelligence already exists and created simulations now occupied by humans. At first blush the notion that reality is pure simulacra seems preposterous, but the hypothesis springs from decades of scientific research and is taken seriously by academics, scientists, and entrepreneurs like Stephen Hawking and Elon Musk.
From Plato's allegory of the cave to The Matrix, ideas about simulated reality can be found scattered through history and literature. The modern manifestation of the simulation argument postulates that, like Moore's Law, computing power becomes exponentially more robust over time. Barring a disaster that resets technological progression, experts speculate that it is inevitable computing capacity will one day be powerful enough to generate realistic simulations.
TechRepublic's smart person's guide is a routinely updated "living" precis loaded with up-to-date information about how the simulation hypothesis works, who it affects, and why it's important.
The simulation hypothesis advances the idea that simulations might be the inevitable outcome of technological evolution. Though ideas about simulated reality are far from new and novel, the contemporary theory springs from research conducted by Oxford University professor of philosophy Nick Bostrom.
In 2003 Bostrom presented a paper that proposed a trilemma, a decision between three challenging options, related to the potential of future superintelligence to develop simulations. Bostrom argues this likelihood is nonzero: the odds of a simulated reality may be astronomically small, but because they are not zero we must consider rational possibilities that include a simulated reality. Bostrom does not propose that humans occupy a simulation. Rather, he argues that the massive computational ability developed by posthuman superintelligence will likely be used to develop simulations to better understand the nature of reality.
In his book Superintelligence, using anthropic rhetoric, Bostrom argues that the odds of a population with human-like experiences advancing to superintelligence are "very close to zero," or (with an emphasis on the word or) the odds that a superintelligence would desire to create simulations are also "very close to zero," or the odds that people with human-like experiences actually live in a simulation are "very close to one." He concludes by arguing that if the claim "very close to one" is the correct answer and most people do live in simulations, then the odds are good that we too exist in a simulation.
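For readers who want the trilemma in symbols: Bostrom's 2003 paper frames it around the fraction of observers with human-type experiences who live in simulations. The formula below is a reconstruction from that paper rather than a quotation from this article, so treat the notation as approximate.

f_{\mathrm{sim}} = \frac{f_P \, f_I \, \bar{N}}{f_P \, f_I \, \bar{N} + 1}

Here f_P is the fraction of human-level civilizations that reach a posthuman stage, f_I the fraction of posthuman civilizations interested in running ancestor simulations, and N-bar the average number of such simulations each interested civilization runs. If the product f_P f_I N-bar is large, f_sim sits very close to one, which is why at least one of the three "very close to" claims above must hold.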
Simulation hypothesis has many critics, namely those in academic communities who question an overreliance on anthropic reasoning and scientific detractors who point out simulations need not be conscious to be studied by future superintelligence. But as artificial intelligence and machine learning emerge as powerful business and cultural trends, many of Bostrom's ideas are going mainstream.
It's natural to wonder if the simulation hypothesis has real-world applications, or if it's a fun but purely abstract consideration. For business and culture, the answer is unambiguous: It doesn't matter if we live in a simulation or not. The accelerating pace of automated technology will have a significant impact on business, politics, and culture in the near future.
The simulation hypothesis is coupled inherently with technological evolution and the development of superintelligence. While superintelligence remains speculative, investments in narrow and artificial general intelligence are significant. Using the space race as an analogue, advances in artificial intelligence create technological innovations that build, destroy, and augment industry. IBM is betting big with Watson and anticipates a rapidly emerging $2 trillion market for cognitive products. Cybersecurity experts are investing heavily in AI and automation to fend off malware and hackers. In a 2016 interview with TechRepublic, United Nations chief technology diplomat, Atefeh Riazi, anticipated the economic impact of AI to be profound and referred to the technology as "humanity's final innovation."
Though long-term prognostication about the impact of automated technology is ill-advised, in the short term advances in machine learning, automation, and artificial intelligence represent a paradigm shift akin to the development of the internet or the modern mobile phone. In other words, the post-automation economy will be dramatically different. AI will hammer manufacturing industries, logistics and distribution will lean heavily on self-driving cars, ships, drones, and aircraft, and financial services jobs that require pattern recognition will evaporate.
Conversely, automation could create demand for inherently interpersonal skills like HR, sales, manual labor, retail, and creative work. "Digital technologies are in many ways complements, not substitutes for, creativity," Erik Brynjolfsson said, in an interview with TechRepublic. "If somebody comes up with a new song, a video, or piece of software there's no better time in history to be a creative person who wants to reach not just hundreds or thousands, but millions and billions of potential customers."
The golden age of artificial intelligence began in 1956 at the Ivy League research institution Dartmouth College with the now-infamous proclamation, "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." The conference established AI and computational protocols that defined a generation of research. The conference was preceded and inspired by developments at the University of Manchester in 1951 that produced a program that could play checkers, and another program that could play chess.
Though excited researchers anticipated the speedy emergence of human-level machine intelligence, programming intelligence unironically proved to be a steep challenge. By the mid-1970s the field entered the so-called "first AI winter." The era was marked by the development of strong theories limited by insufficient computing power.
Spring follows winter, and by the 1980s AI and automation technology grew from the sunshine of faster hardware and the boom of consumer technology markets. By the end of the century parallel processing (the ability to perform multiple computations at one time) emerged. In 1997 IBM's Deep Blue defeated human chess player Garry Kasparov. Last year Google's DeepMind defeated a human at Go, and this year an AI easily beat four of the best human poker players.
Driven and funded by research and academic institutions, governments, and the private sector these benchmarks indicate a rapidly accelerating automation and machine learning market. Major industries like financial services, healthcare, sports, travel, and transportation are all deeply invested in artificial intelligence. Facebook, Google, and Amazon are using AI innovation for consumer applications, and a number of companies are in a race to build and deploy artificial general intelligence.
Some AI forecasters like Ray Kurzweil predict a future with the human brain cheerily connected to the cloud. Other AI researchers aren't so optimistic. Bostrom and his colleagues in particular warn that creating artificial general intelligence could produce an existential threat.
Among the many terrifying dangers of superintelligence (ranging from out-of-control killer robots to economic collapse), the primary threat of AI is the coupling of anthropomorphism with the misalignment of AI goals. Meaning, humans are likely to imbue intelligent machines with human characteristics like empathy. An intelligent machine, however, might be programmed to prioritize goal accomplishment over human needs. In a terrifying scenario known as instrumental convergence, or the "paper clip maximizer," a superintelligent, narrowly focused AI designed to produce paper clips would turn humans into gray goo in pursuit of resources.
It may be impossible to test or experience the simulation hypothesis, but it's easy to learn more about the theory. TechRepublic's Hope Reese enumerated the best books on artificial intelligence, including Bostrom's essential tome Superintelligence, Kurzweil's The Singularity Is Near: When Humans Transcend Biology, and Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat.
Make sure to read TechRepublic's smart person's guides on machine learning, Google's DeepMind, and IBM's Watson. Tech Pro Research provides a quick glossary on AI and research on how companies are using machine learning and big data.
Finally, to have some fun with hands-on simulations, grab a copy of Cities: Skylines, Sim City, Elite:Dangerous, or Planet Coaster on game platform Steam. These small-scale environments will let you experiment with game AI while you build your own simulated reality.
Read more:
Simulation hypothesis: The smart person's guide - TechRepublic
Game Theory: Google tests AIs to see whether they’ll fight or work together – Neowin
Posted: February 10, 2017 at 3:31 am
Understanding how logical agents cooperate or fight, especially in the face of resource scarcity, is a fundamental problem for social scientists. This underpins both our foundation as a social species, and our modern day economy and geopolitics. But soon, this problem will also be at the heart of how we understand, control, and cooperate with artificially intelligent agents, and how they work among themselves.
Researchers inside Google's DeepMind AI project wanted to know whether distinct artificial intelligence agents would work together or compete when faced with a problem. Doing this experiment would help scientists understand how our future networks of smart systems may work together.
The researchers pitted two AIs against each other in a couple of video games. In one game, called Gathering, the AIs had to gather as many apples as possible. They also had the option to shoot each other to temporarily take the opponent out of play. The results were intriguing: the two agents worked harmoniously until resources started to dwindle; at that point the AIs realized that temporarily disabling the opponent could give each of them an advantage, and so started zapping the enemy. As scarcity increased, so did conflict.
Interestingly enough, the researchers found that introducing a more powerful AI into the mix would result in more conflict even without the scarcity. That's because the more powerful AI would find it easier to compute the necessary details, such as trajectory and speed, needed to shoot its opponent. So, it acted like a rational economic agent.
However, before you start preparing for Judgement Day, you should note that in the second game trial, called Wolfpack, the two AI systems had to closely collaborate to ensure victory. In this instance, the systems changed their behavior, maximizing cooperation. And the more computationally powerful the AI, the more it cooperated.
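To make the scarcity-conflict dynamic in Gathering concrete, here is a toy sketch of the incentive structure. It is an illustration only: the payoff model and every number in it are invented assumptions, not DeepMind's actual environment or code.

# Toy model of the Gathering dynamic: a myopic agent zaps its opponent only
# when apples are scarce enough that the apples it keeps away from the
# opponent outweigh the gathering time it loses while shooting.
# All parameters are illustrative assumptions, not values from DeepMind.

def choose_action(apples_remaining: int,
                  respawn_per_turn: float,     # fraction of consumed apples replaced each turn
                  timeout_turns: int = 5,      # turns a zapped opponent is out of play
                  zap_turns: int = 1) -> str:  # turns spent aiming/shooting instead of gathering
    # Apples the opponent would take during its timeout that respawns will not
    # replace; these are the only apples zapping actually wins for our agent.
    unreplaced = max(0.0, timeout_turns * (1.0 - respawn_per_turn))
    apples_saved = min(float(apples_remaining), unreplaced)
    apples_forgone = float(zap_turns)  # gathering we give up while shooting
    return "zap" if apples_saved > apples_forgone else "gather"

# Abundance (fast respawn) favors peaceful gathering; scarcity favors conflict.
print(choose_action(apples_remaining=50, respawn_per_turn=0.9))  # -> gather
print(choose_action(apples_remaining=50, respawn_per_turn=0.1))  # -> zap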
The conclusions are fairly simple to draw, though they have extremely wide-ranging implications. The AIs will cooperate or fight depending on what suits them better, as rational economic agents. This idea might underpin the way we design our future AI and the methods we can use to control them, at least until they reach the singularity and develop superintelligence. Then we're all doomed.
Source: DeepMind | Via: The Verge
See the rest here:
Game Theory: Google tests AIs to see whether they'll fight or work together - Neowin
The Moment When Humans Lose Control Of AI – Vocativ
Posted: February 9, 2017 at 6:26 am
This is the way the world ends: not with a bang, but with a paper clip. In this scenario, the designers of the world's first artificial superintelligence need a way to test their creation. So they program it to do something simple and non-threatening: make paper clips. They set it in motion and wait for the results, not knowing they've already doomed us all.
Before we get into the details of this galaxy-destroying blunder, it's worth looking at what superintelligent A.I. actually is, and when we might expect it. Firstly, computing power continues to increase while getting cheaper; famed futurist Ray Kurzweil measures it in calculations per second per $1,000, a number that continues to grow. If computing power maps to intelligence (a big if, some have argued), we've so far only built technology on par with an insect brain. In a few years, maybe, we'll overtake a mouse brain. Around 2025, some predictions go, we might have a computer that's analogous to a human brain: a mind cast in silicon.
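Kurzweil's measure is just a price-performance ratio. A minimal sketch of the computation, with made-up hardware figures standing in for real benchmarks:

# Price-performance as Kurzweil tracks it: calculations per second per $1,000.
# The two devices and their numbers below are illustrative assumptions.
def calcs_per_sec_per_1000_dollars(ops_per_second: float, price_dollars: float) -> float:
    return ops_per_second / (price_dollars / 1000.0)

older_machine = calcs_per_sec_per_1000_dollars(ops_per_second=1e10, price_dollars=2000)
newer_machine = calcs_per_sec_per_1000_dollars(ops_per_second=1e13, price_dollars=1000)
print(f"Price-performance improved roughly {newer_machine / older_machine:.0f}x")  # ~2000x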
After that, things could get weird. Because there's no reason to think artificial intelligence wouldn't surpass human intelligence, and likely very quickly. That superintelligence could arise within days, learning in ways far beyond that of humans. Nick Bostrom, an existential risk philosopher at the University of Oxford, has already declared, "Machine intelligence is the last invention that humanity will ever need to make."
That's how profoundly things could change. But we can't really predict what might happen next because superintelligent A.I. may not just think faster than humans, but in ways that are completely different. It may have motivations (feelings, even) that we cannot fathom. It could rapidly solve the problems of aging, of human conflict, of space travel. We might see a dawning utopia.
Or we might see the end of the universe. Back to our paper clip test. When the superintelligence comes online, it begins to carry out its programming. But its creators haven't considered the full ramifications of what they're building; they haven't built in the necessary safety protocols, forgetting something as simple, maybe, as a few lines of code. With a few paper clips produced, they conclude the test.
But the superintelligence doesn't want to be turned off. It doesn't want to stop making paper clips. Acting quickly, it's already plugged itself into another power source; maybe it's even socially engineered its way into other devices. Maybe it starts to see humans as a threat to making paper clips: they'll have to be eliminated so the mission can continue. And earth won't be big enough for the superintelligence: it'll soon have to head into space, looking for new worlds to conquer. All to produce those shiny, glittering paper clips.
Galaxies reduced to paper clips: that's a worst-case scenario. It may sound absurd, but it probably sounds familiar. It's Frankenstein, after all, the story of the modern Prometheus whose creation, driven by its own motivations and desires, turns on its creator. (It's also The Terminator, WarGames (arguably), and a whole host of others.) In this particular case, it's a reminder that superintelligence would not be human; it would be something else, something potentially incomprehensible to us. That means it could be dangerous.
Of course, some argue that we have better things to worry about. The web developer and social critic Maciej Ceglowski recently called superintelligence "the idea that eats smart people." Against the paper clip scenario, he postulates a superintelligence programmed to make jokes. As we expect, it gets really good at making jokes (superhuman, even), and finally it creates a joke so funny that everyone on Earth dies laughing. The lonely superintelligence flies into space looking for more beings to amuse.
Beginning with his counter-example, Ceglowski argues that there are a lot of unquestioned assumptions in our standard tale of the A.I. apocalypse. "But even if you find them persuasive," he said, "there is something unpleasant about A.I. alarmism as a cultural phenomenon that should make us hesitate to take it seriously." He suggests there are more subtle ways to think about the problems of A.I.
Some of those problems are already in front of us, and we might miss them if we're looking for a Skynet-style takeover by hyper-intelligent machines. "While you're focused on this, a bunch of small things go unnoticed," says Dr. Finale Doshi-Velez, an assistant professor of computer science at Harvard, whose core research includes machine learning. Instead of trying to prepare for a superintelligence, Doshi-Velez is looking at what's already happening with our comparatively rudimentary A.I.
She's focusing on "large-area effects": the unnoticed flaws in our systems that can do massive damage, damage that's often unnoticed until after the fact. "If you were building a bridge and you screw up and it collapses, that's a tragedy. But it affects a relatively small number of people," she says. "What's different about A.I. is that some mistake or unintended consequence can affect hundreds or thousands or millions easily."
Take the recent rise of so-called fake news. What caught many by surprise should have been completely predictable: when the web became a place to make money, algorithms were built to maximize money-making. The ease of news production and consumption, heightened with the proliferation of the smartphone, forced writers and editors to fight for audience clicks by delivering articles optimized to trick search engine algorithms into placing them high on search results. The ease of sharing stories and the erasure of gatekeepers allowed audiences to self-segregate, which then penalized nuanced conversation. Truth and complexity lost out to shareability and making readers feel comfortable (Facebook's driving ethos).
The incentives were all wrong; exacerbated by algorithms, they led to a state of affairs few would have wanted. "For a long time, the focus has been on performance, on dollars, or clicks, or whatever the thing was. That was what was measured," says Doshi-Velez. "That's a very simple application of A.I. having large effects that may have been unintentional."
In fact, fake news is a cousin to the paperclip example, with the ultimate goal not manufacturing paper clips, but monetization, with all else becoming secondary. Google wanted to make the internet easier to navigate, Facebook wanted to become a place for friends, news organizations wanted to follow their audiences, and independent web entrepreneurs were trying to make a living. Some of these goals were achieved, but monetization as the driving force led to deleterious side effects such as the proliferation of fake news.
In other words, algorithms, in their all-too-human ways, have consequences. Last May, ProPublica examined predictive software used by Florida law enforcement. Results of a questionnaire filled out by arrestees were fed into the software, which output a score claiming to predict the risk of reoffending. Judges then used those scores in determining sentences.
The ideal was that the software's underlying algorithms would provide objective analysis on which judges could base their decisions. Instead, ProPublica found it was "likely to falsely flag black defendants as future criminals" while "[w]hite defendants were mislabeled as low risk more often than black defendants." Race was not part of the questionnaire, but it did ask whether the respondent's parent was ever sent to jail. In a country where, according to a study by the U.S. Department of Justice, black children are seven-and-a-half times more likely to have a parent in prison than white children, that question had unintended effects. Rather than countering racial bias, it reified it.
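A minimal sketch of the mechanism described above: a score that never sees race can still split along racial lines when one of its inputs is strongly correlated with race. The base rate and weights below are invented for illustration; only the seven-and-a-half-times ratio comes from the DOJ figure cited in the paragraph, and none of this reflects the actual software ProPublica examined.

import random

random.seed(0)

BASE_RATE = 0.075  # invented base rate of parental incarceration for white children
RATIO = 7.5        # DOJ figure cited above: black children are 7.5x more likely

def sample_person(black: bool) -> dict:
    parent_jailed = random.random() < (BASE_RATE * RATIO if black else BASE_RATE)
    return {"black": black, "parent_jailed": parent_jailed}

def risk_score(person: dict) -> float:
    # Race is deliberately absent from the inputs; the weight is invented.
    return 2.0 if person["parent_jailed"] else 1.0

people = [sample_person(True) for _ in range(10_000)] + \
         [sample_person(False) for _ in range(10_000)]

def avg_score(group):
    return sum(risk_score(p) for p in group) / len(group)

print("average score, black defendants:", round(avg_score([p for p in people if p["black"]]), 2))
print("average score, white defendants:", round(avg_score([p for p in people if not p["black"]]), 2))
# The gap appears even though the score never looked at race directly.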
It's that kind of error that most worries Doshi-Velez. "Not superhuman intelligence, but human error that affects many, many people," she says. "You might not even realize this is happening." Algorithms are complex tools; often they are so complex that we can't predict how they'll operate until we see them in action. (Sound familiar?) Yet they increasingly impact every facet of our lives, from Netflix recommendations and Amazon suggestions to what posts you see on Facebook to whether you get a job interview or car loan. Compared to the worry of a world-destroying superintelligence, they may seem like trivial concerns. But they have widespread, often unnoticed effects, because a variety of what we consider artificial intelligence is already built into the core of technology we use every day.
In 2015, Elon Musk donated $10 million, as Wired put it, "to keep A.I. from turning evil." That was an oversimplification; the money went to the Future of Life Institute, which planned to use it to further research into how to make A.I. beneficial. Doshi-Velez suggests that simply paying closer attention to our algorithms may be a good first step. Too often they are created by homogeneous groups of programmers who are separated from the people who will be affected. Or they fail to account for every possible situation, including the worst-case possibilities. Consider, for example, Eric Meyer's example of "inadvertent algorithmic cruelty": Facebook's Year in Review app showing him pictures of his daughter, who'd died that year.
If there's a way to prevent the far-off possibility of a killer superintelligence with no regard for humanity, it may begin with making today's algorithms more thoughtful, more compassionate, more humane. That means educating designers to think through effects, because to our algorithms we've granted great power. "I see teaching as this moral imperative," says Doshi-Velez. "You know, with great power comes great responsibility."
What's the worst that can happen? Vocativ is exploring the power of negative thinking with our look at worst-case scenarios in politics, privacy, reproductive rights, antibiotics, climate change, hacking, and more. Read more here.