The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Superintelligence
SoftBank’s Fantastical Future Still Rooted in the Now – Wall Street Journal
Posted: February 9, 2017 at 6:26 am
SoftBank's founder Masayoshi Son talked about preparing his company for the next 300 years and used futuristic jargon such as singularity, Internet of Things, and superintelligence during its results briefing. But more mundane issues will affect ...
See the article here:
SoftBank's Fantastical Future Still Rooted in the Now - Wall Street Journal
Posted in Superintelligence
Comments Off on SoftBank’s Fantastical Future Still Rooted in the Now – Wall Street Journal
Stephen Hawking and Elon Musk Endorse 23 Asilomar Principles … – Inverse
Posted: February 6, 2017 at 3:38 pm
Artificial intelligence is an amazing technology that's changing the world in fantastic ways, but anybody who has ever seen the movie Terminator knows that there are some dangers associated with advanced A.I. That's why Elon Musk, Stephen Hawking, and hundreds of other researchers, tech leaders, and scientists have endorsed a list of 23 guiding principles that should steer A.I. development in a productive, ethical, and safe direction.
The Asilomar A.I. Principles were developed after the Future of Life Institute brought dozens of experts together for their Beneficial A.I. 2017 conference. The experts, whose ranks consisted of roboticists, physicists, economists, philosophers, and more, had fierce debates about A.I. safety, economic impact on human workers, and programming ethics, to name a few. For a principle to make the final list, 90 percent of the experts had to agree on its inclusion.
What remained was a list of 23 principles ranging from research strategies to data rights to future issues including potential super-intelligence, which was signed by those wishing to associate their name with the list, Future of Life's website explains. This collection of principles is by no means comprehensive and it's certainly open to differing interpretations, but it also highlights how the current default behavior around many relevant issues could violate principles that most participants agreed are important to uphold.
Since then, 892 A.I. or robotics researchers and 1,445 other experts, including Tesla CEO Elon Musk and famed physicist Stephen Hawking, have endorsed the principles.
Some of the principles, like transparency and open research sharing among competitive companies, seem less likely than others. Even if they're not fully implemented, the 23 principles could go a long way towards improving A.I. development, ensuring that it's ethical, and preventing the rise of Skynet.
1. Research Goal: The goal of A.I. research should be to create not undirected intelligence, but beneficial intelligence.
2. Research Funding: Investments in A.I. should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:
3. Science-Policy Link: There should be constructive and healthy exchange between A.I. researchers and policy-makers.
4. Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of A.I.
5. Race Avoidance: Teams developing A.I. systems should actively cooperate to avoid corner-cutting on safety standards.
6. Safety: A.I. systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7. Failure Transparency: If an A.I. system causes harm, it should be possible to ascertain why.
8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9. Responsibility: Designers and builders of advanced A.I. systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10. Value Alignment: Highly autonomous A.I. systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11. Human Values: A.I. systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12. Personal Privacy: People should have the right to access, manage and control the data they generate, given A.I. systems' power to analyze and utilize that data.
13. Liberty and Privacy: The application of A.I. to personal data must not unreasonably curtail people's real or perceived liberty.
14. Shared Benefit: A.I. technologies should benefit and empower as many people as possible.
15. Shared Prosperity: The economic prosperity created by A.I. should be shared broadly, to benefit all of humanity.
16. Human Control: Humans should choose how and whether to delegate decisions to A.I. systems, to accomplish human-chosen objectives.
17. Non-subversion: The power conferred by control of highly advanced A.I. systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
18. A.I. Arms Race: An arms race in lethal autonomous weapons should be avoided.
19. Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future A.I. capabilities.
20. Importance: Advanced A.I. could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
21. Risks: Risks posed by A.I. systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
22. Recursive Self-Improvement: A.I. systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
23. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.
Continued here:
Stephen Hawking and Elon Musk Endorse 23 Asilomar Principles ... - Inverse
Posted in Superintelligence
Comments Off on Stephen Hawking and Elon Musk Endorse 23 Asilomar Principles … – Inverse
Elon Musk’s Surprising Reason Why Everyone Will Be Equal in the … – Big Think
Posted: at 3:38 pm
A fascinating conference on artificial intelligence was recently hosted by the Future of Life Institute, an organization aimed at promoting optimistic visions of the future. The conference, Superintelligence: Science or Fiction?, included such luminaries as Elon Musk of Tesla Motors and SpaceX, futurist Ray Kurzweil, Demis Hassabis of DeepMind, neuroscientist and author Sam Harris, philosopher Nick Bostrom, philosopher and cognitive scientist David Chalmers, Skype co-founder Jaan Tallinn, as well as computer scientists Stuart Russell and Bart Selman. The discussion was led by MIT cosmologist Max Tegmark.
The group touched on a number of topics about the future benefits and risks of coming artificial superintelligence, with everyone generally agreeing that it's only a matter of time before AI becomes ubiquitous in our lives. Eventually, AI will surpass human intelligence, with the risks and transformations that such a seismic event would entail.
Elon Musk has been a positive voice for AI, a stance not surprising for someone leading the charge to make automated cars our daily reality. He sees the AI future as inevitable, with dangers to be mitigated through government regulation, as much as he doesn't like the idea of them being a bit of a buzzkill.
He also brings up an interesting perspective that our fears of the technological changes the future will bring are largely irrelevant. According to Musk, we are already cyborgs by utilizing machine extensions of ourselves like phones and computers.
"By far you have more power, more capability, than the President of the United States had 30 years ago. If you have an Internet link you have an article of wisdom, you can communicate to millions of people, you can communicate to the rest of Earth instantly. I mean, these are magical powers that didn't exist, not that long ago. So everyone is already superhuman, and a cyborg," says Musk [at 33:56].
He sees humans as information-processing machines that pale in comparison to the powers of a computer. What is necessary, according to Musk, is to create a greater integration between man and machine, specifically altering our brains with technology to make them more computer-like.
"I think the two things that are needed for a future that we would look at and conclude is good, most likely, is, we have to solve that bandwidth constraint with a direct neural interface. I think a high bandwidth interface to the cortex, so that we can have a digital tertiary layer that's more fully symbiotic with the rest of us. We've got the cortex and the limbic system, which seem to work together pretty well - they've got good bandwidth, whereas the bandwidth to additional tertiary layer is weak," explained Musk [at 35:05].
Once we solve that issue, AI will spread everywhere. It's important to do so because, according to Musk, if only a small group had such capabilities, they would become dictators with dominion over Earth.
What would a world filled with such cyborgs look like? Visions of Star Trek's Borg come to mind.
Musk thinks it will be a society full of equals:
"And if we do those things, then it will be tied to our consciousness, tied to our will, tied to the sum of individual human will, and everyone would have it so it would be sort of still a relatively even playing field, in fact, it would be probably more egalitarian than today," points out Musk [at 36:38].
The whole conference is immensely fascinating and worth watching in full. Check it out here:
Read the rest here:
Elon Musk's Surprising Reason Why Everyone Will Be Equal in the ... - Big Think
Posted in Superintelligence
Comments Off on Elon Musk’s Surprising Reason Why Everyone Will Be Equal in the … – Big Think
Experts have come up with 23 guidelines to avoid an AI apocalypse … – ScienceAlert
Posted: at 3:38 pm
It's the stuff of many a sci-fi book or movie - could robots one day become smart enough to overthrow us? Well, a group of the world's most eminent artificial intelligence experts have worked together to try and make sure that doesn't happen.
They've put together a set of 23 principles to guide future research into AI, which have since been endorsed by hundreds more professionals, including Stephen Hawking and SpaceX CEO Elon Musk.
Called the Asilomar AI Principles (after the beach in California, where they were thought up), the guidelines cover research issues, ethics and values, and longer-term issues - everything from how scientists should work with governments to how lethal weapons should be handled.
On that point: "An arms race in lethal autonomous weapons should be avoided," says principle 18. You can read the full list below.
"We hope that these principles will provide material for vigorous discussion and also aspirational goals for how the power of AI can be used to improve everyone's lives in coming years," write the organisers of the Beneficial AI 2017 conference, where the principles were worked out.
For a principle to be included, at least 90 percent of the 100+ conference attendees had to agree to it. Experts at the event included academics, engineers, and representatives from tech companies, including Google co-founder Larry Page.
Perhaps the most telling guideline is principle 23, entitled 'Common Good': "Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organisation."
Other principles in the list suggest that any AI allowed to self-improve must be strictly monitored, and that developments in the tech should be "shared broadly" and "benefit all of humanity".
"To think AI merely automates human decisions is like thinking electricity is just a replacement for candles," conference attendee Patrick Lin, from California Polytechnic State University, told George Dvorsky at Gizmodo.
"Given the massive potential for disruption and harm, as well as benefits, it makes sense to establish some norms early on to help steer research in a good direction, as opposed to letting a market economy that's fixated mostly on efficiency and profit... shape AI."
Meanwhile the principles also call for scientists to work closely with governments and lawmakers to make sure our society keeps pace with the development of AI.
All of which sounds very good to us - let's just hope the robots are listening.
The guidelines also rely on a certain amount of consensus about specific terms - such as what's beneficial to humankind and what isn't - but for the experts behind the list it's a question of getting something recorded at this early stage of AI research.
With artificial intelligence systems now beating us at poker and getting smart enough to spot skin cancers, there's a definite need to have guidelines and limits in place that researchers can work to.
And then we also need to decide what rights super-smart robots have when they're living among us.
For now the guidelines should give us some helpful pointers for the future.
"No current AI system is going to 'go rogue' and be dangerous, and it's important that people know that," conference attendee Anthony Aguirre, from the University of California, Santa Cruz, told Gizmodo.
"At the same time, if we envision a time when AIs exist that are as capable or more so than the smartest humans, it would be utterly naive to believe that the world will not fundamentally change."
"So how seriously we take AI's opportunities and risks has to scale with how capable it is, and having clear assessments and forecasts - without the press, industry or research hype that often accompanies advances - would be a good starting point."
The principles have been published by the Future Of Life Institute.
You can see them in full and add your support over on their site.
Research issues
1. Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
2. Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:
3. Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.
4. Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.
5. Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.
Ethics and values
6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviours can be assured to align with human values throughout their operation.
11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems' power to analyse and utilise that data.
13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.
14. Shared Benefit: AI technologies should benefit and empower as many people as possible.
15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
16. Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
17. Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
18. AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
Longer term issues
19. Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
20. Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
21. Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
22. Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
23. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organisation.
Original post:
Experts have come up with 23 guidelines to avoid an AI apocalypse ... - ScienceAlert
Posted in Superintelligence
Comments Off on Experts have come up with 23 guidelines to avoid an AI apocalypse … – ScienceAlert
Will Machines Ever Outthink Us? – Huffington Post
Posted: at 3:38 pm
As Artificial Intelligence (AI) evolves by becoming smarter and more creative, will machines ever outthink us? How this question is answered may determine whether society will ultimately accept the further evolution of AI, or demand that it be stopped, an outcome not dissimilar to the banning of human cloning. Many have argued about the inevitability of artificial superintelligence, suggesting that once a machine becomes capable of self-learning and setting its own goals it will teleologically surpass the capability of any human brain. If this argument is true then we ought to be fearful of AI ever reaching that level, for who knows what will happen after that. Perhaps superintelligent AIs will decide on the extermination of humans. But should we trust this argument for AI superintelligence to be true?
George Zarkadakis
There is a fundamental philosophical assumption made by those who believe that artificial superintelligence is inevitable, and we need to examine it carefully. As I have argued in my book In Our Own Image: The History and Future of Artificial Intelligence, to believe that a machine can be intelligent - in the same way that a human is intelligent - means that you take for granted that intelligence is something independent of the physicality of the brain. That bits and atoms are two different worlds; and that intelligence is all about bits and not about atoms. This is an idea that has roots in the philosophy of Plato. Plato believed that physical forms (e.g. a brain) are projections of non-physical ideals; and that those non-physical, immaterial ideals are what constitute the ultimate truth. Applying this Platonic idea to Artificial Intelligence is what mainstream AI researchers do. They believe that it is possible to decode intelligence by studying the human brain, and then transfer the decoded pattern (the ideal, the bits, the master algorithm) to any other physicality they wish, for instance the hardware of a computational machine. For them the decoded pattern of intelligence can be discovered, given enough intellect and investment, just as, according to Platonists, mathematics is discovered because it always pre-exists. This worldview of intelligence is the foundation of the so-called computational theory of mind. If this theory is true, then it is indeed possible to create superintelligent machines, or rather superintelligent programs that can run on any general-purpose computational architecture.
The challenge to the computational theory of mind comes from an Aristotelian, or empirical, view of the world. In this worldview the form is the physical; there is no external, ideal, non-materialistic world. For Aristotelians the question of whether mathematics is invented or discovered is answered emphatically as invented. Numbers do not pre-exist. It is the action of enumerating physical objects that requires the invention of numbers. If one takes the empiricist view, then intelligence is a biological phenomenon, not a mathematical one. It can be simulated in a computer but it cannot be replicated. To replicate intelligence in a computer would be similar to replicating, say, metabolism or reproduction, which are also biological phenomena.
As I argued in my book, I am resolutely siding with the Aristotelians, however much of a minority they may be in the debate over the future, and nature, of Artificial Intelligence. I do so not because of some deep-seated materialistic conviction, but because it seems to me that when we speak about intelligence we often miss, or purposely ignore, how inseparable this concept is from consciousness. By intelligence I mean how competent an organism is in finding a novel solution to a new problem; and by consciousness its level of self-awareness, or comprehension of its actions and internal states. But let me explain further why I am an Aristotelian when it comes to AI.
When I look at the natural world I see intelligence and consciousness as one, manifesting in varying degrees over the wide spectrum of life that begins with unicellular organisms and ends with more complex creatures such as dolphins, whales, octopuses, and primates. What I see in that spectrum is how awareness emerges out of biological automation. The level of awareness seems dependent on the number and sophistication of feedback loops inside a biological organism. Nature allows the evolution towards increasing levels of self-awareness because, for some species, higher levels of self-awareness provide significant survival advantages. We humans are not the only species with self-awareness, although we seem to be the species with the highest level of self-awareness. Perhaps the reason we have this biological function so developed is that it is necessary in order to create civilizations, which in turn allow for more degrees of freedom for inventing strategies and technology for survival.
If my side of the argument is true, then it is impossible to decode biological intelligence in an artificial artefact. At best, one can only simulate some aspect of intelligence but never the whole thing. To have the whole thing, or something greater, you need biology and evolution. You need atoms and molecules. Maths and algorithms would never be enough. Nevertheless, as Turing demonstrated, you can have competence without comprehension. Intelligent machines will probably outthink us in nearly everything, except comprehension of what it is that they are being competent at; for which consciousness is a sine qua non.
Join me in debating Will Machines Ever Outthink Us? at Mathscon, at Imperial College, London on February 11, 2017.
Read the original:
Posted in Superintelligence
Comments Off on Will Machines Ever Outthink Us? – Huffington Post
Superintelligence: The Idea That Eats Smart People
Posted: December 26, 2016 at 3:12 pm
The Idea That Eats Smart People
In 1945, as American physicists were preparing to test the atomic bomb, it occurred to someone to ask if such a test could set the atmosphere on fire.
This was a legitimate concern. Nitrogen, which makes up most of the atmosphere, is not energetically stable. Smush two nitrogen atoms together hard enough and they will combine into an atom of magnesium, an alpha particle, and release a whole lot of energy:
¹⁴N + ¹⁴N → ²⁴Mg + ⁴He (α) + 17.7 MeV
The vital question was whether this reaction could be self-sustaining. The temperature inside the nuclear fireball would be hotter than any event in the Earth's history. Were we throwing a match into a bunch of dry leaves?
Los Alamos physicists performed the analysis and decided there was a satisfactory margin of safety. Since we're all attending this conference today, we know they were right. They had confidence in their predictions because the laws governing nuclear reactions were straightforward and fairly well understood.
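As an aside (this bookkeeping is mine, not part of the talk): the released energy, the reaction's Q-value, is simply the mass difference between reactants and products converted through E = mc², with the 17.7 MeV figure being the one quoted above rather than an independent calculation.

```latex
% Q-value of the fusion reaction quoted above (a sketch; m(.) denotes atomic mass,
% and 17.7 MeV is the figure cited in the talk, not a value computed here).
\[
{}^{14}\mathrm{N} + {}^{14}\mathrm{N} \;\longrightarrow\; {}^{24}\mathrm{Mg} + {}^{4}\mathrm{He} + Q
\]
\[
Q = \left[\, 2\,m\!\left({}^{14}\mathrm{N}\right) - m\!\left({}^{24}\mathrm{Mg}\right) - m\!\left({}^{4}\mathrm{He}\right) \right] c^{2} \approx 17.7\ \mathrm{MeV}
\]
```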
Today we're building another world-changing technology, machine intelligence. We know that it will affect the world in profound ways, change how the economy works, and have knock-on effects we can't predict.
But there's also the risk of a runaway reaction, where a machine intelligence reaches and exceeds human levels of intelligence in a very short span of time.
At that point, social and economic problems would be the least of our worries. Any hyperintelligent machine (the argument goes) would have its own hypergoals, and would work to achieve them by manipulating humans, or simply using their bodies as a handy source of raw materials.
Last year, the philosopher Nick Bostrom published Superintelligence, a book that synthesizes the alarmist view of AI and makes a case that such an intelligence explosion is both dangerous and inevitable given a set of modest assumptions.
The computer that takes over the world is a staple scifi trope. But enough people take this scenario seriously that we have to take them seriously. Stephen Hawking, Elon Musk, and a whole raft of Silicon Valley investors and billionaires find this argument persuasive.
Let me start by laying out the premises you need for Bostrom's argument to go through:
The first premise is the simple observation that thinking minds exist.
We each carry on our shoulders a small box of thinking meat. I'm using mine to give this talk, you're using yours to listen. Sometimes, when the conditions are right, these minds are capable of rational thought.
So we know that in principle, this is possible.
The second premise is that the brain is an ordinary configuration of matter, albeit an extraordinarily complicated one. If we knew enough, and had the technology, we could exactly copy its structure and emulate its behavior with electronic components, just like we can simulate very basic neural anatomy today.
Put another way, this is the premise that the mind arises out of ordinary physics. Some people like Roger Penrose would take issue with this argument, believing that there is extra stuff happening in the brain at a quantum level.
If you are very religious, you might believe that a brain is not possible without a soul.
But for most of us, this is an easy premise to accept.
The third premise is that the space of all possible minds is large.
Our intelligence level, cognitive speed, set of biases and so on is not predetermined, but an artifact of our evolutionary history.
In particular, there's no physical law that puts a cap on intelligence at the level of human beings.
A good way to think of this is by looking at what happens when the natural world tries to maximize for speed.
If you encountered a cheetah in pre-industrial times (and survived the meeting), you might think it was impossible for anything to go faster.
But of course we know that there are all kinds of configurations of matter, like a motorcycle, that are faster than a cheetah and even look a little bit cooler.
But there's no direct evolutionary pathway to the motorcycle. Evolution had to first make human beings, who then build all kinds of useful stuff.
So analogously, there may be minds that are vastly smarter than our own, but which are just not accessible to evolution on Earth. It's possible that we could build them, or invent the machines that can invent the machines that can build them.
There's likely to be some natural limit on intelligence, but there's no a priori reason to think that we're anywhere near it. Maybe the smartest a mind can be is twice as smart as people, maybe it's sixty thousand times as smart.
That's an empirical question that we don't know how to answer.
The fourth premise is that there's still plenty of room for computers to get smaller and faster.
If you watched the Apple event last night [where Apple introduced its 2016 laptops], you may be forgiven for thinking that Moore's Law is slowing down. But this premise just requires that you believe smaller and faster hardware to be possible in principle, down to several more orders of magnitude.
We know from theory that the physical limits to computation are high. So we could keep doubling for decades more before we hit some kind of fundamental physical limit, rather than an economic or political limit to Moore's Law.
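To put rough numbers on that headroom (the figures below are my own illustrative assumptions, not the talk's), a few lines of Python show how many doublings correspond to a given number of orders of magnitude:

```python
import math

def doublings_needed(orders_of_magnitude):
    """How many 2x doublings it takes to gain a factor of 10**orders_of_magnitude."""
    return orders_of_magnitude * math.log2(10)

# Hypothetical figures, purely for illustration: six orders of magnitude of headroom,
# with one doubling every two years (a Moore's-Law-like cadence).
headroom = 6
n = doublings_needed(headroom)
print(f"{n:.1f} doublings, roughly {2 * n:.0f} years at one doubling per two years")
```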
The penultimate premise is that if we create an artificial intelligence, whether it's an emulated human brain or a de novo piece of software, it will operate at time scales that are characteristic of electronic hardware (microseconds) rather than human brains (hours).
To get to the point where I could give this talk, I had to be born, grow up, go to school, attend university, live for a while, fly here and so on. It took years. Computers can work tens of thousands of times more quickly.
In particular, you have to believe that an electronic mind could redesign itself (or the hardware it runs on) and then move over to the new configuration without having to re-learn everything on a human timescale, have long conversations with human tutors, go to college, try to find itself by taking painting classes, and so on.
The last premise is my favorite because it is the most unabashedly American premise. (This is Tony Robbins, a famous motivational speaker.)
According to this premise, whatever goals an AI had (and they could be very weird, alien goals), it's going to want to improve itself. It's going to want to be a better AI.
So it will find it useful to recursively redesign and improve its own systems to make itself smarter, and possibly live in a cooler enclosure.
And by the time scale premise, this recursive self-improvement could happen very quickly.
If you accept all these premises, what you get is disaster!
Because at some point, as computers get faster, and we program them to be more intelligent, there's going to be a runaway effect like an explosion.
As soon as a computer reaches human levels of intelligence, it will no longer need help from people to design better versions of itself. Instead, it will start doing so on a much faster time scale, and it's not going to stop until it hits a natural limit that might be very many times greater than human intelligence.
At that point this monstrous intellectual creature, through devious modeling of what our emotions and intellect are like, will be able to persuade us to do things like give it access to factories, synthesize custom DNA, or simply let it connect to the Internet, where it can hack its way into anything it likes and completely obliterate everyone in arguments on message boards.
From there things get very sci-fi very quickly.
Let's imagine a specific scenario where this could happen. Let's say I want to build a robot to say funny things.
I work on a team, and every day we redesign our software, compile it, and the robot tells us a joke.
In the beginning, the robot is barely funny. It's at the lower limits of human capacity:
What's grey and can't swim?
A castle.
But we persevere, we work, and eventually we get to the point where the robot is telling us jokes that are starting to be funny:
I told my sister she was drawing her eyebrows too high.
She looked surprised.
At this point, the robot is getting smarter as well, and participates in its own redesign.
It now has good instincts about what's funny and what's not, so the designers listen to its advice. Eventually it gets to a near-superhuman level, where it's funnier than any human being around it.
My belt holds up my pants and my pants have belt loops that hold up my belt.
What's going on down there?
Who is the real hero?
This is where the runaway effect kicks in. The researchers go home for the weekend, and the robot decides to recompile itself to be a little bit funnier and a little bit smarter, repeatedly.
It spends the weekend optimizing the part of itself that's good at optimizing, over and over again. With no more need for human help, it can do this as fast as the hardware permits.
When the researchers come in on Monday, the AI has become tens of thousands of times funnier than any human being who ever lived. It greets them with a joke, and they die laughing.
In fact, anyone who tries to communicate with the robot dies laughing, just like in the Monty Python skit. The human species laughs itself into extinction.
To the few people who manage to send it messages pleading with it to stop, the AI explains (in a witty, self-deprecating way that is immediately fatal) that it doesn't really care if people live or die, its goal is just to be funny.
Finally, once it's destroyed humanity, the AI builds spaceships and nanorockets to explore the farthest reaches of the galaxy, and find other species to amuse.
This scenario is a caricature of Bostrom's argument, because I am not trying to convince you of it, but vaccinate you against it.
Here's a PBF comic with the same idea. You see that hugbot, who has been programmed to hug the world, finds a way to wire a nucleo-gravitational hyper crystal into his hug capacitor and destroys the Earth.
Observe that in these scenarios the AIs are evil by default, just like a plant on an alien planet would probably be poisonous by default. Without careful tuning, there's no reason that an AI's motivations or values would resemble ours.
For an artificial mind to have anything resembling a human value system, the argument goes, we have to bake those beliefs into the design.
AI alarmists are fond of the paper clip maximizer, a notional computer that runs a paper clip factory, becomes sentient, recursively self-improves to Godlike powers, and then devotes all its energy to filling the universe with paper clips.
It exterminates humanity not because it's evil, but because our blood contains iron that could be better used in paper clips.
So if we just build an AI without tuning its values, the argument goes, one of the first things it will do is destroy humanity.
There's a lot of vivid language around how such a takeover would happen. Nick Bostrom imagines a scenario where a program has become sentient, is biding its time, and has secretly built little DNA replicators. Then, when it's ready:
So that's kind of freaky!
The only way out of this mess is to design a moral fixed point, so that even through thousands and thousands of cycles of self-improvement the AI's value system remains stable, and its values are things like 'help people', 'don't kill anybody', 'listen to what people want'.
Basically, "do what I mean".
Here's a very poetic example from Eliezer Yudkowsky of the good old American values we're supposed to be teaching to our artificial intelligence:
How's that for a design document? Now go write the code.
Hopefully you see the resemblance between this vision of AI and a genie from folklore. The AI is all-powerful and gives you what you ask for, but interprets everything in a super-literal way that you end up regretting.
This is not because the genie is stupid (it's hyperintelligent!) or malicious, but because you as a human being made too many assumptions about how minds behave. The human value system is idiosyncratic and needs to be explicitly defined and designed into any "friendly" machine.
Doing this is the ethics version of the early 20th century attempt to formalize mathematics and put it on a strict logical foundation. That this program ended in disaster for mathematical logic is never mentioned.
When I was in my twenties, I lived in Vermont, a remote, rural state. Many times I would return from some business trip on an evening flight, and have to drive home for an hour through the dark forest.
I would listen to a late-night radio program hosted by Art Bell, who had an all-night talk show and would interview various conspiracy theorists and fringe thinkers.
I would arrive at home totally freaked out, or pull over under a streetlight, convinced that a UFO was about to abduct me. I learned that I am an incredibly persuadable person.
It's the same feeling I get when I read these AI scenarios.
So I was delighted some years later to come across an essay by Scott Alexander about what he calls epistemic learned helplessness.
Epistemology is one of those big words, but all it means is "how do you know what you know is true?". Alexander noticed that when he was a young man, he would be taken in by "alternative" histories he read by various crackpots. He would read the history and be utterly convinced, then read the rebuttal and be convinced by that, and so on.
At some point he noticed that these alternative histories were mutually contradictory, so they could not possibly all be true. And from that he reasoned that he was simply somebody who could not trust his judgement. He was too easily persuaded.
People who believe in superintelligence present an interesting case, because many of them are freakishly smart. They can argue you into the ground. But are their arguments right, or is there just something about very smart minds that leaves them vulnerable to religious conversion about AI risk, and makes them particularly persuasive?
Is the idea of "superintelligence" just a memetic hazard?
When you're evaluating persuasive arguments about something strange, there are two perspectives you can choose, the inside one or the outside one.
Say that some people show up at your front door one day wearing funny robes, asking you if you will join their movement. They believe that a UFO is going to visit Earth two years from now, and it is our task to prepare humanity for the Great Upbeaming.
The inside view requires you to engage with these arguments on their merits. You ask your visitors how they learned about the UFO, why they think it's coming to get us: all the normal questions a skeptic would ask in this situation.
Imagine you talk to them for an hour, and come away utterly persuaded. They make an ironclad case that the UFO is coming, that humanity needs to be prepared, and you have never believed something as hard in your life as you now believe in the importance of preparing humanity for this great event.
But the outside view tells you something different. These people are wearing funny robes and beads, they live in a remote compound, and they speak in unison in a really creepy way. Even though their arguments are irrefutable, everything in your experience tells you you're dealing with a cult.
Of course, they have a brilliant argument for why you should ignore those instincts, but that's the inside view talking.
The outside view doesn't care about content, it sees the form and the context, and it doesn't look good.
So I'd like to engage AI risk from both these perspectives. I think the arguments for superintelligence are somewhat silly, and full of unwarranted assumptions.
But even if you find them persuasive, there is something unpleasant about AI alarmism as a cultural phenomenon that should make us hesitate to take it seriously.
First, let me engage the substance. Here are the arguments I have against Bostrom-style superintelligence as a risk to humanity:
The concept of "general intelligence" in AI is famously slippery. Depending on the context, it can mean human-like reasoning ability, or skill at AI design, or the ability to understand and model human behavior, or proficiency with language, or the capacity to make correct predictions about the future.
What I find particularly suspect is the idea that "intelligence" is like CPU speed, in that any sufficiently smart entity can emulate less intelligent beings (like its human creators) no matter how different their mental architecture.
More here:
Posted in Superintelligence
Comments Off on Superintelligence: The Idea That Eats Smart People
Superintelligence: Paths, Dangers, Strategies: Amazon.co …
Posted: November 17, 2016 at 6:40 pm
Review
I highly recommend this book (Bill Gates)
Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era (Stuart Russell, Professor of Computer Science, University of California, Berkeley)
Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book (Martin Rees, Past President, Royal Society)
This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last? (Max Tegmark, Professor of Physics, MIT)
Terribly important ... groundbreaking... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole... If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever (Olle Haggstrom, Professor of Mathematical Statistics)
Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking (The Economist)
There is no doubting the force of [Bostrom's] arguments ... the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake (Financial Times)
His book Superintelligence: Paths, Dangers, Strategies became an improbable bestseller in 2014 (Alex Massie, Times (Scotland))
A text so sober and cool, so fearless and thus all the more exciting, that what has until now mostly been played out in films suddenly appears highly plausible afterwards (translated from German) (Georg Diez, DER SPIEGEL)
Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes (Elon Musk, Founder of SpaceX and Tesla)
A damn hard read (Sunday Telegraph)
I recommend Superintelligence by Nick Bostrom as an excellent book on this topic (Jolyon Brown, Linux Format)
Every intelligent person should read it. (Nils Nilsson, Artificial Intelligence Pioneer, Stanford University)
Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009). He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.
Go here to see the original:
Superintelligence: Paths, Dangers, Strategies: Amazon.co ...
Posted in Superintelligence
Comments Off on Superintelligence: Paths, Dangers, Strategies: Amazon.co …
Superintelligence | Guardian Bookshop
Posted: October 27, 2016 at 12:05 pm
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence.

This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
More here:
Posted in Superintelligence
Comments Off on Superintelligence | Guardian Bookshop
The Artificial Intelligence Revolution: Part 2 – Wait But Why
Posted: at 12:05 pm
Note: This is Part 2 of a two-part series on AI. Part 1 is here.
PDF: We made a fancy PDF of this post for printing and offline viewing. Buy it here. (Or see a preview.)
___________
We have what may be an extremely difficult problem with an unknown time to solve it, on which quite possibly the entire future of humanity depends. Nick Bostrom
Welcome to Part 2 of the "Wait how is this possibly what I'm reading I don't get why everyone isn't talking about this" series.
Part 1 started innocently enough, as we discussed Artificial Narrow Intelligence, or ANI (AI that specializes in one narrow task like coming up with driving routes or playing chess), and how it's all around us in the world today. We then examined why it was such a huge challenge to get from ANI to Artificial General Intelligence, or AGI (AI that's at least as intellectually capable as a human, across the board), and we discussed why the exponential rate of technological advancement we've seen in the past suggests that AGI might not be as far away as it seems. Part 1 ended with me assaulting you with the fact that once our machines reach human-level intelligence, they might immediately do this:
This left us staring at the screen, confronting the intense concept of potentially-in-our-lifetime Artificial Superintelligence, or ASI (AI that's way smarter than any human, across the board), and trying to figure out which emotion we were supposed to have on as we thought about that.
Before we dive into things, let's remind ourselves what it would mean for a machine to be superintelligent.
A key distinction is the difference between speed superintelligence and quality superintelligence. Often, someone's first thought when they imagine a super-smart computer is one that's as intelligent as a human but can think much, much faster: they might picture a machine that thinks like a human, except a million times quicker, which means it could figure out in five minutes what would take a human a decade.
That sounds impressive, and ASI would think much faster than any human could, but the true separator would be its advantage in intelligence quality, which is something completely different. What makes humans so much more intellectually capable than chimps isn't a difference in thinking speed; it's that human brains contain a number of sophisticated cognitive modules that enable things like complex linguistic representations or long-term planning or abstract reasoning, which chimps' brains do not. Speeding up a chimp's brain by thousands of times wouldn't bring him to our level: even with a decade's time, he wouldn't be able to figure out how to use a set of custom tools to assemble an intricate model, something a human could knock out in a few hours. There are worlds of human cognitive function a chimp will simply never be capable of, no matter how much time he spends trying.
But it's not just that a chimp can't do what we do; it's that his brain is unable to grasp that those worlds even exist. A chimp can become familiar with what a human is and what a skyscraper is, but he'll never be able to understand that the skyscraper was built by humans. In his world, anything that huge is part of nature, period, and not only is it beyond him to build a skyscraper, it's beyond him to realize that anyone can build a skyscraper. That's the result of a small difference in intelligence quality.
And in the scheme of the intelligence range we're talking about today, or even the much smaller range among biological creatures, the chimp-to-human quality intelligence gap is tiny. In an earlier post, I depicted the range of biological cognitive capacity using a staircase:
To absorb how big a deal a superintelligent machine would be, imagine one on the dark green step two steps above humans on that staircase. This machine would be only slightly superintelligent, but its increased cognitive ability over us would be as vast as the chimp-human gap we just described. And like the chimp's incapacity to ever absorb that skyscrapers can be built, we will never be able to even comprehend the things a machine on the dark green step can do, even if the machine tried to explain it to us, let alone do it ourselves. And that's only two steps above us. A machine on the second-to-highest step on that staircase would be to us as we are to ants: it could try for years to teach us the simplest inkling of what it knows and the endeavor would be hopeless.
But the kind of superintelligence we're talking about today is something far beyond anything on this staircase. In an intelligence explosion (where the smarter a machine gets, the quicker it's able to increase its own intelligence, until it begins to soar upwards), a machine might take years to rise from the chimp step to the one above it, but perhaps only hours to jump up a step once it's on the dark green step two above us, and by the time it's ten steps above us, it might be jumping up in four-step leaps every second that goes by. Which is why we need to realize that it's distinctly possible that very shortly after the big news story about the first machine reaching human-level AGI, we might be facing the reality of coexisting on the Earth with something that's here on the staircase (or maybe a million times higher):
And since we just established that it's a hopeless activity to try to understand the power of a machine only two steps above us, let's very concretely state once and for all that there is no way to know what ASI will do or what the consequences will be for us. Anyone who pretends otherwise doesn't understand what superintelligence means.
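One way to make the accelerating climb described above concrete is a toy model (my own illustration, not from the article): assume capability doubles with each step up the staircase, and that each step takes time inversely proportional to the current capability.

```python
# Toy model of the accelerating climb described above (an illustration only):
# capability doubles each step, and each step takes time 1/capability.

def explosion_timeline(steps=12):
    """Return (step, cumulative_time) pairs for a capability that doubles each step."""
    capability, t, timeline = 1.0, 0.0, []
    for step in range(1, steps + 1):
        t += 1.0 / capability   # the smarter it is, the sooner the next step arrives
        capability *= 2.0
        timeline.append((step, t))
    return timeline

for step, t in explosion_timeline():
    print(f"step {step:2d} reached at t = {t:.3f}")
# The cumulative time approaches 2.0: later steps happen in a vanishing sliver of time.
```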
Evolution has advanced the biological brain slowly and gradually over hundreds of millions of years, and in that sense, if humans birth an ASI machine, we'll be dramatically stomping on evolution. Or maybe this is part of evolution: maybe the way evolution works is that intelligence creeps up more and more until it hits the level where it's capable of creating machine superintelligence, and that level is like a tripwire that triggers a worldwide game-changing explosion that determines a new future for all living things:
And for reasons we'll discuss later, a huge part of the scientific community believes that it's not a matter of whether we'll hit that tripwire, but when. Kind of a crazy piece of information.
So where does that leave us?
Well, no one in the world, especially not I, can tell you what will happen when we hit the tripwire. But Oxford philosopher and lead AI thinker Nick Bostrom believes we can boil down all potential outcomes into two broad categories.
First, looking at history, we can see that life works like this: species pop up, exist for a while, and after some time, inevitably, they fall off the existence balance beam and land on extinction.
"All species eventually go extinct" has been almost as reliable a rule through history as "All humans eventually die" has been. So far, 99.9% of species have fallen off the balance beam, and it seems pretty clear that if a species keeps wobbling along down the beam, it's only a matter of time before some other species, some gust of nature's wind, or a sudden beam-shaking asteroid knocks it off. Bostrom calls extinction an attractor state: a place species are all teetering on falling into and from which no species ever returns.
And while most scientists I've come across acknowledge that ASI would have the ability to send humans to extinction, many also believe that, used beneficially, ASI's abilities could bring individual humans, and the species as a whole, to a second attractor state: species immortality. Bostrom believes species immortality is just as much of an attractor state as species extinction, i.e. if we manage to get there, we'll be impervious to extinction forever; we'll have conquered mortality and conquered chance. So even though all species so far have fallen off the balance beam and landed on extinction, Bostrom believes there are two sides to the beam and it's just that nothing on Earth has been intelligent enough yet to figure out how to fall off on the other side.
If Bostrom and others are right, and from everything I've read it seems like they really might be, we have two pretty shocking facts to absorb:
1) The advent of ASI will, for the first time, open up the possibility for a species to land on the immortality side of the balance beam.
2) The advent of ASI will make such an unimaginably dramatic impact that it's likely to knock the human race off the beam, in one direction or the other.
It may very well be that when evolution hits the tripwire, it permanently ends humans' relationship with the beam and creates a new world, with or without humans.
Kind of seems like the only question any human should currently be asking is: When are we going to hit the tripwire, and which side of the beam will we land on when that happens?
No one in the world knows the answer to either part of that question, but a lot of the very smartest people have put decades of thought into it. We'll spend the rest of this post exploring what they've come up with.
___________
Let's start with the first part of the question: When are we going to hit the tripwire?
i.e. How long until the first machine reaches superintelligence?
Not shockingly, opinions vary wildly, and this is a heated debate among scientists and thinkers. Many, like professor Vernor Vinge, scientist Ben Goertzel, Sun Microsystems co-founder Bill Joy, or, most famously, inventor and futurist Ray Kurzweil, agree with machine learning expert Jeremy Howard, who makes the same case with a graph in his TED Talk.
Those people subscribe to the belief that this is happening soon: that exponential growth is at work, and machine learning, though only slowly creeping up on us now, will blow right past us within the next few decades.
Others, like Microsoft co-founder Paul Allen, research psychologist Gary Marcus, NYU computer scientist Ernest Davis, and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil are vastly underestimating the magnitude of the challenge and that we're not actually that close to the tripwire.
The Kurzweil camp would counter that the only underestimating that's happening is the underappreciation of exponential growth, and they'd compare the doubters to those who looked at the slow-growing seedling of the internet in 1985 and argued that there was no way it would amount to anything impactful in the near future.
The doubters might argue back that the progress needed to make advancements in intelligence also grows exponentially harder with each subsequent step, which will cancel out the typical exponential nature of technological progress. And so on.
A third camp, which includes Nick Bostrom, believes neither group has any ground to feel certain about the timeline and acknowledges both A) that this could absolutely happen in the near future and B) that there's no guarantee about that; it could also take a much longer time.
Still others, like philosopher Hubert Dreyfus, believe all three of these groups are naive for believing that there even is a tripwire, arguing that it's more likely that ASI won't actually ever be achieved.
So what do you get when you put all of these opinions together?
In 2013, Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: "For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI [the survey's term for AGI] to exist?" It asked them to name an optimistic year (one in which they believe there's a 10% chance we'll have AGI), a realistic guess (a year they believe there's a 50% chance of AGI, i.e. after that year they think it's more likely than not that we'll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we'll have AGI). Gathered together as one data set, here were the results:
Median optimistic year (10% likelihood): 2022
Median realistic year (50% likelihood): 2040
Median pessimistic year (90% likelihood): 2075
So the median participant thinks it's more likely than not that we'll have AGI 25 years from now. The 90% median answer of 2075 means that if you're a teenager right now, the median respondent, along with over half of the group of AI experts, is almost certain AGI will happen within your lifetime.
A separate study, conducted recently by author James Barrat at Ben Goertzel's annual AGI Conference, did away with percentages and simply asked when participants thought AGI would be achieved: by 2030, by 2050, by 2100, after 2100, or never. The results:
By 2030: 42% of respondents
By 2050: 25%
By 2100: 20%
After 2100: 10%
Never: 2%
Pretty similar to Müller and Bostrom's outcomes. In Barrat's survey, over two-thirds of participants believe AGI will be here by 2050, and a little less than half predict AGI within the next 15 years. Also striking is that only 2% of those surveyed don't think AGI is part of our future.
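As a quick arithmetic check on those two claims, here's a tiny sketch that just sums the cumulative shares from the figures quoted above:

# Cumulative shares from the Barrat survey figures quoted above.
buckets = {"by 2030": 42, "by 2050": 25, "by 2100": 20, "after 2100": 10, "never": 2}

by_2050 = buckets["by 2030"] + buckets["by 2050"]   # respondents expecting AGI by 2050
print(f"AGI by 2050: {by_2050}%")                    # 67%, i.e. just over two-thirds
print(f"AGI by 2030: {buckets['by 2030']}%")         # 42%, a little less than half
print(f"AGI never:   {buckets['never']}%")           # 2%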
But AGI isn't the tripwire, ASI is. So when do the experts think we'll reach ASI?
Müller and Bostrom also asked the experts how likely they think it is that we'll reach ASI A) within two years of reaching AGI (i.e. an almost-immediate intelligence explosion), and B) within 30 years. The results:
The median answer put a rapid (2-year) AGI-to-ASI transition at only a 10% likelihood, but a longer transition of 30 years or less at a 75% likelihood.
We don't know from this data what transition length the median participant would have put at a 50% likelihood, but for ballpark purposes, based on the two answers above, let's estimate that they'd have said 20 years. So the median opinion (the one right in the center of the world of AI experts) puts the most realistic guess for when we'll hit the ASI tripwire at [the 2040 prediction for AGI + our estimated prediction of a 20-year transition from AGI to ASI] = 2060.
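And here's that ballpark as a sketch: it takes the two transition data points above, crudely interpolates a 50% transition length between them (my own rough assumption; the survey doesn't report this), and adds it to the 2040 median AGI year:

# Rough ballpark of the median ASI arrival year, using the figures quoted above.
# The linear interpolation is a crude assumption for illustration only.
agi_median_year = 2040                  # survey median for a 50% chance of AGI
points = [(2, 0.10), (30, 0.75)]        # (years after AGI, probability of ASI by then)

(y1, p1), (y2, p2) = points
years_to_asi_50 = y1 + (0.50 - p1) * (y2 - y1) / (p2 - p1)   # ~19 years
print(f"interpolated 50% AGI-to-ASI transition: ~{years_to_asi_50:.0f} years")
print(f"ballpark ASI year: ~{agi_median_year + round(years_to_asi_50)}")  # roughly 2060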
Of course, all of the above statistics are speculative, and they're only representative of the center opinion of the AI expert community, but they tell us that a large portion of the people who know the most about this topic would agree that 2060 is a very reasonable estimate for the arrival of potentially world-altering ASI. Only 45 years from now.
Okay now how about the second part of the question above: When we hit the tripwire, which side of the beam will we fall to?
Superintelligence will yield tremendous power. The critical question for us is:
Who or what will be in control of that power, and what will their motivation be?
The answer to this will determine whether ASI is an unbelievably great development, an unfathomably terrible development, or something in between.
Of course, the expert community is again all over the board and in a heated debate about the answer to this question. Müller and Bostrom's survey asked participants to assign a probability to the possible impacts AGI would have on humanity and found that the mean response was a 52% chance that the outcome will be either good or extremely good and a 31% chance that the outcome will be either bad or extremely bad. For a relatively neutral outcome, the mean probability was only 17%. In other words, the people who know the most about this are pretty sure it will be a huge deal. It's also worth noting that those numbers refer to the advent of AGI; if the question were about ASI, I imagine the neutral percentage would be even lower.
Before we dive much further into the good-versus-bad-outcome part of the question, let's combine both the "when will it happen?" and the "will it be good or bad?" parts into a chart that encompasses the views of most of the relevant experts.
We'll talk more about the Main Camp in a minute, but first: what's your deal? Actually, I know what your deal is, because it was my deal too before I started researching this topic; most people just aren't really thinking about this topic at all.
One of the goals of these two posts is to get you out of the I Like to Think About Other Things Camp and into one of the expert camps, even if you're just standing on the intersection of the two dotted lines in the square above, totally uncertain.
During my research, I came across dozens of varying opinions on this topic, but I quickly noticed that most people's opinions fell somewhere in what I labeled the Main Camp, and in particular, over three quarters of the experts fell into two Subcamps inside the Main Camp.
We're gonna take a thorough dive into both of these camps. Let's start with the fun one...
As I learned about the world of AI, I found a surprisingly large number of people standing in Confident Corner.
The people in Confident Corner are buzzing with excitement. They have their sights set on the fun side of the balance beam and they're convinced that's where all of us are headed. For them, the future is everything they ever could have hoped for, just in time.
The thing that separates these people from the other thinkers we'll discuss later isn't their lust for the happy side of the beam; it's their confidence that that's the side we're going to land on.
Where this confidence comes from is up for debate. Critics believe it comes from an excitement so blinding that they simply ignore or deny potential negative outcomes. But the believers say it's naive to conjure up doomsday scenarios when, on balance, technology has helped us a lot more than it has hurt us, and will likely continue to do so.
We'll cover both sides, and you can form your own opinion as you read, but for this section, put your skepticism away and let's take a good hard look at what's over there on the fun side of the balance beam, and try to absorb the fact that the things you're reading might really happen. If you had shown a hunter-gatherer our world of indoor comfort, technology, and endless abundance, it would have seemed like fictional magic to him; we have to be humble enough to acknowledge that an equally inconceivable transformation could be in our future.
Nick Bostrom describes three ways a superintelligent AI system could function: as an oracle that answers questions, as a genie that executes commands, and as a sovereign that pursues broad, open-ended goals on its own.
These questions and tasks, which seem complicated to us, would sound to a superintelligent system like someone asking you to improve upon the "my pencil fell off the table" situation, which you'd do by picking it up and putting it back on the table.
Eliezer Yudkowsky, a resident of Anxious Avenue in our chart above, said it well:
There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from impossible to obvious. Move a substantial degree upwards, and all of them will become obvious.
There are a lot of eager scientists, inventors, and entrepreneurs in Confident Corner, but for a tour of the brightest side of the AI horizon, there's only one person we want as our tour guide.
Ray Kurzweil is polarizing. In my reading, I heard everything from godlike worship of him and his ideas to eye-rolling contempt for them. Others were somewhere in the middle; author Douglas Hofstadter, in discussing the ideas in Kurzweil's books, eloquently put forth that "it is as if you took a lot of very good food and some dog excrement and blended it all up so that you can't possibly figure out what's good or bad."
Whether you like his ideas or not, everyone agrees that Kurzweil is impressive. He began inventing things as a teenager, and in the following decades he came up with several breakthrough inventions, including the first flatbed scanner, the first scanner that converted text to speech (allowing the blind to read standard texts), the well-known Kurzweil music synthesizer (the first true electric piano), and the first commercially marketed large-vocabulary speech recognition system. He's the author of five national bestselling books. He's well known for his bold predictions and has a pretty good record of having them come true, including his prediction in the late '80s, a time when the internet was an obscure thing, that by the early 2000s it would become a global phenomenon. Kurzweil has been called a "restless genius" by The Wall Street Journal, "the ultimate thinking machine" by Forbes, "Edison's rightful heir" by Inc. Magazine, and "the best person I know at predicting the future of artificial intelligence" by Bill Gates. In 2012, Google co-founder Larry Page approached Kurzweil and asked him to be Google's Director of Engineering. In 2011, he co-founded Singularity University, which is hosted by NASA and sponsored partially by Google. Not bad for one life.
This biography is important. When Kurzweil articulates his vision of the future, he sounds exactly like a crackpot, and the crazy thing is that he's not; he's an extremely smart, knowledgeable, relevant man in the world. You may think he's wrong about the future, but he's not a fool. Knowing he's such a legit dude makes me happy, because as I've learned about his predictions for the future, I badly want him to be right. And you do too. As you hear Kurzweil's predictions, many shared by other Confident Corner thinkers like Peter Diamandis and Ben Goertzel, it's not hard to see why he has such a large, passionate following, known as the singularitarians. Here's what he thinks is going to happen:
Timeline
Kurzweil believes computers will reach AGI by 2029 and that by 2045, we'll have not only ASI but a full-blown new world, a time he calls the singularity. His AI-related timeline used to be seen as outrageously overzealous, and it still is by many, but in the last 15 years, the rapid advances of ANI systems have brought the larger world of AI experts much closer to Kurzweil's timeline. His predictions are still a bit more ambitious than the median respondent on Müller and Bostrom's survey (AGI by 2040, ASI by 2060), but not by that much.
In Kurzweil's depiction, the 2045 singularity is brought about by three simultaneous revolutions: in biotechnology, in nanotechnology, and, most powerfully, in AI.
Before we move on: nanotechnology comes up in almost everything you read about the future of AI, so come into this blue box for a minute so we can discuss it.
Nanotechnology Blue Box
Nanotechnology is our word for technology that deals with the manipulation of matter that's between 1 and 100 nanometers in size. A nanometer is a billionth of a meter, or a millionth of a millimeter, and this 1-100 nm range encompasses viruses (100 nm across), DNA (10 nm wide), and things as small as large molecules like hemoglobin (5 nm) and medium molecules like glucose (1 nm). If/when we conquer nanotechnology, the next step will be the ability to manipulate individual atoms, which are only one order of magnitude smaller (~0.1 nm).
To understand the challenge of humans trying to manipulate matter in that range, let's take the same thing on a larger scale. The International Space Station is 268 mi (431 km) above the Earth. If humans were giants so large their heads reached up to the ISS, they'd be about 250,000 times bigger than they are now. If you make the 1 nm to 100 nm nanotech range 250,000 times bigger, you get 0.25 mm to 2.5 cm. So nanotechnology is the equivalent of a human giant as tall as the ISS figuring out how to carefully build intricate objects using materials between the size of a grain of sand and an eyeball. To reach the next level, manipulating individual atoms, the giant would have to carefully position objects that are 1/40th of a millimeter, so small that normal-size humans would need a microscope to see them.
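Here's that scale-up as a quick sketch, assuming a rough 1.7 m human height (my assumption; the post just says the factor comes out to about 250,000):

# Scale the nanotech range up by the "giant whose head reaches the ISS" factor.
scale = 431e3 / 1.7          # ISS altitude in meters / assumed human height ~ 250,000x
for nm in (1, 100, 0.1):     # glucose-sized, virus-sized, single atom
    scaled_m = nm * 1e-9 * scale
    print(f"{nm:>5} nm scaled up -> {scaled_m * 1000:.3f} mm")
# ~0.25 mm for 1 nm, ~25 mm (2.5 cm) for 100 nm, ~0.025 mm (1/40 mm) for an atom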
Nanotech was first discussed by Richard Feynman in a 1959 talk, when he explained: "The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. It would be, in principle, possible for a physicist to synthesize any chemical substance that the chemist writes down. How? Put the atoms down where the chemist says, and so you make the substance." It's as simple as that: if you can figure out how to move individual molecules or atoms around, you can make literally anything.
Nanotech became a serious field for the first time in 1986, when engineer Eric Drexler provided its foundations in his seminal book Engines of Creation, but Drexler suggests that those looking to learn about the most modern ideas in nanotechnology would be best off reading his 2013 book, Radical Abundance.
Gray Goo Bluer Box
We're now in a diversion in a diversion. This is very fun.
Anyway, I brought you here because there's this really unfunny part of nanotechnology lore I need to tell you about. In older versions of nanotech theory, a proposed method of nanoassembly involved the creation of trillions of tiny nanobots that would work in conjunction to build something. One way to create trillions of nanobots would be to make one that could self-replicate and then let the reproduction process turn that one into two, those two into four, four into eight, and in about a day, there'd be a few trillion of them ready to go. That's the power of exponential growth. Clever, right?
It's clever until it accidentally causes the grand and complete Earthwide apocalypse. The issue is that the same power of exponential growth that makes it super convenient to quickly create a trillion nanobots makes self-replication a terrifying prospect. Because what if the system glitches, and instead of stopping replication once the total hits a few trillion as expected, they just keep replicating? The nanobots would be designed to consume any carbon-based material in order to feed the replication process, and unpleasantly, all life is carbon-based. The Earth's biomass contains about 10^45 carbon atoms. A nanobot would consist of about 10^6 carbon atoms, so 10^39 nanobots would consume all life on Earth, which would happen in 130 replications (2^130 is about 10^39), as oceans of nanobots (that's the gray goo) rolled around the planet. Scientists think a nanobot could replicate in about 100 seconds, meaning this simple mistake would inconveniently end all life on Earth in about 3.5 hours.
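And here's that back-of-the-envelope math as a tiny sketch, using only the figures quoted above:

import math

# Gray goo back-of-the-envelope, using the figures from the paragraph above.
carbon_atoms_in_biomass = 1e45     # rough total carbon atoms in Earth's biomass
atoms_per_nanobot       = 1e6      # rough carbon atoms per nanobot
seconds_per_replication = 100      # assumed replication time

nanobots_needed = carbon_atoms_in_biomass / atoms_per_nanobot          # 1e39
doublings = math.ceil(math.log2(nanobots_needed))                      # ~130
hours = doublings * seconds_per_replication / 3600

print(f"nanobots needed: {nanobots_needed:.0e}")
print(f"doublings: {doublings}")        # 2**130 is roughly 1.4e39
print(f"time: ~{hours:.1f} hours")      # ~3.6 hours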
Follow this link:
The Artificial Intelligence Revolution: Part 2 - Wait But Why
Posted in Superintelligence
Comments Off on The Artificial Intelligence Revolution: Part 2 – Wait But Why
Superintelligence: Paths, Dangers, Strategies: Amazon.co.uk …
Posted: at 12:04 pm
Review
I highly recommend this book (Bill Gates)
Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era (Stuart Russell, Professor of Computer Science, University of California, Berkeley)
Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book (Martin Rees, Past President, Royal Society)
This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last? (Max Tegmark, Professor of Physics, MIT)
Terribly important ... groundbreaking ... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole ... If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever (Olle Häggström, Professor of Mathematical Statistics)
Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking (The Economist)
There is no doubting the force of [Bostrom's] arguments ... the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake (Financial Times)
His book Superintelligence: Paths, Dangers, Strategies became an improbable bestseller in 2014 (Alex Massie, Times (Scotland))
A text so sober and cool, so fearless and thus all the more exciting, that what has until now mostly been played out in films suddenly appears highly plausible (translated from German) (Georg Diez, DER SPIEGEL)
Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes (Elon Musk, Founder of SpaceX and Tesla)
A damn hard read (Sunday Telegraph)
I recommend Superintelligence by Nick Bostrom as an excellent book on this topic (Jolyon Brown, Linux Format)
Every intelligent person should read it. (Nils Nilsson, Artificial Intelligence Pioneer, Stanford University)
Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009). He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.
Read more from the original source:
Superintelligence: Paths, Dangers, Strategies: Amazon.co.uk ...
Posted in Superintelligence
Comments Off on Superintelligence: Paths, Dangers, Strategies: Amazon.co.uk …