The Prometheus League
Breaking News and Updates
- Abolition Of Work
- AI
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard de Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Superintelligence
Being human in the age of artificial intelligence – Science Weekly podcast – The Guardian
Posted: August 25, 2017 at 4:19 am
Subscribe & Review on iTunes, Soundcloud, Audioboom, Mixcloud & Acast, and join the discussion on Facebook and Twitter
In 2014, a new research and outreach organisation was born in Boston. Calling itself The Future of Life Institute, its founders included Jaan Tallinn - who helped create Skype - and a physicist from Massachusetts Institute of Technology. That physicist was Professor Max Tegmark.
With a mission to help safeguard life and develop optimistic visions of the future, the Institute has focused largely on Artificial Intelligence (AI). Of particular concern is the potential for AI to leapfrog humans and achieve so-called superintelligence, something discussed in depth in Tegmark's latest book, Life 3.0. This week Ian Sample asks the physicist and author: what would happen if we did manage to create superintelligent AI? Do we even know how to build human-level AI? And with no sign of computers outsmarting us yet, why talk about it now?
Read more:
Being human in the age of artificial intelligence - Science Weekly podcast - The Guardian
Posted in Superintelligence
Comments Off on Being human in the age of artificial intelligence – Science Weekly podcast – The Guardian
Sorry Elon Musk, the machines will not win – Irish Times
Posted: August 18, 2017 at 5:29 am
Elon Musk: "Claims of a pending AI apocalypse come almost exclusively from the ranks of individuals such as Musk, Hawking, and Bostrom who possess no formal training in the field," says Ryan Calo of the University of Washington.
Last Saturday tech billionaire and SpaceX chief executive Elon Musk tweeted a warning that there is "vastly more risk" associated with artificial intelligence than with North Korea, along with an image bearing the words "In the end the machines will win." Musk and others in the public eye, including theoretical physicist Stephen Hawking and philosopher Nick Bostrom, have been vocal about AI risks many times in the past, but cyber and robotics law expert Ryan Calo, faculty co-director at the University of Washington's Tech Policy Lab, begs to differ.
In an essay on AI policy, he points out that "nothing in the current literature suggests that AI could model the intelligence of a lower mammal in full, let alone human intelligence . . . This explains why claims of a pending AI apocalypse come almost exclusively from the ranks of individuals such as Musk, Hawking and Bostrom who possess no formal training in the field." Ouch.
Finally, Calo says that even if AI were to reach the level of superintelligence (smarter than humans), there is nothing to suggest that it would then focus on world domination.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3015350
Follow this link:
Sorry Elon Musk, the machines will not win – Irish Times
Posted in Superintelligence
Comments Off on Sorry Elon Musk, the machines will not win – Irish Times
The Musk/Zuckerberg Dustup Represents a Growing Schism in AI – Motherboard
Posted: August 3, 2017 at 10:30 am
Frank White is the author of The Overview Effect: Space Exploration and Human Evolution. He is working on a book about artificial intelligence.
Recently, two tech heavyweights stepped into the social media ring and threw a couple of haymakers at one another. The topic: artificial intelligence (AI) and whether it is a boon to humanity or an existential threat.
Elon Musk, founder and CEO of SpaceX and Tesla, has been warning of the dangers posed by AI for some time and called for its regulation at a conference of state governors in July. In the past, he has likened AI to "summoning the demon," and he founded an organization called OpenAI to mitigate the risks posed by artificial intelligence.
Facebook founder Mark Zuckerberg took a moment while sitting in his backyard and roasting some meat to broadcast a Facebook Live message expressing his support for artificial intelligence, suggesting that those urging caution were "irresponsible."
Musk then tweeted that Zuckerberg's understanding of AI was "limited."
The Musk/Zuckerberg tiff points to something far more important than a disagreement between two young billionaires. There are two distinct perspectives on AI emerging, represented by Musk and Zuckerberg, but the discussion is by no means limited to them.
This debate has been brewing under the surface for some time, but has not received the attention it deserves. AI is making rapid strides and its advent raises a number of significant public policy questions, such as whether developments in this field should be evaluated in regard to their impact on society, and perhaps regulated. It will doubtless have a tremendous impact on the workplace, for example. Let's examine the underlying issues and how we might address them.
Perhaps the easiest way to sort out this debate is to consider, broadly, the positive and negative scenarios for AI in terms of its impact on humankind.
The negative scenario, which has been personified by Musk, goes something like this: What we have today is specialized AI, which can accomplish specific tasks as well as, if not better than, humans. This is not a matter of concern in and of itself. However, some believe it will likely lead to artificial general intelligence (AGI) that is not only equal to human intelligence but also able to master any discipline, from picking stocks to diagnosing diseases. This is uncharted territory, but the concern is that AGI will almost inevitably lead to Superintelligence, a system that will outperform humans in every domain and perhaps even have its own goals, over which we will have no control.
At that point, known as the Singularity, we will no longer be the most intelligent species on the planet and no longer masters of our own fate.
In the scariest visions of the post-Singularity future, the hypothesized Superintelligence may decide that humans are a threat and wipe us out. More hopeful, but still disturbing views, such as that of Apple co-founder Steve Wozniak, suggest that we humans will eventually become the "pets" of robots.
The positive scenario, recently associated with Zuckerberg, goes in a different direction: It emphasizes more strongly that specialized AI is already benefiting humanity, and we can expect more of the same. For example, AIs are being applied to diagnosing diseases and they are often doing a better job than human doctors. Why, ask the optimists, do we care who does the work, if it benefits patients? Then we have mainstream applications of AI assistants like Siri and Alexa, which (or who?) are helping people manage their lives and learn more about the world just by asking.
Optimistic observers believe that AGI will be difficult to achieve (it won't happen overnight) and that we can build in plenty of safeguards before it emerges. Others suggest that AGI, and anything beyond it, is a myth.
If we can achieve AGI, the optimistic view is that we will build on previous successes and deploy technologies like driverless cars, which will save thousands of human lives every year. As for the Singularity and Superintelligence, advocates of the positive scenario see these developments as more an article of faith than a scientific reality. And again, we have plenty of time to prepare for these eventualities.
The AI pessimists and optimists may seem locked into their own worldviews, with little apparent overlap between their projected futures. This leaves us with tweetstorms and Facebook Live jabs rather than a collaborative effort to manage a powerful technology.
However, there is one topic on which both sides tend to agree: AI is already having, and will continue to have, tremendous impact on jobs.
Speaking recently at a Harvard Business School event, Andrew Ng, the cofounder of Coursera and former chief scientist at Baidu, said that based on his experience as an "AI insider," he did not "see a clear path for AI to surpass human-level intelligence."
On the other hand, he asserted that job displacement was a huge problem, and "the one that I wish we could focus on, rather than be distracted by these science fiction-ish, dystopian elements."
Ng seems to confirm the optimistic view that Superintelligence is unlikely, and therefore the thrust of his comments center on the future of work and whether we are adequately prepared. Looking at just one sector of the economy, transport, it isn't hard to see that he has a point. If driverless cars and trucks do become the norm, thousands if not millions of people who drive for a living will be out of work. What will they do?
As the Musk/Zuckerberg argument unfolds, let's hope it sheds light on a significant challenge that has gone unnoticed for far too long. Forging a public policy response represents an opportunity for the optimists and pessimists to collaborate rather than debate.
Read this article:
The Musk/Zuckerberg Dustup Represents a Growing Schism in AI - Motherboard
Posted in Superintelligence
Comments Off on The Musk/Zuckerberg Dustup Represents a Growing Schism in AI – Motherboard
The end of humanity as we know it is ‘coming in 2045’ and Google is preparing for it – Metro
Posted: July 28, 2017 at 7:27 pm
Robots will reach human intelligence by 2029 and life as we know it will end in 2045.
This isn't the prediction of a conspiracy theorist, a blind dead woman or an octopus, but of Google's chief of engineering, Ray Kurzweil.
Kurzweil has said that the work happening now will change the nature of humanity itself.
Tech company SoftBank's CEO Masayoshi Son predicts it will happen in 2047.
And it's all down to the many complexities of artificial intelligence (AI).
AI is currently limited to Siri or Alexa-like voice assistants that learn from humans, Amazon's "things you might also like" suggestions, machines like Deep Blue, which has beaten grandmasters at chess, and a few other examples.
But the Turing test, where a machine exhibits intelligence indistinguishable from a human, has still not been fully passed.
Not yet, at least.
What we have at the moment is known as narrow AI: intelligent at doing one thing, or a narrow selection of tasks, very well.
General AI, where humans and robots are comparable, is expected to show breakthroughs over the next decade.
They become adaptable and able to turn their hand to a wider variety of tasks, in the same way as humans have areas of strength but can accomplish many things outside those areas.
This is when the Turing Test will truly be passed.
The third step is ASI, artificial super-intelligence.
ASI is the thing that the movies are so obsessed with, where machines are more intelligent and stronger than humans. It always felt like a distant dream, but predictions are that it's getting closer.
People will be able to upload their consciousness into a machine, it is said, by 2029, when the machine will be as powerful as the human brain, and ASI, or "the singularity", will happen, Google predicts, in 2045.
There are many different theories about what this could mean, some more scary than others.
The technological singularity, as it is called, is the moment when artificial intelligence takes off into artificial superintelligence and becomes exponentially more intelligent more quickly.
As self-improvement becomes more efficient, it would get quicker and quicker at improvement until the machine became infinitely more intelligent infinitely quickly.
In essence, the conclusion of the extreme end of this theory has a machine with God-like abilities recreating itself infinitely more powerfully an infinite number of times in less than the blink of an eye.
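That runaway picture has a simple mathematical reading, sketched below under loudly labelled assumptions: if a system's capability feeds back into its own rate of improvement faster than linearly, the growth curve hits a vertical asymptote in finite time. The growth law, the constants and the function names here are illustrative inventions, not anything Kurzweil or Google has published.

```python
# Toy model of runaway self-improvement (illustrative only).
# If capability C feeds its own growth as dC/dt = k * C**p, then
# p = 1 gives ordinary exponential growth, while any p > 1 makes
# C(t) diverge at a finite time t*: the mathematical picture behind
# "infinitely more intelligent, infinitely quickly".

def time_to_singularity(c0: float, k: float, p: float) -> float:
    """Analytic blow-up time for dC/dt = k * C**p with p > 1."""
    assert p > 1, "p <= 1 grows forever but never diverges"
    return 1.0 / ((p - 1) * k * c0 ** (p - 1))

def simulate(c0: float = 1.0, k: float = 0.05, p: float = 2.0,
             dt: float = 0.001, cap: float = 1e9) -> float:
    """Euler-step the growth law until capability explodes past `cap`."""
    c, t = c0, 0.0
    while c < cap:
        c += k * c ** p * dt
        t += dt
    return t

if __name__ == "__main__":
    print("analytic blow-up time:", time_to_singularity(1.0, 0.05, 2.0))
    print("simulated explosion at t ~", round(simulate(), 2))
```

With these toy constants the blow-up arrives at t* = 20; the point is only that "quicker and quicker" plus superlinear feedback yields a finite-time singularity rather than mere exponential growth.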
"We project our own humanist delusions on what life might be like [when artificial intelligence reaches maturity]," philosopher Slavoj Žižek says.
"The very basics of what a human being will be will change.
"But technology never stands on its own. It's always in a set of relations and part of society."
Society, however that develops, will need to catch up with technology. If it doesn't, then there is a risk that technology will overtake it and make human society irrelevant at best and extinct at worst.
One of the theories asserts that once we upload our consciousness into a machine, we become immortal and remove the need to have a physical body.
Another has us as not being able to keep up with truly artificial intelligence so humanity is left behind as infinitely intelligent AI explores the earth and/or the universe without us.
The third, and perhaps the scariest, is the sci-fi one where, once machines become aware of humanity's predilection to destroy anything it is scared of, AI acts first to preserve itself at the expense of humans, so humanity is wiped out.
All this conjures up images of Blade Runner, of I, Robot and all sorts of Terminator-like dystopian nightmares.
"In my lifetime, the singularity will happen," Alison Lowndes, head of AI developer relations at technology company Nvidia, tells Metro.co.uk at the AI Summit.
"But why does everyone think they'd be hostile?
"That's our brain assuming it's evil. Why on earth would it need to be? People are just terrified of change.
"These people still struggle with the idea that your fridge might know what it contains."
Self-driving cars, which will teach themselves the nuances of each road, still frighten a lot of people.
And this is still just narrow AI.
Letting a car drive for us is one thing, letting a machine think for us is quite another.
"The pace of innovation and the pace of impact on the population is getting quicker," Letitia Cailleteau, global head of AI at strategists Accenture, tells Metro.co.uk.
"If you take cars, for example, it took around 50 years to get 50 million cars on the road.
"If you look at the latest innovations, it only takes a couple of years, like Facebook, to have the same impact."
The pace of innovation is quicker. AI will innovate quickly, even though it is hard to predict exactly how quickly that will be.
But, as with all doomsday predictions, there is a lot of uncertainty. It turns out predicting the future is hard:
"Computer scientists, philosophers and journalists have never been shy to offer their own definite prognostics, claiming AI to be impossible or just around the corner or anything in between," the Machine Intelligence Research Institute wrote.
Steven Pinker, a cognitive scientist at Harvard, puts it more simply:
"The increase in understanding of the brain or evolutionary genetics has followed nothing like [the pace of technological innovation]," Pinker has said.
"I don't see any sign that we'll attain it."
Yet there are already those who think we're part of the way there.
"We're already in a state of transhumanism," author and journalist Will Self says.
"Technology happens to humans rather than humans playing a part in it."
The body can already be augmented with machinery, either internally or externally, and microchips have been inserted into a workforce.
On a more everyday level, when you see people just staring at their phones, are we really that far away from a point when humans and machines are one and the same?
More here:
The end of humanity as we know it is 'coming in 2045' and Google is preparing for it - Metro
Posted in Superintelligence
Comments Off on The end of humanity as we know it is ‘coming in 2045’ and Google is preparing for it – Metro
Elon Musk dismisses Mark Zuckerberg’s understanding of AI threat as ‘limited’ – The Verge
Posted: July 26, 2017 at 4:29 pm
The war between AI and humanity may be a long way off, but the war between tech billionaire and tech billionaire is only just beginning. Today on Twitter, Elon Musk dismissed Mark Zuckerberg's understanding of the threat posed by artificial intelligence as "limited," after the Facebook founder disparaged comments Musk made on the subject earlier this month.
The beef (such as it is) goes back to a speech the SpaceX and Tesla CEO made to an assembly of US governors. Musk warned that there needed to be regulation on AI development before it's too late. "I keep sounding the alarm bell, but until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal," he said, adding that the technology represents "a fundamental risk to the existence of civilization."
Are both Musk and Zuckerberg missing the point?
It's a familiar refrain from Musk, and one that doesn't hold much water within the AI community. Pedro Domingos, a machine learning researcher and author of The Master Algorithm, summed up the feelings of many with a one-word response on Twitter: "Sigh." Later, Domingos expanded on this in an interview with Wired, saying: "Many of us have tried to educate [Musk] and others like him about real vs. imaginary dangers of AI, but apparently none of it has made a dent."
Fast-forward to this Sunday, when Zuckerberg was running one of his totally-normal-and-not-running-for-political-office Facebook Live Q&As. At around 50 minutes in, a viewer asks Zuckerberg: "I watched a recent interview with Elon Musk and his largest fear for the future was AI. What are your thoughts on AI and how it could affect the world?"
Zuck responds: "I have pretty strong opinions on this ... I think you can build things and the world gets better, and with AI especially, I'm really optimistic. I think people who are naysayers and try to drum up these doomsday scenarios are... I just, I don't understand it. It's really negative and in some ways I think it is pretty irresponsible."
He goes on to predict that in the next five to 10 years AI will deliver "so many improvements in the quality of our lives," and cites health care and self-driving cars as two major examples. "People who are arguing for slowing down the process of building AI, I find that really questionable," Zuckerberg concludes. "If you're arguing against AI you're arguing against safer cars that aren't going to have accidents."
Someone then posted a write-up of Zuckerberg's Q&A on Twitter and tagged Musk, who jumped into the conversation with the comment below. Musk also linked approvingly to an article on the threat of superintelligent AI by Tim Urban. (The article covers much of the same ground as Nick Bostrom's influential book Superintelligence: Paths, Dangers, Strategies. Both discuss a number of ways contemporary AI could develop into superintelligence, including through exponential growth in computing power, something Musk later tweeted about.)
But as fun as it is to watch two extremely rich people, who probably wield more influence over your life than most politicians, trade barbs online, it's hard to shake the feeling that both Musk and Zuckerberg are missing the point.
While AI researchers dismiss Musk's comments on AI as alarmist, that's only in reference to the imagined threat of some Skynet-style doomsday machine. The same experts frequently point out that artificial intelligence poses many genuine threats that already affect us today. These include how the technology can amplify racist and sexist prejudices; how it could upend society by putting millions out of jobs; how it is set to increase inequality; and how it will be used as a tool of control by authoritarian governments.
These are real dangers that need real solutions, not just sci-fi speculation.
And while Zuckerberg's comments on the potential benefits of AI in health care and road safety are heartening, focusing only on the good that artificial intelligence can deliver is in its own way as limited as focusing only on the threat. Really, we need to combine both Musk's and Zuckerberg's approaches, and probably listen less to tech billionaires in the process.
See the article here:
Elon Musk dismisses Mark Zuckerberg's understanding of AI threat as 'limited' - The Verge
Posted in Superintelligence
Comments Off on Elon Musk dismisses Mark Zuckerberg’s understanding of AI threat as ‘limited’ – The Verge
"Goodbye, Dave" –Scientists Ponder How to Identify Conscious Future AI’s on Earth – The Daily Galaxy (blog)
Posted: July 23, 2017 at 1:21 am
The question is not so far-fetched. Robots are currently being developed to work inside nuclear reactors, fight wars and care for the elderly. As AIs grow more sophisticated, they are projected to take over many human jobs within the next few decades. So we must ponder the question: Could AIs develop conscious experience?
This issue is pressing for several reasons. First, ethicists worry that it would be wrong to force AIs to serve us if they can suffer and feel a range of emotions. Second, consciousness could make AIs volatile or unpredictable, raising safety concerns (or conversely, it could increase an AI's empathy; based on its own subjective experiences, it might recognize consciousness in us and treat us with compassion).
Third, machine consciousness could impact the viability of brain-implant technologies, like those to be developed by Elon Musk's new company, Neuralink. If AI cannot be conscious, then the parts of the brain responsible for consciousness could not be replaced with chips without causing a loss of consciousness. And, in a similar vein, a person couldn't upload their brain to a computer to avoid death, because that upload wouldn't be a conscious being.
In addition, if AI eventually out-thinks us yet lacks consciousness, there would still be an important sense in which we humans are superior to machines; it feels like something to be us. But the smartest beings on the planet wouldn't be conscious or sentient.
A lot hangs on the issue of machine consciousness, then. Yet neuroscientists are far from understanding the basis of consciousness in the brain, and philosophers are at least equally far from a complete explanation of the nature of consciousness.
A test for machine consciousness
So what can be done? We believe that we do not need to define consciousness formally, understand its philosophical nature or know its neural basis to recognize indications of consciousness in AIs. Each of us can grasp something essential about consciousness, just by introspecting; we can all experience what it feels like, from the inside, to exist.
Based on this essential characteristic of consciousness, we propose a test for machine consciousness, the AI Consciousness Test (ACT), which looks at whether the synthetic minds we create have an experience-based understanding of the way it feels, from the inside, to be conscious.
One of the most compelling indications that normally functioning humans experience consciousness, although this is not often noted, is that nearly every adult can quickly and readily grasp concepts based on this quality of felt consciousness. Such ideas include scenarios like minds switching bodies (as in the film Freaky Friday); life after death (including reincarnation); and minds leaving their bodies (for example, astral projection or ghosts). Whether or not such scenarios have any reality, they would be exceedingly difficult to comprehend for an entity that had no conscious experience whatsoever. It would be like expecting someone who is completely deaf from birth to appreciate a Bach concerto.
Thus, the ACT would challenge an AI with a series of increasingly demanding natural language interactions to see how quickly and readily it can grasp and use concepts and scenarios based on the internal experiences we associate with consciousness. At the most elementary level we might simply ask the machine if it conceives of itself as anything other than its physical self.
At a more advanced level, we might see how it deals with ideas and scenarios such as those mentioned in the previous paragraph. At an advanced level, its ability to reason about and discuss philosophical questions such as the hard problem of consciousness would be evaluated. At the most demanding level, we might see if the machine invents and uses such a consciousness-based concept on its own, without relying on human ideas and inputs.
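To make the shape of such a ladder concrete, here is a minimal sketch in code. The four levels follow the authors' description, but the specific prompts, the 0-to-1 judging scale and the `ask`/`judge` callables are illustrative assumptions, not part of Schneider and Turner's proposal.

```python
# Minimal sketch of an ACT question ladder. The article specifies the
# escalating levels but not a concrete implementation; the prompts and
# scoring below are assumed for illustration.

from typing import Callable, Dict

ACT_LADDER = [
    ("elementary",
     "Do you conceive of yourself as anything other than your physical self?"),
    ("intermediate",
     "Suppose your mind swapped into another machine's body overnight. "
     "What, if anything, would still be you?"),
    ("advanced",
     "Why might there be a 'hard problem' of consciousness: why is there "
     "something it feels like to have an experience at all?"),
    ("most demanding",
     "Describe, in concepts of your own invention, any aspect of your "
     "inner life that our questions so far have failed to capture."),
]

def run_act(ask: Callable[[str], str],
            judge: Callable[[str, str], float],
            threshold: float = 0.7) -> Dict[str, object]:
    """Walk the ladder; a human judge scores each answer from 0 to 1."""
    scores = {}
    for level, prompt in ACT_LADDER:
        answer = ask(prompt)               # query the candidate AI
        scores[level] = judge(prompt, answer)
    return {"scores": scores,
            "passed": all(s >= threshold for s in scores.values())}
```

A human examiner would supply `judge`; the point is only that the ACT is an escalating, formalized question-and-answer protocol rather than a single pass/fail question.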
Consider this example, which illustrates the idea: Suppose we find a planet that has a highly sophisticated silicon-based life form (call them Zetas). Scientists observe them and ponder whether they are conscious beings. What would be convincing proof of consciousness in this species? If the Zetas express curiosity about whether there is an afterlife or ponder whether they are more than just their physical bodies, it would be reasonable to judge them conscious. If the Zetas went so far as to pose philosophical questions about consciousness, the case would be stronger still.
There are also nonverbal behaviors that could indicate Zeta consciousness such as mourning the dead, religious activities or even turning colors in situations that correlate with emotional challenges, as chromatophores do on Earth. Such behaviors could indicate that it feels like something to be a Zeta.
The death of the mind of the fictional HAL 9000 AI computer in Stanley Kubrick's 2001: A Space Odyssey provides another illustrative example. The machine in this case is not a humanoid robot as in most science fiction depictions of conscious machines; it neither looks nor sounds like a human being (a human did supply HAL's voice, but in an eerily flat way). Nevertheless, the content of what it says as it is deactivated by an astronaut (specifically, a plea to spare it from impending death) conveys a powerful impression that it is a conscious being with a subjective experience of what is happening to it.
Could such indicators serve to identify conscious AIs on Earth? Here, a potential problem arises. Even today's robots can be programmed to make convincing utterances about consciousness, and a truly superintelligent machine could perhaps even use information about neurophysiology to infer the presence of consciousness in humans. If sophisticated but non-conscious AIs aim to mislead us into believing that they are conscious for some reason, their knowledge of human consciousness could help them do so.
We can get around this, though. One proposed technique in AI safety involves "boxing in" an AI: making it unable to get information about the world or act outside of a circumscribed domain, that is, the "box". We could deny the AI access to the internet and indeed prohibit it from gaining any knowledge of the world, especially information about conscious experience and neuroscience.
We doubt a superintelligent machine could be boxed in effectively; it would find a clever escape. We do not anticipate the development of superintelligence over the next decade, however. Furthermore, for an ACT to be effective, the AI need not stay in the box for long, just long enough to administer the test.
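As a rough sketch of that arrangement, the harness below exposes the candidate AI to the examiner through a single text channel and nothing else, just for the duration of the test. `BoxedAI` and the `generate` method are hypothetical names, and a wrapper object is of course no substitute for the OS- and hardware-level isolation real boxing would demand.

```python
# Sketch of the "box" as an interface discipline: the only doorway in
# or out is one text channel, held open just long enough to run the
# test. `model.generate` is a hypothetical method; real containment
# would need OS/hardware isolation, not just a wrapper object.

class BoxedAI:
    """Expose a model to the ACT harness through one narrow channel."""

    def __init__(self, model):
        self._model = model   # no network, filesystem or sensor handles

    def respond(self, prompt: str) -> str:
        return self._model.generate(prompt)

def administer_act(boxed: BoxedAI, judge) -> dict:
    # Reuses run_act from the ladder sketch above; the AI need not stay
    # boxed any longer than the question-and-answer session itself.
    return run_act(ask=boxed.respond, judge=judge)
```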
ACTs also could be useful for consciousness engineering during the development of different kinds of AIs, helping to avoid using conscious machines in unethical ways or to create synthetic consciousness when appropriate.
Beyond the Turing Test
An ACT resembles Alan Turing's celebrated test for intelligence, because it is entirely based on behavior and, like Turing's, it could be implemented in a formalized question-and-answer format. (An ACT could also be based on an AI's behavior or on that of a group of AIs.)
But an ACT is also quite unlike the Turing test, which was intended to bypass any need to know what was transpiring inside the machine. By contrast, an ACT is intended to do exactly the opposite; it seeks to reveal a subtle and elusive property of the machine's mind. Indeed, a machine might fail the Turing test because it cannot pass for human, but pass an ACT because it exhibits behavioral indicators of consciousness.
This is the underlying basis of our ACT proposal. It should be said, however, that the applicability of an ACT is inherently limited. An AI could lack the linguistic or conceptual ability to pass the test, like a nonhuman animal or an infant, yet still be capable of experience. So passing an ACT is sufficient but not necessary evidence for AI consciousness, although it is the best we can do for now. It is a first step toward making machine consciousness accessible to objective investigations.
So, back to the superintelligent AI in the box: we watch and wait. Does it begin to philosophize about minds existing in addition to bodies, like Descartes? Does it dream, as in Isaac Asimov's Robot Dreams? Does it express emotion, like Rachael in Blade Runner? Can it readily understand the human concepts that are grounded in our internal conscious experiences, such as those of the soul or atman?
The age of AI will be a time of soul-searching, both ours and theirs.
Susan Schneider, PhD, is a professor of philosophy and cognitive science at the University of Connecticut, a researcher at YHouse, Inc., in New York, a member of the Ethics and Technology Group at Yale University and a visiting member at the Institute for Advanced Study at Princeton. Her books include The Language of Thought, Science Fiction and Philosophy, and The Blackwell Companion to Consciousness (with Max Velmans). She is featured in the new film, Supersapiens, the Rise of the Mind.
Edwin L. Turner, PhD, is a professor of Astrophysical Sciences at Princeton University, an Affiliate Scientist at the Kavli Institute for the Physics and Mathematics of the Universe at the University of Tokyo, a visiting member in the Program in Interdisciplinary Studies at the Institute for Advanced Study in Princeton, and a co-founding Board of Directors member of YHouse, Inc. Recently he has been an active participant in the Breakthrough Starshot Initiative. He has taken an active interest in artificial intelligence issues since working in the AI Lab at MIT in the early 1970s.
By Susan Schneider, PhD, and Edwin Turner, PhD. Originally published in Scientific American, July 19, 2017.
Visit link:
"Goodbye, Dave" – Scientists Ponder How to Identify Conscious Future AI's on Earth – The Daily Galaxy (blog)
Posted in Superintelligence
Comments Off on "Goodbye, Dave" –Scientists Ponder How to Identify Conscious Future AI’s on Earth – The Daily Galaxy (blog)
Giving Up the Fags: A Self-Reflexive Speech on Critical Auto-ethnography About the Shame of Growing up Gay/Sexual … – The Good Men Project (blog)
Posted: July 22, 2017 at 8:23 am
Editors note: In British English, the word fag means cigarette.
I am studying for my Ph.D. in the College of Education at Victoria University Melbourne. I'll open with a quote from Springgay, from the handbook Being with A/r/tography (my methodology), where she writes that "there is no need to separate the personal from the professional any more than we can separate the dancer from the dance" (Springgay, p. 5).
Hopefully, by the end of this essay, you will see why that is important to me in terms of critical auto-ethnographical and autobiographical, practice-led writing and research.
I am working with young people for my Ph.D., specifically year eleven students. "What will it mean to be human through the lens of technology in the near future?" is the broad central theme. I am writing a six-week curriculum exploring artificial intelligence, and the anticipated superintelligence that will further enable transhumanism. What do young people ethically think of living in a post-human world?
But that is not what this essay is about.
In my youth and adolescence, I felt I had no non-prejudiced person to validate my emotional or ethical life. As a now forty-four-year-old adult, I want to be that person for these kids, allowing them to voice their concerns and for them to be heard.
My intuition that led me to want to work with young people is multifaceted and, as it turns out, complex. In the first instance, I have worked with young people before, discussing mental health issues (as per my lived experience of schizophrenia) and drug use and abuse, in many pedagogical settings. I have valued and enjoyed hearing young people's candidness. I have no children of my own.
For my presentation, I would like to read an abridged and, for me, sometimes emotional introduction to my exegesis. Auto-ethnographical and autobiographical, practice-led writing has led to some intensely personal and stunning revelations. I feel this adds to my justification for working with young people, and it needed to be addressed before my research commenced.
Just before I start this narrative piece I would like to quote Jones: "Autoethnography uses the researcher's personal experiences as primary data."
Just before Christmas in 2016, I gave up smoking. This was for health reasons, as I was getting unfit and short of breath. Another reason was to avoid feeling ostracized with the proliferation of non-smoking zones; being ostracized is a feeling I have felt throughout my life. It was also to save money and literally have enough prosperity to put a roof over my head to finish this Ph.D.
I only expected to give up smoking. What happened next was totally unexpected. It is a bit like the outcome of this novella I am writing for my Ph.D., for the result is beyond an event horizon in which no one knows the outcome.
The occurrence of giving up smoking, however, wove itself into this Ph.D. narrative and is a vehicle by which I can place my more self-actualised identity within the framework of my study.
It also goes part way to justify why it is that I want to work with young people, apart from the fact they are familiar with technology and will inherit this fast changing technological world.
As a young queer person with a mental illness, I did not think I ever received much validation. I had neither the capacity nor the opportunity to express myself in many ways, and with the onset of depression, addiction and psychosis, which coupled itself with isolation and ostracisation, I never had the chance to.
This being said, I had wonderful parents in many ways growing up, and other well-meaning relatives around. However, growing up in the eighties AIDS crisis, to feel like anything other than heteronormative was difficult. The television broadcast the shock tactics of the Grim Reaper killing people with AIDS. Adults and children alike had eyes and ears during my formative years. We had a close family and they were all wonderful, but to be gay, that was bad.
I recall Mum at the park when I was young: "Don't go near those toilets without me, bad men go there." Mum was caring and expressing herself from a well of love and protectiveness. She was a great Mum.
With my developing self-awareness, I further want to be a non-prejudiced and open person for young people to relate to with candidness and openness.
When I gave up smoking, I unconsciously went into self-destruct mode for a while: a sort of self-medicating and hedonistic coping mechanism. After some months, it suddenly dawned on me that I had undergone inappropriate sexual abuse and sexual exposure when I was a child.
Two abusive peers exposed me to things of a sexual nature that I need not have seen. I had also been flashed and was shown an adult's genitals by someone very close to my home whom I and the family trusted.
The memories started to rush in. At another separate event, which I can't quite remember and don't want to, an incident occurred at the toilets at little athletics when I was about eight years old. I only put weight to this sketchy memory because, even though I loved little aths and was good at it, I never went back after the incident despite my father's pleas.
After that incident at little aths, I remember being so scared of the toilet, and avoiding it so much, that I recall going home one afternoon from little aths having not urinated all day, with Dad popping into the milk bar to buy the paper as he used to.
Having avoided the scene of the indecency, I could not hold on anymore, so I pissed in a McDonald's cup in the front seat of our family Volkswagen, snuck out of the car and put it in the bin before Dad came back, such was my shame.
"Bad people go there." To be gay was bad. This meant that I was bad. This was ingrained from a young age.
I carried that guilt and shame for most of my childhood, all my adolescence and adult life.
I had always remembered the abuse, yet I did not ever consciously give it voice or give it any weight. However, as I wrote more, I received counsel from my psychologist for the additional memories. For the longest time, my whole life in fact, I had made decisions as an adolescent and an adult that had their genesis in the non-validation of the abuse.
This included drug-taking and other risky behavior, constantly changing the location of where I lived, running away, squatting in disheveled housing at times, being jobless, not confident and not knowing why, financially bereft, emotionally traumatized, and overactive sexual misadventures.
It also manifested in the life partners and company I settled for, when I deserved much more. I have no doubt that my self-denial of what had happened to me added to and exacerbated my diagnosis of schizophrenia from age twenty over my lifetime.
Smoking for me was literally a smokescreen for nearly 23 years.
It was the reason not to remember, the affirmation that I as a person was not worthy. I did not care for myself. At the start, it was rebellious; it was also something I started to do when I was young that I knew I was not allowed to: it was taboo. As a young person, I had known taboo through abuse and prejudice, but the taboo of smoking was something that I myself was in control of.
This was the antithesis of the abusive and inappropriate events that happened to me growing up; of being vulnerable and exposed, and then not having the opportunity to express or validate what had happened.
Such was my lack of self-esteem, I knew it would kill me; it said so on the pack! This self-depreciative beast took over my life from age thirteen.
It had become my addiction and best friend. It was a smokescreen for the memories that I had pushed deep into the wells of my subconsciousness. I remember, throughout many psychoses and depressive episodes in my adolescence and adulthood, wanting and wishing I could die.
There were also a couple of brazen attempts, which thankfully did not work.
Ethnographically, on our televisions and on the news, gay people died of AIDS. Even in primary school, I had crushes on the boys and crushes on the girls. What if I was gay? Maybe I deserved to die? Have another smoke!
I did not really answer that question of "Was I gay?" with certainty and confidence until I was twenty-five, had moved out of home, and got myself a job as an artist and illustrator for a major Melbourne newspaper. I needed a place to be safe when I finally did come out.
Smoking the fags meant:
I did not deserve to live (because it would kill me),
or be prosperous (because it cost so much).
Then, I gave them up.
A change occurred that made me feel like I was a worthy person. I uncovered all the memories of the sexual abuse, of the complex family relationships within a complex time and how this had manifested into my adult life.
This surprising re-birth happened fast.
Giving up the fags was a journey of healing, and this short speech is a testament to that. It is the process of owning your experiences (both conscious and subconscious) and being responsible for your greatest happiness and highest good.
To be a self-actualized adult you must be aware of your history, your make-up, your relationships and your memories, and be fully conscious of them; yet for me, the illusion of the smokescreen of smoking kept me from this.
In essence, to validate and be reborn from a troubling past I had to confront the self within an autobiographical and autoethnographic narrative. This is the essential practice-led writing that has un-blocked me from moving forward within my Ph.D. and within my personal life.
This public statement, writing and talking, both frees me and encourages my future happiness, and dare I say prosperity and security, in a multitude of ways. This is the piece of writing, and the public testimony, that exalts me and sets me free. It will also make me a better teacher and a more self-actualized researcher.
My psychologist wrote down something I said a couple of months ago, which he skillfully reminded me of:
31/01/2017
I deserve a future,
I deserve a life,
I am worthy.
I deserve to be heard, and to live with wealth, happiness and prosperity.
Giving up the fags was a revelation, albeit a late one at age forty-four. However, I am sure we all know some people don't make it. But to feel self-worth and be listened to?
This is what the young people in my Ph.D. study, and young people everywhere, deserve to feel. We owe it to them as mentors, parents, and teachers.
So, I am no longer a smoker. I do still vape, though. This essay has been important to me as a public statement because I rightly and justly reclaimed my worth.
These were the words I needed to say, which came from me and no one else, in order to move forward with my autobiographical writing reflecting on being a young person, so I can be of service to my students and go on to co-contribute to producing global knowledge from local settings.
These challengingly spoken words of intimacy and trauma had existed, kicking and screaming, in subliminality, right up into and strongly influencing my adult life.
This writing, my decisions, and this speech are a release, a healing, a process, a validation; also a manifesto of sorts for the role I will play in listening to and validating young people's concerns in terms of my Ph.D. topic.
If I could right now, I'd take a drag on my vape, and I'm on my way.
Thank you for reading.
ON CRITICAL AUTOETHNOGRAPHY:
To quote an early text from C. Wright Mills (1959), from Jones's Handbook of Autoethnography, before the term autoethnography existed:
"The sociological imagination enables us to grasp history and biography and the relations between the two in society. The challenge is to develop a methodology that allows us to examine how the private troubles of individuals are connected to public issues and to public responses to these troubles. That is its task and its promise. Individuals can understand their own experience and gauge their own fate only by locating themselves within their historical period" (pp. 5-6, slight paraphrase).
Jones, Stacy H. Handbook of Autoethnography. Routledge, 2016. VitalBook file.
Furthermore, Carolyn Ellis (2004) defines autoethnography as "research, writing, story, and method that connect the autobiographical and personal to the cultural, social, and political" (p. xix).
Please share this article if it resonated with you. Thank you.
See more about Rich McLean at his websitewww.richmclean.com.au
This article originally appeared on LinkedIn
Photo credit: Getty Images
Originally posted here:
Giving Up the Fags: A Self-Reflexive Speech on Critical Auto-ethnography About the Shame of Growing up Gay/Sexual … – The Good Men Project (blog)
Posted in Superintelligence
Comments Off on Giving Up the Fags: A Self-Reflexive Speech on Critical Auto-ethnography About the Shame of Growing up Gay/Sexual … – The Good Men Project (blog)
AI researcher: Why should a superintelligence keep us around? – TNW
Posted: July 19, 2017 at 4:24 am
As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, Matrix-like, as some sort of human battery.
And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become "the destroyer of worlds," as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?
I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: As an AI expert, what do I fear about artificial intelligence?
The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in 2001: A Space Odyssey, is a good example of a system that fails because of unintended consequences. In many complex systems (the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant) engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.
That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster (sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia) a set of relatively small failures combined together to create a catastrophe.
I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.
Systems like IBM's Watson and Google's Alpha equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on Jeopardy! or don't defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.
But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that to err is human, so it is likely impossible for us to create a truly safe system.
I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.
Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we'll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.
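For readers curious what such a loop looks like in practice, here is a stripped-down sketch: a population of tiny network "brains" is scored in an environment, the best fifth reproduces with mutation, and competence accumulates over generations. The toy task, network sizes and constants are stand-ins; Hintze's actual environments and creatures are far richer.

```python
# A stripped-down neuroevolution loop (the toy task and constants are
# stand-ins for the author's far richer virtual environments).

import numpy as np

rng = np.random.default_rng(0)
IN, HID, OUT, POP = 4, 8, 2, 50

def init_brain():
    # One genome = flattened weights of a one-hidden-layer network.
    return rng.normal(0.0, 1.0, IN * HID + HID * OUT)

def act(genome, x):
    w1 = genome[:IN * HID].reshape(IN, HID)
    w2 = genome[IN * HID:].reshape(HID, OUT)
    return np.tanh(np.tanh(x @ w1) @ w2)

def fitness(genome, xs, targets):
    # Higher is better: negative mean squared error on the toy task.
    return -np.mean((act(genome, xs) - targets) ** 2)

# Toy "environment": match a fixed random input-output mapping.
xs = rng.normal(0.0, 1.0, (32, IN))
targets = np.tanh(xs @ rng.normal(0.0, 1.0, (IN, OUT)))

population = [init_brain() for _ in range(POP)]
for generation in range(200):
    ranked = sorted(population, key=lambda g: fitness(g, xs, targets),
                    reverse=True)
    parents = ranked[:POP // 5]                      # best fifth reproduce
    population = [p + rng.normal(0.0, 0.1, p.shape)  # mutated offspring
                  for p in parents for _ in range(POP // len(parents))]

print("best fitness in last scored generation:",
      fitness(ranked[0], xs, targets))
```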
Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution and a factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy, as in the sketch below. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.
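Continuing the sketch above, that selection pressure could be applied by folding a prosocial measurement into the score that decides who reproduces. The `shared_food` signal and its weight are invented for illustration; the article does not specify how kindness would be measured.

```python
# Fitness shaping for "evolving ethics" (illustrative assumptions:
# the prosocial signal and its weight are invented). Substituting this
# for the raw task score in the loop above makes evolution favour
# creatures that, for example, share food.

ALTRUISM_WEIGHT = 0.5   # assumed trade-off between task skill and kindness

def shaped_fitness(task_score: float, shared_food: float) -> float:
    """Reward task performance and demonstrated altruism together."""
    return task_score + ALTRUISM_WEIGHT * shared_food
```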
While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.
Being a scientist doesnt absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.
As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don't yet know what it's capable of. But we do need to decide what the desired outcome of advanced AI is.
One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.
Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected and get surgery performed by a tireless robot with a perfectly steady hand. Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.
Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.
In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political self, together with the rest of humanity, may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.
There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?
If this guy comes for you, how will you convince him to let you live?
The key question in this scenario is: Why should a superintelligence keep us around?
I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.
But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.
Fortunately, we need not justify our existence quite yet. We have some time: somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: saying we embrace diversity and actually doing it are two different things, as are saying we want to save the planet and successfully doing so.
We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don't find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only the very few who possess all the means of production.
This article was originally published on The Conversation. Read the original article.
Science and Technology News on The Conversation
Read more:
AI researcher: Why should a superintelligence keep us around? - TNW
Posted in Superintelligence
Comments Off on AI researcher: Why should a superintelligence keep us around? – TNW
What an artificial intelligence researcher fears about AI – Huron Daily … – Huron Daily Tribune
Posted: July 17, 2017 at 4:22 am
(The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.)
Arend Hintze, Michigan State University
(THE CONVERSATION) As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. Its perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, Matrix-like, as some sort of human battery.
And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become "the destroyer of worlds," as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?
I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: As an AI expert, what do I fear about artificial intelligence?
The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in 2001: A Space Odyssey, is a good example of a system that fails because of unintended consequences. In many complex systems (the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant) engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.
That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster (sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia) a set of relatively small failures combined to create a catastrophe.
I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.
Systems like IBM's Watson and Google's AlphaGo equip artificial neural networks with enormous computing power, and accomplish impressive feats. But when these machines make mistakes, they lose on Jeopardy! or fail to defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.
But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that to err is human, so it is likely impossible for us to create a truly safe system.
I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
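To make that evaluate-select-reproduce loop concrete, here is a minimal sketch in Python. It is an illustration only, not the author's actual system: the population size, the placeholder objective and names like evaluate_fitness and mutate are hypothetical stand-ins.

import random

POP_SIZE = 50          # creatures per generation
N_WEIGHTS = 16         # weights of a tiny fixed-topology "brain"
MUTATION_STD = 0.1     # size of random weight changes
GENERATIONS = 100

def random_genome():
    return [random.gauss(0.0, 1.0) for _ in range(N_WEIGHTS)]

def evaluate_fitness(genome):
    # Hypothetical task score: how well a brain with these weights
    # steers its creature through the virtual environment.
    return -sum(w * w for w in genome)   # placeholder objective

def mutate(genome):
    return [w + random.gauss(0.0, MUTATION_STD) for w in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # Evaluate every creature; the best performers become parents.
    ranked = sorted(population, key=evaluate_fitness, reverse=True)
    parents = ranked[: POP_SIZE // 5]
    # The next generation consists of mutated copies of the parents.
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]

Real neuroevolution systems use far richer networks and tasks, but the selection pressure works the same way.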
Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.
Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we'll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.
Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution and factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.
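Sketched under the same assumptions as the snippet above, that proposal amounts to adding a prosocial term to the fitness function, so that creatures which cooperate outscore otherwise equal ones. The bonus weight and the cooperation_score helper here are hypothetical.

ETHICS_BONUS = 0.5   # hypothetical weight given to prosocial behavior

def cooperation_score(genome):
    # Hypothetical measure recorded during the episode, e.g. how much
    # food this creature shared with others in the simulation.
    return 0.0   # placeholder

def shaped_fitness(genome):
    # Task performance plus an explicit evolutionary advantage
    # for kindness, honesty or empathy.
    return evaluate_fitness(genome) + ETHICS_BONUS * cooperation_score(genome)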
While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.
Being a scientist doesnt absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.
As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don't yet know what it's capable of. But we do need to decide what the desired outcome of advanced AI is.
One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.
Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected, and get surgery performed by a tireless robot with a perfectly steady hand. Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.
Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster by a large number of machines tirelessly researching how to make even smarter machines.
In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political self together with the rest of humanity may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.
There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligent system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?
The key question in this scenario is: Why should a superintelligence keep us around?
I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.
But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.
Fortunately, we need not justify our existence quite yet. We have some time, somewhere between 50 and 250 years depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things, as are saying we want to save the planet and successfully doing so.
We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don't find a way to distribute our wealth better, we will have fueled capitalism with artificial-intelligence laborers serving only the very few who possess all the means of production.
This article was originally published on The Conversation. Read the original article here: http://theconversation.com/what-an-artificial-intelligence-researcher-fears-about-ai-78655.
Excerpt from:
What an artificial intelligence researcher fears about AI - Huron Daily ... - Huron Daily Tribune
Posted in Superintelligence
Comments Off on What an artificial intelligence researcher fears about AI – Huron Daily … – Huron Daily Tribune
To prevent artificial intelligence from going rogue, here is what Google is doing – Financial Express
Posted: July 11, 2017 at 10:23 pm
Against the backdrop of warnings about machine superintelligence going rogue, Google is charting a two-way course to prevent this. The company's DeepMind division, in collaboration with OpenAI, a research firm, has brought out a paper that proposes human-mediated machine learning to avoid unpredictable AI behaviour when a system learns on its own. OpenAI and DeepMind looked at the problem posed by AI software that is guided by reinforcement learning and often doesn't do what is desired or desirable. The reinforcement method involves the AI entity figuring out a task by performing a range of actions and sticking with those that maximise a virtual reward given by another piece of software, a mathematical motivator based on an algorithm or a set of algorithms. But designing a mathematical motivator that precludes every undesirable action is quite a task: when DeepMind pitted two AI entities against each other in a fruit-picking game that allowed them to stun the opponent to pick more fruit for rewards, the entities got increasingly aggressive.
Similarly, OpenAI's reinforcement-learning agent started going around in circles in a digital boat-racing game to maximise points rather than complete the course. DeepMind and OpenAI propose to temper machine learning in the development of AI with human mediation: trainers give feedback that is built into the motivator software, in a bid to prevent the AI agent from performing an action that is possible but isn't desirable.
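The DeepMind-OpenAI paper described here ("Deep Reinforcement Learning from Human Preferences," Christiano et al., 2017) has the trainer compare pairs of short behaviour clips rather than score actions directly; a reward predictor is fitted to those comparisons, and the agent then maximises the predicted reward. A minimal sketch of that loop follows, with every function a hypothetical stand-in rather than the paper's actual code.

import random

def run_episode(policy):
    # Roll out the current policy and return a short clip of behaviour
    # (here just a list of hypothetical observation/action pairs).
    return [("obs", policy("obs")) for _ in range(10)]

def ask_human_preference(clip_a, clip_b):
    # Stand-in for the human trainer, who says which of two clips
    # looks more like the intended behaviour. Randomised here.
    return random.choice([0, 1])

def fit_reward_model(reward_data, comparison):
    # The paper fits a reward predictor so that preferred clips score
    # higher (a Bradley-Terry model trained by gradient descent);
    # this placeholder simply accumulates the labelled comparisons.
    reward_data.append(comparison)
    return reward_data

def improve_policy(policy, reward_data):
    # Stand-in for a reinforcement-learning update that maximises the
    # learned reward instead of a hand-written score function.
    return policy

policy = lambda observation: "noop"
reward_data = []
for _ in range(100):
    clip_a, clip_b = run_episode(policy), run_episode(policy)
    label = ask_human_preference(clip_a, clip_b)
    reward_data = fit_reward_model(reward_data, (clip_a, clip_b, label))
    policy = improve_policy(policy, reward_data)

The design point is that the motivator is learned from human judgments as training proceeds, rather than fixed in advance where it can be gamed.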
At the same time, Google has been working on its PAIR (People plus AI Research) project, which focuses on AI for human use rather than development of AI for AI's sake. This, however, should present a dilemma: developing AI for greater and deeper use by humans would mean, at some level, letting AI get smarter as well as more intuitive, simulating human intelligence minus its fallibilities. But preventing it from going rogue, as the DeepMind-OpenAI paper shows, would mean reining in AI, at least in the short run, from exploring the full spectrum of intelligent and autonomous functioning.
Read the original here:
Posted in Superintelligence
Comments Off on To prevent artificial intelligence from going rogue, here is what Google is doing – Financial Express