The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard de Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Artificial Intelligence
Artificial intelligence is going to change every aspect of …
Posted: July 15, 2017 at 11:13 pm
AP Photo/Laurent Cipriani
You've probably heard of artificial intelligence by now. It's the technology powering Siri, driverless cars, and that creepy Facebook feature that automatically tags your friends when you upload photos.
But what is AI, why are people talking about it now, and what will it mean for your everyday life? SunTrust recently released a report that breaks down how AI works, so let's tackle those questions one at a time.
What is AI?
AI is what people call computer programs that try to replicate how the human brain operates. For now, they can only replicate very specific tasks. One system can beat humans at the complicated and ancient board game called Go, for example. Lots of these AI systems are being developed, each really good at a specific task.
These AI systems all operate in basically the same way. Imagine a system that tries to identify whether a photo has a cat in it. For a human this is fairly easy, but a computer has a hard time figuring it out. AI systems are unique because they are set up like human brains. You feed a cat photo in one end, and it bounces around a lot of different checkpoints until it comes out the other end with a yes or no answer, just like your eyes passing your view of a cat through all the neurons in your brain. AI is even talked about in terms of neurons and synapses, just like the human brain.
AI systems have to be trained, which is a process of adjusting these checkpoints to achieve better results. If one checkpoint determines whether there is hair in the photo, training the system amounts to deciding how much the presence of hair should count toward deciding whether the photo contains a cat.
This training process takes a huge amount of computing power to fine-tune. The better a system is trained, the better its results, and the better your cat photo system will be able to determine whether there is a cat in a photo you show it. The huge amount of processing power required to run and train AI systems is what kept AI research relatively quiet until recently, which leads to the next question.
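To make the "checkpoints" idea concrete, here is a minimal sketch of a single-layer classifier being trained. The three hand-picked features are hypothetical stand-ins; a real system learns millions of such weights directly from pixels.

```python
# A minimal sketch of the "checkpoint" idea, assuming three hypothetical
# hand-labeled features; real systems learn millions of weights from raw pixels.
import numpy as np

def predict(features, weights, bias):
    # One layer of "checkpoints": weigh each feature, squash to a 0-1 score.
    score = features @ weights + bias
    return 1 / (1 + np.exp(-score))  # probability the photo contains a cat

# Each row is a photo: [has_hair, has_whiskers, has_pointy_ears]; label 1 = cat.
photos = np.array([[1, 1, 1], [1, 0, 0], [0, 1, 1], [0, 0, 0]], dtype=float)
labels = np.array([1, 0, 1, 0], dtype=float)

weights, bias, lr = np.zeros(3), 0.0, 0.5
for _ in range(1000):                    # training = repeatedly nudging weights
    p = predict(photos, weights, bias)               # current guesses
    grad = p - labels                                # how wrong each guess is
    weights -= lr * photos.T @ grad / len(labels)    # adjust each checkpoint
    bias -= lr * grad.mean()

# How confident is the trained system that a hairy, whiskered photo is a cat?
print(predict(np.array([1.0, 1.0, 0.0]), weights, bias))
```

Training here is just the repeated nudging of three numbers; the enormous processing power the report describes comes from doing the same thing to millions of weights across many layers.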
DeepMind
Why are people talking about AI all the time now?
There is a famous AI contest where researchers pit computers against humans in a challenge to correctly identify photos. Humans usually are able to identify photos with about 95% accuracy in this contest, and in 2012, computers were able to identify about 74% of photos correctly, according to SunTrust's report. In 2015, computers reached 96% accuracy, officially beating humans for the first time. This was called the "big bang" of AI, according to SunTrust.
The big bang of AI was made possible by new algorithms, three in particular. These algorithms were better ways of training AI systems, making them faster and cheaper to run.
AI systems require lots of real-world examples to be trained well, like lots of cat photos for example. These cat photos also had to be labeled as cat photos so the system knew when it got the right answer from its algorithms and checkpoints. The new algorithms that led to the big bang allowed AI systems to be trained with fewer examples that didn't have to be labeled as well as before. Collecting enough examples to train an AI system used to be really expensive, but was much cheaper after the big bang. Advances in processing power and cheap storage also helped move things along.
Since the big bang, there have been a number of huge strides in AI technology. Tesla, Google, Apple and many of the traditional car companies are training AI systems for autonomous driving. Google, Apple and Amazon are pioneering the first smart personal assistants. Some companies are even working on AI-driven healthcare solutions that could personalize treatment plans based on a patient's history, according to SunTrust.
What will AI mean for your life?
AI technology could be as simple as making your email smarter, but it could also extend your lifespan, take away your job, or end human soldiers fighting the world's wars.
SunTrust says AI has the capability to change nearly every industry. The moves we are seeing now are just the beginning, the low-hanging fruit. Cities can become smarter, the TSA might scan your face as you pass through security, and doctors could give most of their consultations via your phone thanks to advances in AI.
NVIDIA
SunTrust estimates the AI business will be worth about $47.25 billion by 2020. Nvidia, a large player in the AI space thanks to its GPU hardware and CUDA software platform, is a bit more conservative. It sees AI as only a $30 billion business, though that is still four times the current size of Nvidia.
There is no doubt AI is a huge opportunity, and there are a few companies you should watch if you're an investor looking to enter the AI space, according to SunTrust.
One thing is for sure: AI is exciting, sometimes scary, but ultimately here to stay. We are just starting to see the implications of the technology, and the world is likely to change, for good and bad, because of artificial intelligence.
See the original post:
Artificial intelligence is going to change every aspect of ...
Posted in Artificial Intelligence
Comments Off on Artificial intelligence is going to change every aspect of …
4 fears an AI developer has about artificial intelligence – MarketWatch
Posted: at 11:13 pm
As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, Matrix-like, as some sort of human battery.
And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become "the destroyer of worlds," as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?
I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: As an AI expert, what do I fear about artificial intelligence?
The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in 2001: A Space Odyssey, is a good example of a system that fails because of unintended consequences. In many complex systems (the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant), engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.
That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster (sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia), a set of relatively small failures combined to create a catastrophe.
I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.
Systems like IBM's Watson and Google's Alpha equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on Jeopardy! or don't defeat a Go master. These aren't world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.
But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that to err is human, so it is likely impossible for us to create a truly safe system.
I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
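As a rough illustration of the select-reproduce-mutate cycle described here (not the author's actual code), consider a toy setup in which a two-number "genome" must evolve toward a target; the task, population size and mutation rate are all invented for the example.

```python
# Toy neuroevolution loop: evaluate, select the best, reproduce with mutation.
# The "task" (match a target vector) is a hypothetical stand-in for evolving
# creature brains in a virtual environment.
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([0.7, -0.3])

def fitness(genome):
    return -np.sum((genome - TARGET) ** 2)  # higher is better

population = [rng.normal(size=2) for _ in range(50)]
for _ in range(100):  # each pass is one generation
    best = sorted(population, key=fitness, reverse=True)[:10]  # selection
    population = [parent + rng.normal(scale=0.05, size=2)      # mutation
                  for parent in best for _ in range(5)]        # reproduction

champion = max(population, key=fitness)
print("best genome:", champion, "fitness:", fitness(champion))
```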
Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.
Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we'll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.
Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution and a factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.
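Continuing the toy loop above, "evolving ethics" would amount to fitness shaping: fold a prosocial measurement into the score that decides who reproduces. The `food_shared` function below is an invented placeholder for something measured in simulation.

```python
# Hedged sketch: reward prosocial behavior so that "kind" creatures
# out-reproduce equally capable but selfish ones.
def food_shared(genome):
    # Invented placeholder; a real simulation would measure how often the
    # creature shares resources with others in its environment.
    return float(genome[0] > 0)

def shaped_fitness(genome, kindness_weight=0.5):
    # Task performance (fitness, from the sketch above) plus a kindness bonus.
    return fitness(genome) + kindness_weight * food_shared(genome)
```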
While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus isn't on determining whether I like or approve of something; it matters only that I can unveil it.
Being a scientist doesn't absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.
As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don't yet know what it's capable of. But we do need to decide what the desired outcome of advanced AI is.
One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.
Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected and get surgery performed by a tireless robot with a perfectly steady hand. Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.
Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.
In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research won't change that, though my political self, together with the rest of humanity, may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the 1% and the rest of us.
There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?
The key question in this scenario is: Why should a superintelligence keep us around?
I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.
But I don't speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.
Fortunately, we need not justify our existence quite yet. We have some time, somewhere between 50 and 250 years depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things, as are saying we want to save the planet and successfully doing so.
We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don't find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only the very few who possess all the means of production.
Arend Hintze is an assistant professor of integrative biology & computer science and engineering at Michigan State University. This first appeared on The Conversation as "What an artificial intelligence researcher fears about AI."
View post:
4 fears an AI developer has about artificial intelligence - MarketWatch
Posted in Artificial Intelligence
Comments Off on 4 fears an AI developer has about artificial intelligence – MarketWatch
What an Artificial Intelligence Researcher Fears about AI – Scientific … – Scientific American
Posted: at 11:13 pm
This article was originally published on The Conversation. Read the original article.
Read more:
What an Artificial Intelligence Researcher Fears about AI - Scientific ... - Scientific American
Posted in Artificial Intelligence
Comments Off on What an Artificial Intelligence Researcher Fears about AI – Scientific … – Scientific American
Artificial intelligence can make America’s public sector great again – Recode
Posted: at 11:13 pm
Senator Maria Cantwell, D-Wash., just drafted forward-looking legislation that aims to establish a select committee of experts to advise agencies across the government on the economic impact of federal artificial intelligence.
The move is an early step toward formalizing the exploration of AI in a government context. But it could ultimately contribute to jump-starting AI-focused programs that help stimulate the United States economy, benefit citizens, uphold data security and privacy, and eventually ensure America is successful during the initial introduction of this important technology to U.S. consumers.
The presence of legislation could also lend legitimacy to the prospect of near-term government investment in AI innovation, something that may even sway Treasury Secretary Steve Mnuchin and others away from their belief that the impact of AI won't be felt for years to come.
Indeed, other than a few economic impact and policy reports conducted by the Obama Administration (led by former U.S. Chief Data Scientist DJ Patil and other tech-minded government leaders), this is the first policy effort toward moving the U.S. public sector past merely acknowledging AI's significance and toward fully embracing the technology.
It's a tall order, one that requires Sen. Cantwell and her colleagues in the Senate to define AI for the federal government and focus on policies that govern very diverse applications of the technology.
Artificial intelligence is an emerging technology, and the term means different things to different people. That's why I believe it's essential for the U.S. government to take the first step in defining what AI means in legislation.
AI meant for U.S. government use should be defined as a network of complementary technologies built with the ability to autonomously conduct, support or manage public sector activity across disciplines. All AI-driven government technology should secure and advance the country's interests. AI should not be formalized as a replacement or stopgap for standard government operations or personnel.
This is important because a central task of the committee will be to look at whether AI has displaced more jobs than it has created; with this definition, it will be able to make an accurate assessment.
Should the select committee succeed in establishing a federal policy, this will provide a useful benchmark to the private sector on the way AI should be built and deployed, hopefully adopting ethical standards from the start. This should include everything from the diversity of the people building the AI to the data it learns from. To add value from the beginning, the technology and the people engaging with it need to be held accountable for the outcomes of their work. This will take collaboration and employee-citizen engagement.
Public-sector AI use offers an opportunity for agencies to better serve America's diverse citizen population. AI could open up opportunities for citizens to work and engage with government processes and policies in a way that has never been possible before. New AI tools that include voice-activated processes could make areas of government accessible to people with learning, hearing and sight impairments who wouldn't have had the opportunity in the past.
The myriad applications of AI-driven technology offer completely different benefits to departments throughout the government, from Homeland Security to the Office of Personnel Management to the Department of Transportation.
Once the government has a handle on AI and legislation is in place, it could eventually offer government agencies opportunities well beyond those in technology: filling talent and personnel gaps with systems that can perform and automate specific tasks, revamping citizen engagement through new communication portals, and synthesizing vital health, economic and public data securely. So, while the introduction of AI will inevitably lead to a situation where some jobs are replaced by technology, it will also foster a new sector and create jobs in its wake.
For now, businesses, entrepreneurs and developers around the world will continue to pioneer new AI-driven platforms, technologies and tools for use both in the home and the office, from live chat support software to voice-driven technology powering self-driving cars. The private sector is firmly driving the AI revolution, with Amazon, Apple, Facebook, IBM, Microsoft and other American companies leading the way. However, it is clear that there is room for the public sector to complement this innovation and for the government to provide the guide rails.
Personally, I've spent my career developing AI and bot technology. My first bot brought me candy from a tech-company cafe. My last will hopefully help save the world to some extent. I think Sen. Cantwell's initiative will set America's public sector on a similarly ambitious path to bring AI that helps people into the fold and elevate the U.S. as an important contributor to the technology's global development.
Kriti Sharma is the vice president of bots and AI at Sage Group, a global integrated accounting, payroll and payment systems provider. She is the creator of Pegg, the world's first accounting chatbot, with users in 135 countries. Sharma is a Fellow of the Royal Society of Arts, a Google Grace Hopper Scholar and a Government of India Young Leader in Science. She was recently named to Forbes' 30 Under 30 list. Reach her @sharma_kriti.
Continued here:
Artificial intelligence can make America's public sector great again - Recode
Posted in Artificial Intelligence
Comments Off on Artificial intelligence can make America’s public sector great again – Recode
The big problem with artificial intelligence – Axios
Posted: at 11:13 pm
In a discussion with Nevada Gov. Brian Sandoval, Musk also touched on several other topics:
On energy:
Musk noted that it would take only about 100 square miles of solar panels to power the entire United States, and the batteries needed to store the energy would take up only about a square mile. That said, he imagines the energy mix shifting to a large dose of rooftop solar, some power plant solar, along with wind, hydro and nuclear power.
"It's inevitable," Musk said, speaking of shifting to sustainable energy. "But it matters if it happens sooner or later."
As for those pushing some other type of fusion, Musk notes that the sun is a giant fusion reactor in the sky. "It's really reliable," he said. "It comes up every day. If it doesn't, we've got [other] problems."
On artificial intelligence:
Musk said it represents a real existential threat to humanity and a rare example of where regulation needs to be proactive; if regulation is reactive, it could come too late.
"In my opinion it is the biggest risk that we face as a civilization," he said.
No matter what, he said, "there will certainly be a lot of job disruption."
"Robots will be able to do everything better than us, I mean all of us. I'm not sure exactly what to do about this. This is really like the scariest problem."
On regulation:
"It sure is important to get the rules right," Musk said. "Regulations are immortal. They never die unless somebody actually goes and kills them. A lot of times regulations can be put in place for all the right reasons but nobody goes back and kills them because they no longer make sense."
Musk also focused on the importance of incentives, saying whatever societies incentivize tends to be what happens. "It's economics 101," he said.
On what drives him:
On Tesla's stock price:
Musk said he has been on record several times as saying its stock price "is higher than we have any right to deserve," especially based on current and past performance. "The stock price obviously reflects a lot of optimism on where we will be in the future," he said. "Those expectations sometimes get out of control. I hate disappointing people; I am trying really hard to meet those expectations."
Musk also talked about Trump when answering a question from Axios at the event. More on that here.
Read this article:
Posted in Artificial Intelligence
Comments Off on The big problem with artificial intelligence – Axios
Artificial Intelligence ushers in the era of superhuman doctors – New Scientist
Posted: at 11:13 pm
By Kayt Sukel
THE doctor's eyes flit from your face to her notes. "How long would you say that's been going on?" You think back: a few weeks, maybe longer? She marks it down. "Is it worse at certain times of day?" Tough to say, it comes and goes. She asks more questions before prodding you, listening to your heart, shining a light in your eyes. Minutes later, you have a diagnosis and a prescription. Only later do you remember that fall you had last month. Should you have mentioned it? Oops.
One in 10 medical diagnoses is wrong, according to the US Institute of Medicine. In primary care, one in 20 patients will get a wrong diagnosis. Such errors contribute to as many as 80,000 unnecessary deaths each year in the US alone.
These are worrying figures, driven by the complex nature of diagnosis, which can encompass incomplete information from patients, missed hand-offs between care providers, biases that cloud doctors' judgement, overworked staff, overbooked systems, and more. The process is riddled with opportunities for human error. This is why many want to use the constant and unflappable power of artificial intelligence to achieve more accurate diagnosis, prompt care and greater efficiency.
AI-driven diagnostic apps are already available. And it's not just Silicon Valley types swapping clinic visits for diagnosis via smartphone. The UK National Health Service (NHS) is trialling an AI-assisted app to see if it performs better than the existing telephone triage line. In the US and
Link:
Artificial Intelligence ushers in the era of superhuman doctors - New Scientist
Posted in Artificial Intelligence
Comments Off on Artificial Intelligence ushers in the era of superhuman doctors – New Scientist
What an artificial intelligence researcher fears about AI – San Francisco Chronicle
Posted: July 14, 2017 at 5:13 am
(The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.)
Arend Hintze, Michigan State University
This article was originally published on The Conversation. Read the original article here: http://theconversation.com/what-an-artificial-intelligence-researcher-fears-about-ai-78655.
Go here to read the rest:
What an artificial intelligence researcher fears about AI - San Francisco Chronicle
Posted in Artificial Intelligence
Comments Off on What an artificial intelligence researcher fears about AI – San Francisco Chronicle
India’s Infosys eyes artificial intelligence profits – Phys.Org
Posted: at 5:13 am
July 14, 2017
Indian IT giant Infosys said Friday that artificial intelligence was key to future profits as it bids to satisfy clients' demands for innovative new technologies.
India's multi-billion-dollar IT outsourcing sector has long been one of the country's flagship industries. But as robots and automation grow in popularity its companies are under pressure to reinvent themselves.
"We are revealing new growth with services that we (have been) focusing on for the past couple of years includingAI (artificial intelligence) and cloud computing," said Infosys chief executive Vishal Sikka, announcing a small rise in quarterly profits.
"Going forward, we will count on strong growth coming from these services," added Sikka, who signalled his intent by arriving at the press conference in a driverless golf cart.
Infosys reported an increase of 1.4 percent in consolidated net profit year-on-year for the first quarter, marginally beating analysts' expectations.
Net profit in the three months to June 30 came in at 34.83 billion rupees ($540 million), marginally above the 34.36 billion rupees it reported in the same period last year, Infosys said.
India's $150-billion IT sector is facing upheaval from automation and US President Donald Trump's clampdown on visas, with reports of mass redundancies.
Industry body Nasscom recently called on companies to teach employees new skills after claims they had failed to keep up with new technologies.
In April Infosys launched a platform called Nia to "help clients embrace AI".
"Nia continues to be central to all our conversations with clients as we work with them to transform their businesses," the company said in its earnings statement Friday.
Analysts surveyed by Bloomberg had expected profits of 34.3 billion rupees.
Infosys announced revenues of 170.78 billion rupees, marginally up from the 167.8 billion rupees reported for the same period last year.
Its shares rose nearly 3 percent in early trade after the company forecast revenue growth of between 6.5 and 8.5 percent for the current financial year.
© 2017 AFP
Read the rest here:
India's Infosys eyes artificial intelligence profits - Phys.Org
Posted in Artificial Intelligence
Comments Off on India’s Infosys eyes artificial intelligence profits – Phys.Org
Artificial intelligence helps scientists map behavior in the fruit fly … – Science Magazine
Posted: at 5:13 am
Examples of eight fruit fly brains with regions highlighted that are significantly correlated with (clockwise from top left) walking, stopping, increased jumping, increased female chasing, increased wing angle, increased wing grooming, increased wing extension, and backing up.
Kristin Branson
By Ryan Cross, Jul. 13, 2017, 1:00 PM
Can you imagine watching 20,000 videos, 16 minutes apiece, of fruit flies walking, grooming, and chasing mates? Fortunately, you don't have to, because scientists have designed a computer program that can do it faster. Aided by artificial intelligence, researchers have made 100 billion annotations of behavior from 400,000 flies to create a collection of maps linking fly mannerisms to their corresponding brain regions.
Experts say the work is a significant step toward understanding how both simple and complex behaviors can be tied to specific circuits in the brain. "The scale of the study is unprecedented," says Thomas Serre, a computer vision expert and computational neuroscientist at Brown University. "This is going to be a huge and valuable tool for the community," adds Bing Zhang, a fly neurobiologist at the University of Missouri in Columbia. "I am sure that follow-up studies will show this is a gold mine."
At a mere 100,000 neurons (compared with our 86 billion), the small size of the fly brain makes it a good place to pick apart the inner workings of neurobiology. Yet scientists are still far from being able to understand a fly's every move.
To conduct the new research, computer scientist Kristin Branson of the Howard Hughes Medical Institute in Ashburn, Virginia, and colleagues acquired 2204 different genetically modified fruit fly strains (Drosophila melanogaster). Each enables the researchers to control different, but sometimes overlapping, subsets of the brain by simply raising the temperature to activate the neurons.
Then it was off to the Fly Bowl, a shallowly sloped, enclosed arena with a camera positioned directly overhead. The team placed groups of 10 male and 10 female flies inside at a time and captured 30,000 frames of video per 16-minute session. A computer program then tracked the coordinates and wing movements of each fly in the dish. The team did this about eight times for each of the strains, recording more than 20,000 videos. "That would be 225 straight days of flies walking around the dish if you watched them all," Branson says.
Next, the team picked 14 easily recognizable behaviors to study, such as walking backward, touching, or attempting to mate with other flies. This required a researcher to manually label about 9000 frames of footage for each action, which was used to train a machine-learning computer program to recognize and label these behaviors on its own. The scientists then derived 203 statistics describing the behaviors in the collected data, such as how often the flies walked and their average speed. Thanks to computer vision, they detected differences between the strains too subtle for the human eye to accurately describe, such as when the flies increased their walking pace by a mere 5% or less.
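In outline, that label-train-measure pipeline is ordinary supervised learning. A minimal sketch follows, with random stand-ins for the hand-labeled frames and hypothetical per-frame features (speed, wing angle, turn rate); the study used its own tracker and classifiers, so this only illustrates the shape of the approach.

```python
# Sketch of the label-train-measure pipeline: fit a classifier on labeled
# frames, score a new video, then derive behavior statistics. All data here
# are random stand-ins for the study's tracked fly trajectories.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Stand-in for ~9000 manually labeled frames: [speed, wing_angle, turn_rate].
labeled_frames = rng.normal(size=(9000, 3))
is_walking = (labeled_frames[:, 0] > 0).astype(int)  # hypothetical labels

clf = RandomForestClassifier(n_estimators=100).fit(labeled_frames, is_walking)

# Score a new 16-minute video (~30,000 frames) and compute two statistics.
new_video = rng.normal(size=(30000, 3))
walking = clf.predict(new_video).astype(bool)
print("fraction of frames spent walking:", walking.mean())
print("mean speed while walking:", new_video[walking, 0].mean())
```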
"When we started this study, we had no idea how often we would see behavioral differences between the different fly strains," Branson says. Yet it turns out that almost every strain (98% in all) had a significant difference in at least one of the behavior statistics measured. And there were plenty of oddballs: Some superjumpy flies hopped 100 times more often than normal; some males chased other flies 20 times more often than others; and some flies practically never stopped moving, whereas a few couch potatoes barely budged.
Then came the mapping. The scientists divided the fly brain into a novel set of 7065 tiny regions and linked them to the behaviors they had observed. The end product, called the Browsable Atlas of Behavior-Anatomy Maps, shows that some common behaviors, such as walking, are broadly correlated with neural circuits all over the brain, the team reports today in Cell. On the other hand, behaviors that are observed much less frequently, such as female flies chasing males, can be pinpointed to tiny regions of the brain, although this study didn't prove that any of these regions were absolutely necessary for those behaviors. "We also learned that you can upload an unlimited number of videos on YouTube," Branson says, noting that clips of all 20,000 videos are available online.
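A behavior-anatomy map of this kind can be pictured as one large correlation across strains: for each region, ask how well a strain's activation there tracks how much of a given behavior that strain shows. Below is a hedged sketch with random stand-in data; the matrix sizes mirror the study, but the paper's actual statistics are more involved.

```python
# Toy behavior-anatomy map: correlate each brain region's per-strain
# activation with one behavior statistic. Values are random stand-ins.
import numpy as np

rng = np.random.default_rng(2)
n_strains, n_regions = 2204, 7065

activation = rng.random((n_strains, n_regions))  # region coverage per strain
backing_up = rng.random(n_strains)               # behavior statistic per strain

# Pearson correlation of every region's activation with the behavior.
za = (activation - activation.mean(axis=0)) / activation.std(axis=0)
zb = (backing_up - backing_up.mean()) / backing_up.std()
region_scores = za.T @ zb / n_strains            # one correlation per region

print("top candidate regions for backing up:", np.argsort(region_scores)[-5:])
```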
Branson hopes the resource will serve as a launching pad for other neurobiologists seeking to manipulate part of the brain or study a specific behavior. For instance, not much is known about female aggression in fruit flies, and the new maps give leads for which brain regions might be driving these actions.
Because the genetically modified strains are specific to flies, Serre doesn't think the results will be immediately applicable to other species, such as mice, but he still views this as a watershed moment for getting researchers excited about using computer vision in neuroscience. "I am usually more tempered in my public comments, but here I was very impressed," he says.
See the rest here:
Artificial intelligence helps scientists map behavior in the fruit fly ... - Science Magazine
Posted in Artificial Intelligence
Comments Off on Artificial intelligence helps scientists map behavior in the fruit fly … – Science Magazine
Artificial Intelligence Will Help Hunt Daesh By December – Breaking Defense
Posted: at 5:13 am
Daesh fighters
THE NEWSEUM: Artificial intelligence is coming soon to a battlefield near you, with plenty of help from the private sector. Within six months the US military will start using commercial AI algorithms to sort through its masses of intelligence data on the Islamic State.
"We will put an algorithm into a combat zone before the end of this calendar year, and the only way to do that is with commercial partners," said Col. Drew Cukor.
Air Force intelligence analysts at work.
Millions of Humans?
How big a deal is this? Don't let the lack of general's stars on Col. Cukor's shoulders lead you to underestimate his importance. He heads the Algorithmic Warfare Cross Function Team, personally created by outgoing Deputy Defense Secretary Bob Work to apply AI to sorting the digital deluge of intelligence data.
This isn't a multi-year program to develop the perfect solution: "The state of the art is good enough for the government," he said at the Defense One technology conference here this morning. "Existing commercial technology can be integrated onto existing government systems."
"We're not talking about three million lines of code," Cukor said. "We're talking about 75 lines of code placed inside of a larger software (architecture) that already exists for intelligence-gathering."
For decades, the US military has invested in better sensors to gather more intelligence, better networks to transmit that data, and more humans to stare at the information until they find something. "Our work force is frankly overwhelmed" by the amount of data, Cukor said. The problem, he noted, is that "staring at things for long periods of time is clearly not what humans were designed for." U.S. analysts can't get to all the data we collect, and we can't calculate how much their bleary eyes miss of what they do look at.
We can't keep throwing people at the problem. At the National Geospatial-Intelligence Agency, for example, NGA mission integration director Scott Currie told the conference, "if we looked at the proliferation of the new satellites over time, and we continue to do business the way we do, we'd have to hire two million more imagery analysts."
Rather than hire the entire population of, say, Houston, Currie continued, "we need to move towards services and algorithms and machine learning, (but) we need industry's help to get there because we cannot possibly do it ourselves."
Private Sector Partners
Cukor's task force is now spearheading this effort across the Defense Department. "We're working with him and his team," said Dale Ormond, principal director for research in the Office of the Secretary of Defense. "We're bringing to bear the combined expertise of our laboratory system across the Department of Defense complex."
"We're holding a workshop in a couple of weeks to baseline where we are, both in industry and with our laboratories," Ormond told the conference. "Then we're going to have a closed-door session (to decide) what are the investments we need to make as a department, what is industry doing (already)."
Just as the Pentagon needs the private sector to lead the way, Cukor noted, many promising but struggling start-ups need government funding to succeed. While Tesla, Google, GM, and other investors in self-driving cars are lavishly funding work on artificial vision for collision avoidance, there's a much smaller commercial market for other technologies such as object recognition. All a Google Car needs to know about a vehicle or a building is how to avoid crashing into it. A military AI needs to know whether it's a civilian pickup or an ISIS technical with a machine gun in the truck bed, a hospital or a hideout.
An example of the shortcomings of artificial intelligence when it comes to image recognition. (Andrej Karpathy, Li Fei-Fei, Stanford University)
"These are not insurmountable problems," Cukor emphasized. The Algorithmic Warfare project is focused on defeating Daesh, he said, not on recognizing every weapon and vehicle in, say, the Russian order of battle. He believes there are only about 38 classes of objects the software will need to distinguish.
It's not easy to program an artificial intelligence to tell objects apart, however. There's no single Platonic ideal of a terrorist you can upload for the AI to compare real-life imagery against. Instead, modern machine learning techniques feed the AI lots of different real-world data (the more the better) until it learns by trial and error what features every object of a given type has in common. It's basically the way a toddler learns the difference between a car and a train (protip: count the wheels). This process goes much faster when humans have already labeled what data goes in what category.
"These algorithms need large data sets, and we're just starting labeling," Cukor said. "It's just a matter of how big our labeled data sets can get." Some of this labeling must be done by government personnel, Cukor said; he didn't say why, but presumably this includes the most highly classified material. But much of it is being outsourced to "a significant data-labeling company," which he didn't name.
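In code, the workflow Cukor describes is ordinary supervised learning, just at scale and on classified imagery. A minimal sketch, with random stand-ins for the labeled feature vectors and the 38 object classes taken from the article:

```python
# Hedged sketch of the labeled-data workflow: the features and labels are
# random stand-ins for analyst-labeled imagery, so accuracy here is chance
# level (~1/38); more and better labels are what push it up in practice.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 64))      # stand-in image feature vectors
y = rng.integers(0, 38, size=5000)   # one of 38 labeled object classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```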
This all adds up to a complex undertaking on a tight timeline, something the Pentagon historically does not do well. "I wish we could buy AI like we buy lettuce at Safeway, where we can walk in, swipe a credit card, and walk out," Cukor said. There are no shortcuts.
Go here to read the rest:
Artificial Intelligence Will Help Hunt Daesh By December - Breaking Defense
Posted in Artificial Intelligence
Comments Off on Artificial Intelligence Will Help Hunt Daesh By December – Breaking Defense