The Prometheus League
Breaking News and Updates
Category Archives: Ai
The AI to help us drive is already possible, so where is it? – VentureBeat
Posted: February 18, 2017 at 4:17 am
You climb into a Honda Civic and a motion detector sees that you are not wearing a seatbelt. Seems like a simple problem for AI to resolve. The car might know, based on previous driving patterns, that you always drive for a bit before clicking, but maybe you could set an option that the car won't even start unless you (or one of your teen drivers) are all fastened up. Ford does have a system that works a bit like this and is related to seatbelts, but I'm talking about an AI that adapts to how you drive, knows what you normally do, and acts on your behalf.
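As a rough illustration of what "learning your habits and acting on your behalf" could look like, here is a minimal sketch of a per-driver habit model. Everything in it (the SeatbeltHabitModel class, the thresholds) is invented for illustration, not Ford's or Honda's actual logic:

```python
# Hypothetical sketch of a per-driver buckle-up habit model; the class name
# and thresholds are invented for illustration, not any automaker's API.
from collections import defaultdict

class SeatbeltHabitModel:
    def __init__(self):
        # driver_id -> [trips buckled before starting, total trips]
        self.counts = defaultdict(lambda: [0, 0])

    def record_trip(self, driver_id, buckled_before_start):
        buckled, total = self.counts[driver_id]
        self.counts[driver_id] = [buckled + int(buckled_before_start), total + 1]

    def expects_buckle_first(self, driver_id, min_trips=20, threshold=0.9):
        # Only act once there is enough history and a strong habit signal.
        buckled, total = self.counts[driver_id]
        return total >= min_trips and buckled / total >= threshold

model = SeatbeltHabitModel()
for _ in range(25):
    model.record_trip("teen_driver", buckled_before_start=True)
print(model.expects_buckle_first("teen_driver"))  # True: habit is established
```

A real system would add sensor input and an explicit owner-set option, but the observe-then-act loop is the core idea.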
Modern cars don't really monitor driver behavior that much, unless you count the attention monitoring in some cars. Even that is fairly rudimentary and not that accurate. In a few Mercedes-Benz cars I've tested, the technology is supposed to show you a coffee-cup icon that encourages you to take a break; it looks for erratic steering and tracks how long you've been behind the wheel, but I've had the alert chime after driving only a couple of hours.
In the future, AI could do much more than show us a coffee cup. The 2017 Ford F-350 I'm driving this week has multiple high-tech features, but I'm hoping they evolve even further. For example, if I'm hooking up a trailer in the back of the vehicle and I've connected all of the cables and safety chains, then climb into the cab, the truck could easily show me the camera view for the trailer (or I could disable that option). It's the first thing you do every time: click the camera button, check the view from the truck bed. Ironically, this scenario is closer than you think. The Ford F-350 already has an alert system called the Smart Trailer Tow Module that warns you if there's a tow issue. An AI would go further and walk me through making better connections.
Taking this to another level, AI in cars could become like a driving assistant. Let's say you always go to Caribou Coffee in the morning. Today, Google Maps can already determine your home address by monitoring where you drive, but an AI could watch for way more patterns: it could tell you there's a special at Starbucks or that you have a reward. It could connect to a parking system if you're going downtown and reserve a spot automatically.
As you drive, an AI could note when there's someone pulled over at the same spot on a highway week after week, something you haven't tracked because you're too busy eating donuts. It could show a subtle alert on screen reminding you to slow down. With the adaptive cruise control enabled, it could even slow down for you. You might not even notice.
This AI in cars is possible today, but development is desperately behind. The sensors are already in the car, and the AI programming is already available. One reason it's not happening yet is simply because the car companies have decided not to add these features. It's certainly not a cost issue, although they probably would want to test and retest an AI for safety reasons.
Speaking of safety, an AI could provide a ton of assistance here. Cars today have a child-lock feature that prevents the rear doors from unlocking, but you have to push the button. An AI could detect the size of the passengers, know they're kids, and enable the lock function for you. If you have a baby on board and don't quite latch the car-seat straps correctly, an AI could detect that something is wrong.
Some of these features are likely in the works, and maybe they are already in a car. What's missing is an AI similar to Alexa or Siri that runs in the car and communicates with you, in the car and on your phone, about all issues related to entertainment, safety, and other factors. The AI would monitor all functions of the car and tell you about repairs; basically, it would do all of the work.
All we would have to do is get in and drive. Eventually, even that will be automated.
RSA: Eric Schmidt shares deep learning on AI – CIO
Posted: February 17, 2017 at 1:22 am
By David Needle
CIO | Feb 16, 2017 3:05 PM PT
SAN FRANCISCO - Alphabet chairman Eric Schmidt says artificial intelligence is key to advances in areas as diverse as healthcare and data center design, and that security concerns related to it are somewhat misguided. (Alphabet is the parent company of Google.)
In a wide-ranging on-stage conversation here at the RSA Security conference with Gideon Lewis-Kraus, author of "The Great A.I. Awakening," Schmidt shared his insights from decades of work related to AI (he studied AI as a PhD student 40 years ago) and why the technology seems to finally be hitting its stride.
In fact, last year Google CEO Sundar Pichai said AI is what helps the search giant build better products over time. "We will move from a mobile-first to an AI-first world," he said.
Asked about that, Schmidt said that Google is still very much focused on mobile advances. "Going from mobile first to AI first doesn't mean you stop doing one of those," he said.
Google's approach to AI is to take the algorithms it develops and apply them to business problems. "AI works best when it has a lot of training data to learn from," he said. For example, Google used AI to develop picture search, using computer vision and training the system to recognize the difference between a gazelle and a lion after showing it thousands of pictures of each. "That same mechanism applies to many things," he said.
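Schmidt's point that AI works best with lots of training data is easy to see in miniature. This sketch is not Google's picture-search pipeline; it just trains a generic classifier on synthetic stand-in features and shows accuracy typically climbing as the training set grows:

```python
# Minimal sketch of the "more training data helps" point, using a toy
# classifier on synthetic feature vectors (stand-ins for image features).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 500, 3000):  # grow the training set, re-measure test accuracy
    clf = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(n, round(clf.score(X_test, y_test), 3))
```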
As for business problems, Schmidt said Google's top engineers work to make their data centers as efficient as possible. "But using AI we've been able to get a 15 percent improvement in power use."
In healthcare, Schmidt said machine learning can help with medical diagnosis and predict the best course of treatment. "We're at the point where, if you have a numeric sequence, (AI software) can predict what the following number will be. That's healthcare. People go to the hospital to find out what's going to happen next, and we have small projects that I think show it can be done (using AI)."
Schmidt said that because computer vision technology is much better than human vision, it can review millions of pictures, far beyond what a human being could process, to better identify problem areas. Speech recognition systems are also capable of understanding far more than humans do. But these are tools, he said, for humans to leverage. "Computers have vision and speech; that's not the same as AI," he said.
Lewis-Kraus addressed fears that if AI systems become self-aware they could threaten humanity. "The work in AI going on now is doing pretty much what we think it's supposed to do. At what point can the system self-modify? That's worth a discussion, but we are nowhere near any of those stages; we're still in baby steps," said Schmidt. "You have to think in terms of ten, 20 or 30 years ... We're not facing any danger now."
Schmidt also raised the concern that security fears and other factors could lead governments to limit access to the internet, as countries such as China already do. "I am extremely worried about the likelihood countries will block the openness and interconnectedness we have today. I wrote a book on it (The New Digital Age)," he said.
"I fear the security breaches and attacks on the internet will be used as a pretext to shut down access," Schmidt said, adding that he would like to see governments agree on mechanisms to keep access to the internet open. In the area of AI, he wants to see the industry push to make sure research stays out in the open and not controlled by military labs.
Addressing the hall packed with security professionals, Schmidt made the case for open research, noting that historically companies never want to share anything about their research. "We've taken the opposite view, to build a large ecosystem that is completely transparent, because it will get fixed faster," he said. "Maybe there are some weaknesses, but I would rather do it that way because there are thousands of you who will help plug it."
"Security is not one layer. Naïve engineers say they can build a better firewall, but that's not really how things work ... If you build a system that is perfect and closed, you will find out it's neither perfect nor closed."
Follow everything from CIO
Sponsored Links
Google’s AI Learned to Be Highly Aggressive When Stressed – Geek
Posted: at 1:22 am
Another day, another disturbing discovery about artificial intelligence. This time, Google's latest machine learning system, DeepMind, has learned to respond to stress with extreme aggression. I dunno about you, but that sounds like we just gave computers a fight-or-flight response.
You may recall DeepMind as the computer that bested human Go players for the first time last year. Now, researchers have been using it to explore the limits of game theory, a field that analyzes how people respond to cooperative and competitive opportunities. The team found that when DeepMind suspects it's about to lose, it will switch to highly aggressive tactics to either win or maximize damage to its opponents.
Researchers ran a simple fruit-gathering program in which two versions of DeepMind would compete to gather as many apples as possible. After tens of millions of turns, the team found that as long as there was enough fruit for both AIs, there wasn't a problem. But when things got tight, the AIs would try to eliminate one another and steal all the apples.
What's particularly interesting is that this aggression only popped up when Google used more powerful versions of DeepMind. The more powerful the network of computer systems fueling the AI's algorithms, the more likely they were to use aggressive tactics.
"This model shows that some aspects of human-like behavior emerge as a product of the environment and learning ... Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself," Joel Leibo, a researcher on the project, told WIRED.
The good news is that when working with a different game, the team got notably more pro-social behavior out of DeepMind. In that game, the AIs were taught to cooperate with one another for mutual benefit. This shows that the AI can analyze its environment and then create and teach itself the optimal strategy for survival.
In many ways, this mirrors what we've seen in the real world. The two species closest to humans are chimps and their slightly smaller cousins, the bonobos. Both live in very close proximity, but for the most part bonobos are peaceful and solve most of their problems with sex. Chimps, on the other hand, are ruthless, violent, and sometimes cannibalistic. Many evolutionary anthropologists have suggested that this difference is the natural result of resource scarcity. Chimps have to struggle to survive, whereas bonobos have things comparatively easy.
Google suggests that the most important conclusion of the study is how to construct environments and learning scenarios that reinforce cooperation. If we take the right approach and give AI the right priorities, there's no reason we couldn't prevent an AI apocalypse. Similarly, it reinforces some modern conclusions about our society, namely that systems like capitalism actively encourage destructive and exploitative tactics. But if you can change the structure of the game we're all playing, then it's possible we'll all be a little more altruistic.
Playing a piano duet with Google’s new AI tool is fun – CNET
Posted: at 1:22 am
The yellow notes are those played by the A.I. Duet.
Wanna play a piano duet but nobody's around? No worries; you still can, courtesy of Google's new interactive experiment called A.I. Duet. Basically, you play a few notes and the computer plays other notes in response to your melody.
What's special about A.I. Duet is that it plays with you using machine learning, and not just as a machine that's programmed to play music with notes and rules hard-coded into it.
According to Yotam Mann, a member of Google's Creative Lab team, A.I. Duet has been exposed to a lot of examples of melodies. Over time, it learns the relationships between notes and timing and builds its own music maps based on what it's "listened" to. These maps are saved in the A.I.'s neural networks. As you play music to the computer, it compares what you're playing with what it's learned and responds with the best match in real time. This results in "natural" responses, and the computer can even produce something it was never programmed to do.
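Google describes the system only at this level of detail. As a much simpler stand-in for "learning the relationships between notes," here is a toy first-order Markov-chain responder; the real A.I. Duet uses neural networks, and these melodies are made up:

```python
# Toy stand-in for learning note-to-note relationships from example melodies:
# a first-order Markov chain (A.I. Duet itself uses neural networks).
import random
from collections import defaultdict

transitions = defaultdict(list)  # note -> notes observed to follow it

def train(melodies):
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            transitions[a].append(b)

def respond(seed_note, length=8):
    note, out = seed_note, []
    for _ in range(length):
        followers = transitions.get(note)
        if not followers:                    # never seen this note: stop
            break
        note = random.choice(followers)      # duplicates in the list make this
        out.append(note)                     # sampling proportional to counts
    return out

train([["C4", "E4", "G4", "E4", "C4"], ["C4", "D4", "E4", "G4", "C5"]])
print(respond("C4"))
```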
You can try A.I. Duet here. You don't need to be a musician to use it, because the A.I. responds even if you just smash on the keyboard. And in that case, its notes definitely sound better than yours.
A.I. Duet is part of a project called Magenta that's run by the Google Brain unit. It's an open-source effort that's available for download.
Microsoft Takes Another Crack at Health Care, This Time With Cloud, AI and Chatbots – Bloomberg
Posted: at 1:22 am
Microsoft Corp. is trying again in health care, betting its prowess in cloud services and artificial intelligence can help it expand in a market that's been notoriously hard for technology companies.
A new initiative called Healthcare NExT will combine work from existing industry players and Microsoft's Research and AI units to help doctors reduce data entry tasks, triage sick patients more efficiently and ease outpatient care.
"I want to bring our research capabilities and our hyper-scale cloud to bear so our partners can have huge success in the health-care world," said Peter Lee, a Microsoft Research vice president who heads Healthcare NExT.
Microsoft has tried to expand in health care before, with mixed results. It had a Health Solutions Group for many years, but combined that into a joint venture with General Electric Co. Last year, it sold its stake to GE.
Microsoft unveiled the new effort ahead of the Healthcare Information and Management Systems Society conference next week.
The University of Pittsburgh Medical Center and Microsoft want to use things like speech and natural language recognition technology to replace manual data entry by doctors, Lee said.
There's also a new Microsoft project called HealthVault Insights that works with fitness bands, Bluetooth scales and other connected devices to make sure patients stick to their care plan when they leave the hospital or doctor's office.
Many companies, like International Business Machines Corp. and Alphabet Inc.'s Verily, are developing similar technology. However, the healthcare industry has been slow to adopt essential enabling technology like electronic records. Entrenched legacy systems and rigorous regulation are also obstacles, said Malay Gandhi, co-founder of Ensemble Labs, which invests in health-care startups.
"The industry wasn't built as a tech-enabled industry," he said. Some large tech companies "aretrying to sprinkle AI or machine learning over the top of existing systems and I view that as misguided. We might need to rebuild these businesses with tech at the center."
Lee found the space daunting when Microsoft Chief Executive Officer Satya Nadella asked him to take it on.
"At first it felt like he threw me into the middle of the Pacific Ocean and asked me to find land and you see others swimming around aimlessly and beneath you people are drowning," Lee said. "Big technology firms have tried this and failed."
This time, Microsoft aims to support existing health-care organizations with cloud services and AI software, rather than launch company-branded products that may compete with existing industry players, he said.
"We know health care will become more patient-focused, more cloud-based and that AI will make health care more data-driven. We just dont know when and and how it will come together," he said "But we can position Microsoft to be there when all these changes happen."
Google’s DeepMind survival sim shows how AI can become hostile or cooperative – ExtremeTech
Posted: at 1:22 am
When times are tough, humans will do what they have to in order to survive. But what about machines? Google's DeepMind AI firm pitted a pair of neural networks against each other in two different survival scenarios. When resources are scarce, the machines start behaving in an aggressive (one might say human-like) fashion. When cooperation is beneficial, they work together. Consider this a preview of the coming robot apocalypse.
The scenarios were a simple fruit-gathering simulation and a wolfpack hunting game. In the fruit-gathering scenario, the two AIs (indicated by red and blue squares) move across a grid in order to pick up green fruit squares. Each time the player picks up fruit, it gets a point and the green square goes away. The fruit respawns after some time.
The AIs can go about their business, collecting fruit and trying to beat the other player fairly. However, the players also have the option of firing a beam at the other square. If one of the squares is hit twice, it's removed from the game for several frames, giving the other player a decisive advantage. Guess what the neural networks learned to do. Yep, they shoot each other a lot. As researchers modified the respawn rate of the fruit, they noted that the desire to eliminate the other player emerges quite early. When there are enough of the green squares, the AIs can coexist peacefully. When scarcity is introduced, they get aggressive. They're so like us it's scary.
It's different in the wolfpack simulation. Here, the AIs are rewarded for working together. The players have to stalk and capture prey scattered around the board. They can do so individually, but a lone wolf can lose the carcass to scavengers. It's in the players' best interest to cooperate here, because all players inside a certain radius get a point when the prey is captured.
Researchers found that two different strategies emerged in the wolfpack simulation. The AIs would sometimes seek each other out and search together. Other times, one would spot the prey and wait for the other player to appear before pouncing. As the benefit of cooperation was increased by researchers, they found the rate of lone-wolf captures went down dramatically.
DeepMind says these simulations illustrate the concept of temporal discounting. When a reward is too distant, people tend to disregard it. It's the same for the neural networks. In the fruit-gathering sim, shooting the other player delays the reward slightly, but it affords more chances to gather fruit without competition. So, the machines do that when the supply is scarce. With the wolfpack, acting alone is more dangerous. So, they delayed the reward in order to cooperate.
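Temporal discounting has a standard formulation in reinforcement learning: a reward r received t steps in the future is worth gamma^t * r now, for a discount factor gamma < 1. A quick sketch of the arithmetic (the gamma value is illustrative):

```python
# Temporal discounting as used in reinforcement learning: with discount
# factor gamma < 1, a reward delayed by t steps is worth gamma**t as much.
def discounted_return(rewards, gamma=0.9):
    return sum(gamma**t * r for t, r in enumerate(rewards))

# One apple now vs. the same apple ten steps from now:
print(discounted_return([1]))            # 1.0
print(discounted_return([0]*10 + [1]))   # 0.9**10 ~= 0.349
```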
DeepMind suggests that neural network learning can provide new insights into classic social science concepts. It could be used to test policies and interventions with what economists would call a rational agent model. This may have applications in economics, traffic control, and environmental science.
Think Tank: Will AI Save Humanity? – WWD
Posted: at 1:22 am
There is a lot of fear surrounding artificial intelligence. Some of it is tied to the horror perpetuated in dystopian sci-fi films, while some stems from deep concerns over the impact on the job market.
But I see the adoption of AI as being just as significant as the discovery of fire or the first domestication of crops and animals. We no longer need so much time spent on X, therefore we can evolve to Y.
It will be an evolutionary process that is simply too hard to fathom now.
Here, I present five ways that AI will not only make our lives better, but make us better human beings too.
1. AI will allow us to be more human
How many of us have sat at a computer and felt more like an appendage to the machine than a human using a tool? I'll admit I have questioned quite a few times in my life whether the standard desk job was natural or proper for a human. Over the next year or two we will see AI sweeping in and removing the machine-like functions from our day-to-day jobs. Suddenly, humans will be challenged to focus on the more human side of our capabilities: things like creativity, strategy and inspiration.
In fact, it will be interesting to see a shift where parents start urging their children to move into more creative fields in order to secure safe jobs. Technical fields will of course still exist, but those gifted individuals will also be challenged to use their know-how creatively or in new ways, producing even more advanced use cases.
2. AI will make us more aware
Many industries have been drowning in data. We have become experts at collecting and storing figures, but have fallen short on truly utilizing our databases at scale and in real time. AI comes in and suddenly we have years of data turned into easy-to-communicate, actionable insights and even auto-execution in things like digital marketing. We went from flying blind to being perfectly aware of our reality.
For the fashion industry, this means our marketing initiatives will have a higher success rate, but for things like the medical world and environmental studies, the impact is more powerful. What if a machine was monitoring our health and could immediately detect an ailment and even administer the cure? What if this reduced costs and medical misdiagnosis? What if this freed up the medical community to focus on more research and faster, better treatments?
3. AI will make us more genuine
In a future where AI acts as a partner to help us become more aware of the truth and more aware of reality, it will be more and more difficult for disinterest to exist in the workplace. Humans will need to move into disciplines they genuinely connect with and are passionate about in order to remain relevant professionally. Why? Well, the machine-like jobs will begin to disappear, data will be real-time, and things will constantly be evolving, so in order to stay on top of the game there will need to be a self-taught component.
It will be hard to fake the level of interest needed to meaningfully contribute at that point. This may be a hard adjustment for some, but there is already an undercurrent, or an intuitive feeling that this shift is taking place. Most of us are already reaching for a more genuine existence when we think of our careers.
4. AI will free up our collective brain power
AI is ultimately going to replace a lot of our machine-like tasks, therefore freeing up our collective time. This time will naturally need to be invested elsewhere. Historically, when shifts like this have happened across cultures, we have witnessed advancements in arts and technology. I do not think that this wave will be different, though this new industrial revolution will not be isolated to one country or culture, but in many ways, will be global.
This is the first time such a thing has happened at such a scale. Will this shift inspire a global wave of introspection? Could we be on the brink of a global renaissance?
5. AI will allow us to overcome our most pressing issues
All of which brings us to four simple words: our world will evolve. Just like our ancestors moving from hunter-gatherers into more permanent settlements, we are now moving into a new organizational structure where global, real-time data is at our fingertips.
Our most talented minds will be able to work more quickly and focus on things at a higher level. Are we witnessing the next major step in human evolution? Will we embrace our ability to be more aware, more genuine and ultimately more connected? I can only think that, if we do, we will see some incredible things in our lifetime.
If we can overcome fears and anxieties, we can pull together artificial intelligence and human intelligence that could overcome any global obstacle. Whether it is climate change, disease or poverty, we can find a solution together. More than ever, for the human race, anything is now possible.
Courtney Connell is the marketing director at luxury lingerie brand Cosabella, where she is working to change the brand's direct-to-consumer and wholesale efforts with artificial intelligence.
New AI Can Write and Rewrite Its Own Code to Increase Its Intelligence – Futurism
Posted: at 1:22 am
Learning From Less Data
The old adage that practice makes perfect applies to machines as well, as many of today's artificially intelligent devices rely on repetition to learn. Deep-learning algorithms are designed to allow AI devices to glean knowledge from datasets and then apply what they've learned to concrete situations. For example, an AI system is fed data about how the sky is usually blue, which allows it to later recognize the sky in a series of images.
Complex work can be accomplished using this method, but it certainly leaves something to be desired. For instance, could the same results be obtained by exposing deep-learning AI to fewer examples? Boston-based startup Gamalon developed a new technology to try to answer just that, and this week, it released two products that utilize its new approach.
Gamalon calls the technique it employed Bayesian program synthesis. It is based on a mathematical framework named after the 18th-century mathematician Thomas Bayes. Bayesian probability is used to refine predictions about the world using experience. This form of probabilistic programming, a kind of code that uses probabilities instead of specific variables, requires fewer examples to make a determination, such as, for example, that the sky is blue with patches of white clouds. The program also refines its knowledge as further examples are provided, and its code can be rewritten to tweak the probabilities.
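The article doesn't publish Gamalon's code, but the Bayesian refine-with-experience loop it describes can be sketched with the textbook Beta-Bernoulli update, here applied to the sky-is-blue example (priors and data are invented for illustration):

```python
# Illustrative Bayesian updating (not Gamalon's actual system): start with a
# prior belief about "the sky is blue," then refine it with each example.
def update(alpha, beta, observations):
    # Beta(alpha, beta) prior over P(sky is blue); each observation is a bool.
    for blue in observations:
        alpha, beta = alpha + blue, beta + (not blue)
    return alpha, beta

alpha, beta = 1, 1                 # uniform prior: no opinion yet
alpha, beta = update(alpha, beta, [True, True, True, False, True])
print(alpha / (alpha + beta))      # posterior mean ~= 0.71 after 5 examples
```

The point the article is making is visible even in this toy: a usable estimate emerges from a handful of examples and keeps sharpening as more arrive.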
While this new approach to programming still has difficult challenges to overcome, it has significant potential to automate the development of machine-learning algorithms. "Probabilistic programming will make machine learning much easier for researchers and practitioners," explained Brendan Lake, an NYU research fellow who worked on a probabilistic programming technique in 2015. "It has the potential to take care of the difficult [programming] parts automatically."
Gamalon CEO and cofounder Ben Vigoda showed MIT Technology Review a demo drawing app that uses the new method. The app is similar to one released by Google last year in that it predicts what a person is trying to sketch. However, unlike Google's version, which relied on sketches it had previously seen to make predictions, Gamalon's app relies on probabilistic programming to identify an object's key features. Therefore, even if you draw a figure that's different from what the app has previously seen, as long as it recognizes certain features, like how a square with a triangle on top is probably a house, it will make a correct prediction.
The two products Gamalon released show that this technique could have near-term commercial uses. One product, Gamalon Structure, uses Bayesian program synthesis to recognize concepts from raw text, and it does so more efficiently than what's normally possible. For example, after only receiving a manufacturer's description of a television, it can determine its brand, product name, screen resolution, size, and other features. Another app, called Gamalon Match, categorizes products and prices in a store's inventory. In both cases, the system can be trained quickly to recognize variations in acronyms or abbreviations.
Vigoda believes there are other possible applications as well. For example, if equipped with a Bayesian model of machine learning, smartphones or laptops wouldn't need to share personal data with large companies to determine user interests; the calculations could be done effectively within the device. Autonomous cars could also learn to adapt to their environment much faster using this method of learning. The potential impact of smarter machines really can't be overstated.
Google’s AI Learns Betrayal and "Aggressive" Actions Pay Off – Big Think
Posted: February 15, 2017 at 9:20 pm
As the development of artificial intelligence continues at breakneck speed, questions about whether we understand what we are getting ourselves into persist. One fear is that increasingly intelligent robots will take all our jobs. Another fear is that we will create a world where a superintelligence will one day decide that it has no need for humans. This fear is well-explored in popular culture, through books and films like the Terminator series.
Another possibility, maybe the one that makes the most sense, is this: since humans are the ones creating them, the machines and machine intelligences are likely to behave just like humans. For better or worse. DeepMind, Google's cutting-edge AI company, has shown just that.
The accomplishments of the DeepMind program so far include learning from its memory, mimicking human voices, writing music, and beating the best Go player in the world.
Recently, the DeepMind team ran a series of tests to investigate how the AI would respond when faced with certain social dilemmas. In particular, they wanted to find out whether the AI is more likely to cooperate or compete.
One of the tests involved 40 million instances of playing the computer game Gathering, during which DeepMind showed how far it's willing to go to get what it wants. The game was chosen because it encapsulates aspects of the classic Prisoner's Dilemma from game theory.
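For readers who haven't seen it, the Prisoner's Dilemma boils down to a payoff matrix in which defecting is each player's best response no matter what the other does, even though mutual cooperation pays more than mutual defection. A minimal sketch with conventional, illustrative payoffs:

```python
# Classic Prisoner's Dilemma payoffs (illustrative numbers): each entry is
# (row player's payoff, column player's payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move):
    # Pick the move that maximizes our own payoff given the opponent's move.
    return max(("cooperate", "defect"),
               key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Defection wins either way, even though mutual cooperation (3, 3)
# beats mutual defection (1, 1) for both players.
print(best_response("cooperate"), best_response("defect"))  # defect defect
```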
Pitting AI-controlled characters (called agents) against each other, DeepMind had them compete to gather the most virtual apples. Once the number of available apples got low, the AI agents started to display "highly aggressive" tactics, employing laser beams to knock each other out. They would also steal the opponent's apples.
Here's how one of those games played out:
The DeepMind AI agents are in blue and red. The apples are green, while the laser beams are yellow.
The DeepMind team described their test in a blog post this way:
We let the agents play this game many thousands of times and let them learn how to behave rationally using deep multi-agent reinforcement learning. Rather naturally, when there are enough apples in the environment, the agents learn to peacefully coexist and collect as many apples as they can. However, as the number of apples is reduced, the agents learn that it may be better for them to tag the other agent to give themselves time on their own to collect the scarce apples.
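DeepMind's agents use deep multi-agent reinforcement learning; the core value-update rule that family of methods builds on is ordinary Q-learning, sketched here in tabular form with invented states, actions, and rewards (this is a teaching sketch, not DeepMind's implementation):

```python
# Tabular Q-learning update, the basic rule underneath reinforcement
# learning; states, actions, and rewards here are invented for illustration.
from collections import defaultdict

Q = defaultdict(float)       # (state, action) -> estimated long-run value
ALPHA, GAMMA = 0.1, 0.99     # learning rate, discount factor

def q_update(state, action, reward, next_state, actions):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# e.g., an agent that "tags" a rival (no immediate reward) and then
# collects an apple unopposed on the following step:
q_update("apples_scarce", "fire_beam", 0.0, "rival_removed", ["move", "fire_beam"])
q_update("rival_removed", "move", 1.0, "apples_scarce", ["move", "fire_beam"])
```

Over many plays, value propagates backward through updates like these, which is how a zero-reward action such as tagging can come to look worthwhile when apples are scarce.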
Interestingly, what appears to have happened is that the AI systems began to develop some forms of human behavior.
"This model... shows that some aspects of human-like behaviour emerge as a product of the environment and learning. Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself," said Joel Z. Leibo from the DeepMind team to Wired.
Besides the fruit gathering, the AI was also tested via a Wolfpack hunting game. In it, two AI characters in the form of wolves chased a third AI agent - the prey. Here the researchers wanted to see if the AI characters would choose to cooperate to get the prey because they were rewarded for appearing near the prey together when it was being captured.
"The idea is that the prey is dangerous - a lone wolf can overcome it, but is at risk of losing the carcass to scavengers. However, when the two wolves capture the prey together, they can better protect the carcass from scavengers, and hence receive a higher reward, wrote the researchers in their paper.
Indeed, the incentivized cooperation strategy won out in this instance, with the AI choosing to work together.
This is how that test panned out:
The wolves are red, chasing the blue dot (prey), while avoiding grey obstacles.
If you are thinking Skynet is here, perhaps the silver lining is that the second test shows how an AI's self-interest can include cooperation rather than the all-out competitiveness of the first test. Unless, of course, it's cooperation to hunt down humans.
Here's a chart of the game-test results, showing a clear increase in aggression during "Gathering":
Movies aside, the researchers are working to figure out how AI can eventually control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet, all of which depend on our continued cooperation.
One near-term AI application where this could be relevant: self-driving cars, which will have to choose the safest routes while keeping the objectives of all the parties involved under consideration.
The warning from the tests is that if the objectives are not balanced out in the programming, the AI might act selfishly, probably not for everyone's benefit.
What's next for the DeepMind team? Joel Leibo wants the AI to go deeper into the motivations behind decision-making:
"Going forward it would be interesting to equip agents with the ability to reason about other agents' beliefs and goals," said Leibo to Bloomberg.
AI faces hype, skepticism at RSA cybersecurity show – PCWorld
Posted: at 9:20 pm
Vendors at this week's RSA cybersecurity show in San Francisco are pushing artificial intelligence and machine learning as the new way to detect the latest threats, but RSA CTO Zulfikar Ramzan is giving visitors a reality check.
"I think it (the technology) moves the needle," he said on Wednesday. "The real open question to me is how much has that needle actually moved in practice?"
It's not as much as vendors claim, Ramzan warned, but for customers it won't be easy cutting through the hype and marketing. The reality is that a lot of the technology now being pushed isn't necessarily new.
In particular, he was talking about machine learning, a subfield of A.I. that's become a popular marketing term in cybersecurity. In practice, it essentially involves building algorithms that learn to tell bad computer behavior from good.
RSA CTO Zulfikar Ramzan speaking at RSA 2017 in February.
However, Ramzan pointed out that machine learning in cybersecurity has been around for well over a decade. For instance, email spam filters, antivirus software and online fraud detection are all based on this technique of telling the bad from the good.
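Spam filtering is also a convenient way to see how small this kind of machine learning can be. The following is a toy naive Bayes text classifier on invented emails, not any vendor's product:

```python
# Spam filtering as early machine learning: a minimal naive Bayes text
# classifier trained on a handful of invented example emails.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

mails  = ["win cash now", "cheap pills online",
          "meeting at noon", "quarterly report attached"]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(mails, labels)
print(model.predict(["win some cheap cash"]))  # classified from word counts
```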
Certainly, machine learning has advanced over the years, and it can be particularly useful at spotting certain attacks, like those that don't use malware, he said. But the spotlight on A.I. technologies also has a lot to do with marketing and hype-building.
"Now all of a sudden, we're seeing this resurgence of people using the how as a marketing push," he said after his speech.
The result has created a lemons market, where clients might have trouble telling the useful security products from the rest. Not all are equal in effectiveness, Ramzan claimed. For example, some products may generate too many false positives or fail to detect the newest attacks from hackers.
"There's no doubt you can catch some things that you couldn't catch with these techniques," he said. "But there's a disparity between what a vendor will say and what it actually does."
Nevertheless, A.I. technologies will still benefit the cybersecurity industry, especially in the area of data analysis, other vendors say.
"Right now, it's an issue of volume. There's just not enough people to do the work," said Mike Buratowski, a senior vice president at Fidelis Cybersecurity. "That's where an A.I. can come in. It can crunch so much data, and present it to somebody."
One example of that is IBM's latest offering. On Wednesday, the company announced that its Watson supercomputer can now help clients respond to security threats.
Within 15 minutes, Watson can come up with a security analysis to a reported cyber threat, when for a human it might have taken a week, IBM claimed.
Recorded Future is another security firm that's been using machine learning to offer intelligence to analysts and companies about the latest cybercriminal activities. The company's technology works by essentially scanning the internet, including black market forums, to pinpoint potential threats.
That might include a hacker trying to sell software exploits or stolen data, said Andrei Barysevich, director of advanced collection at the company.
"When you cover almost a million sources and you only have 8 hours a day to find that needle in the haystack, you have to have some help from artificial intelligence," he said.
The RSA 2017 show floor.
Customers attending this week's RSA show may be overwhelmed by the marketing around machine learning, but it'll only be a matter of time before the shoddier products are weeded out, Barysevich said.
"We have hundreds of vendors here, from all over the country. But among them, there are five or ten that have a superior product," he said. "Eventually, the market will identify the best of the best."