The Prometheus League
Breaking News and Updates
Category Archives: Ai
The ‘Skynet’ Gambit – AI At The Brink – Seeking Alpha
Posted: July 29, 2017 at 7:14 pm
"The deployment of full artificial intelligence could well mean the end of the human race." - Stephen Hawking
"He can know his heart, but he don't want to. Rightly so. Best not to look in there. It ain't the heart of a creature that is bound in the way that God has set for it. You can find meanness in the least of creatures, but when God made man the devil was at his elbow. A creature that can do anything. Make a machine. And a machine to make the machine. And evil that can run itself a thousand years, no need to tend it." - Cormac McCarthy, Blood Meridian: Or the Evening Redness in the West
Let me declare at the outset that this article has been tough to write. I am by birthright an American, an optimist and a true believer in our innovative genius and its power to drive better lives for us and the world around us. I've grown up in the mellow sunshine of Moore's law, and lived firsthand in a world of unfettered innovation and creativity. That is why it is so difficult to write the following sentence:
It's time for federal regulation of AI and IoT technologies.
I say that reluctantly but with growing certainty. I have come to believe that we share a moral obligation to act now in order to protect our children and grandchildren. We need to take this moment, wake up, and listen to the voices warning us that the technologies powering the AI revolution are advancing so rapidly that they pose a clear and present danger to our lives and well-being.
So this article is about why I have come to feel that way and why I think you should join me in that feeling. Obviously, this has financial implications. Since you are a tech investor, you have almost certainly invested in one or more of the companies - like Nvidia (NASDAQ:NVDA), Google (NASDAQ:GOOG) (NASDAQ:GOOGL), and Baidu (NASDAQ:BIDU) - that are profiting from the breakneck advances we are seeing in AI base technologies and the myriad embedded use cases that make the technology so seductive. Indeed, if we look at the entire tech industry ecosystem, from chips through applications and beyond to the customers transforming their businesses through their use, we can hardly ignore the implications of this present circumstance.
So why? How did we get to this moment? Like me, you've probably been aware of the warnings of well-known luminaries like Elon Musk, Bill Gates, Stephen Hawking and many others, and, like me, you have probably noted their commentary but moved on to consider the next investment opportunity. Personally, being the optimist that I am, I certainly respected those arguments but believed even more strongly that we would innovate ourselves out of the danger zone. So why the change? Two words - one name - Bruce Schneier.
If you have been interested in the fields of cryptology and computer security, you have no doubt heard his name. Now with IBM (NYSE:IBM) as its chief spokesperson on security, he is a noted author and contributor to current thinking on the entire gamut of issues that confront us in this new era of the cloud, IoT, and Internet-based threats to personal privacy and computer system integrity. Mr. Schneier's seminal talk at the recent RSA conference brought it all into focus for me, and I encourage you to watch it. I will briefly recap his argument and then work out some of the consequences that flow from it. So here goes.
Schneier's case begins by identifying the problem - the rise of the cyber-physical system. He points out how our day-to-day reality is being subverted as IoT literally stands the world on its head, dematerializing and virtualizing our physical environment. What used to be dumb is now smart. Things that used to be discrete and disconnected are now networked and interconnected in subtle and powerful ways. This is the conceptual linkage that really connected the dots for me. As he puts it in his security blog:
We're building a world-size robot, and we don't even realize it. [...] The world-size robot is distributed. It doesn't have a singular body, and parts of it are controlled in different ways by different people. It doesn't have a central brain, and it has nothing even remotely resembling a consciousness. It doesn't have a single goal or focus. It's not even something we deliberately designed. It's something we have inadvertently built out of the everyday objects we live with and take for granted. It is the extension of our computers and networks into the real world. This world-size robot is actually more than the Internet of Things. [...] And while it's still not very smart, it'll get smarter. It'll get more powerful and more capable through all the interconnections we're building. It'll also get much more dangerous.
More powerful, indeed. It is at this point that AI and related technologies enter the equation to build a host of managers, agents, bots, natural language interfaces, and other facilities that allow us to leverage the immense scale and reach of our IoT devices - devices that, taken together, encompass our physical world and exert enormous power for good and, in the wrong hands, for evil.
Surely, we can manage this? Well, no, says Schneier - not the way we are going about it now. The problem, as he cogently points out, is that our business model for building software and systems is notoriously callous when it comes to security. Our "fail fast, fix fast," minimum-market-requirements-for-version-1 shipment protocol is famous for delivering product that comes with a "hack me first" invitation that is all too often accepted. So what's the difference, you may ask? We've been muddling along with this problem for years. We dig ourselves into trouble, we dig ourselves out. Fail fast, fix fast. Life goes on. Let's go make some money.
Or maybe it doesn't. The IoT phenomenon is leading us headlong into deployment of literally billions of sensors embedded deep in our most personal physical surroundings, connecting us to system entities and actors, nefarious and benign, that now have access to intimate data about our lives. Bad as that is, it's not the worst thing. This same access gives these bad actors the potential to control the machines that provide life-sustaining services to us. It's one thing to have your credit card data hacked; it's entirely another to have a bad actor in control of, say, the power grid, an operating theater robot, your car, or the engine of the airplane you're riding in. Our very lives depend on the integrity of these machines. Do we need to emphasize this point? Fail fast, fix fast does not belong in this world.
So if the prospect of a body-count stat on the next after-action report from some future hack doesn't alarm you, how about this scenario: What if it wasn't a hack? What if it was an unforeseen interaction of otherwise benign AIs that we are relying on to run the system in question? Can we be sure we fully understand the entire capability of an AI that is, say, balancing the second-to-second demands of the power grid?
One thing we can count on - the AI that we are building now will be smarter and more capable tomorrow. How smart is the AI we're building? How good is it? Scary good. So let's let Musk answer the question. How smart are these machines we're building? "[They'll be] smarter than us. They'll do everything better than us," he says. So what's the problem? You're not going to like the answer.
We won't know that the AI has a problem until it breaks, and even then we may not know why it broke. The intrinsic nature of the cognitive software we are building with deep neural nets is that a decision is the product of interactions among thousands and possibly millions of previous decisions at lower levels of the network, shaped by the training data, and those decision criteria may well have already changed as feedback loops communicate learning upstream and down. The system very possibly can't tell us "why." Indeed, the smarter the AI is, the less likely it may be able to answer the why question.
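To make the opacity concrete, here is a minimal sketch of our own (a toy network with made-up weights, not any production system). Even at this tiny scale, the output is the joint product of every parameter interacting with every input, with no human-readable rationale attached:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network with arbitrary weights. Even at this scale, a single
# output is the joint product of every parameter interacting with every input.
W1 = rng.normal(size=(64, 16))   # 1,024 first-layer weights
W2 = rng.normal(size=16)         # 16 second-layer weights

def predict(x):
    hidden = np.tanh(x @ W1)     # every input feature touches every hidden unit
    return float(hidden @ W2)    # every hidden unit touches the output

x = rng.normal(size=64)
print(predict(x))                # one number, with no attached rationale

# The only "explanation" on offer is the raw parameters themselves -- over a
# thousand coupled coefficients, none of which maps to a human concept.
print(W1.size + W2.size, "parameters contributed to that decision")
```

Scale that up by six or seven orders of magnitude, add continual retraining, and the "why" question becomes genuinely hard to answer.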
Hard as it is, we really need to understand the scale of the systems we are building. Think about autonomous cars as one, rather small, example. Worldwide, the industry built 88 million cars and light trucks in 2016 and another 26 million medium and heavy trucks. Sometime in the 2025-to-2030 time frame, all of them will be autonomous. With the rise of the driving-as-a-service model, there may not be as many new vehicles produced, but the numbers will still be huge, and autonomous fleet sizes will grow every year as older vehicles are replaced. What are the odds that the AI that runs these vehicles performs flawlessly? Can we expect perfection? Our very lives depend on it. God forbid a successful hack into this platform!
Beyond that, what if even perfection can kill us? Ultimately, these machines may require our guidance to make moral decisions. Question: You and your spouse are in a car in the center lane of a three-lane freeway, traveling at the 70 mph speed limit. A motorcyclist is directly to your left; to your right, a family of five rides in an autonomous minivan. Enter a drunk driving an old pickup the wrong way at high speed, weaving through the three lanes directly in your path. Should your car evade to the left lane and risk the life of the motorcyclist? One would hope our vehicle wouldn't move right and put the family of five at risk. Should it be programmed to follow a "first, do no harm" policy, which would avoid a swerve into either lane and would simply brake as hard as possible in the center lane and hope for the best?
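As a thought experiment only, here is what such a policy might look like reduced to code - a hypothetical sketch of the "first, do no harm" rule described above, not any manufacturer's actual software:

```python
# Hypothetical sketch of a "first, do no harm" evasion policy -- not any
# vendor's real code. Never swerve into an occupied lane; if both adjacent
# lanes are occupied, brake as hard as possible in the current lane.

def evasive_action(left_lane_occupied: bool, right_lane_occupied: bool) -> str:
    if not left_lane_occupied:
        return "swerve_left"
    if not right_lane_occupied:
        return "swerve_right"
    return "brake_hard_in_lane"   # harm no neighbor; hope for the best

# In the article's scenario, a motorcyclist is to the left and a family of
# five is to the right, so the policy brakes in the center lane:
print(evasive_action(left_lane_occupied=True, right_lane_occupied=True))
```

Even this five-line caricature embeds a moral policy choice, which is exactly the question at hand: who gets to write those branches?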
Whatever the scenario, the AIs we develop and deploy, however rich and deep the learning data they have been exposed to, will confront situations they haven't encountered before. In the dire example above and in more mundane conundrums, who ultimately sets the policy that must be adhered to? Should the developer? How about the user (in cases where this is practical)? Or should we have a common policy that all must adhere to? For sure, any policy implemented in our driving scenario above will save lives and perform better than any human driver. Even so, in vehicles, airplanes, SCADA systems, chemical plants and the myriad other AIs inhabiting devices that operate in innately hazardous regimes, will it be sufficient to let their in extremis actions be opaque and unknowable? Surely not, but will the AI as developed always give us the control to change it?
Finally, we must consider a factor that is certainly related to scale but is uniquely and qualitatively different - the network. How freely and ubiquitously should these AIs interconnect? Taken at face value, the decision seems to have been made. The very term Internet of Things seems to imply an interconnection policy as freewheeling and chaotic as our Internet of people. Is this what We, the People, want? Should some AIs - say, our nuclear reactors or, more generally, our SCADA systems - operate with limited or no network connection? Seems likely, but how much further should we go? Who makes the decision?
Beyond such basic questions come the larger issues brought on by the reality of network power. Let's consider the issue of learning and add to it the power of vast network scale in our new cyber-physical world. The word seems so simple, so innocuous. How could learning be a bad thing? AI-powered IoT systems must be connected to deliver the value we need from them. Our autonomous vehicles, terrestrial and airborne, for example, will be in constant communication with nearby traffic, improving our safety by step-function leaps.
So how does the fleet learn? Let's take the example from above. Whatever the result, the incident forensics will be sent to the cloud, where developers will presumably incorporate the new data into the master learning set. How will the new master be tested? For how long? How rigorously? What will be the redeployment model? Will the new, improved version of the AI be proprietary and not shared with other vehicle manufacturers, leaving their customers at a safety disadvantage? These are questions that demand government purview.
Certainly, there is no consensus here regarding the threat of AI. Andrew Ng of Baidu/Stanford disagrees that AI will be a threat to us in the foreseeable future. So does Mark Zuckerberg. But these disagreements are only with the overt existential threat - i.e., that a future AI may kill us. More broadly, there is very little disagreement that our AI/IoT-powered future poses broad economic and sociopolitical issues that could literally rip our societies apart. What issues? How about the massive loss of jobs and livelihoods for perhaps the majority of our population over the course of the next 20 years? As is nicely summarized in a recent NY Times article, AI will almost certainly exacerbate the already difficult problem we have with income disparities. Beyond that, the global consequences of the AI revolution could create a dangerous dependency dynamic for countries other than the US and China that do not own AI IP.
We could go on and on, but hopefully the issue is clear. Through the development and deployment of increasingly capable AI-powered IoT systems, we are embarking on a voyage into an exciting but dangerous future state that we can barely imagine from our current vantage point. Now is the time to step back and assess where we are and what we need to do going forward. Schneier's prescription is that the tech industry must get in front of this issue and drive a workable consensus among industry stakeholders, governmental authorities and regulatory bodies about the problem, its causes and potential effects, and, most importantly, a reasonable solution that protects the public while allowing the industry room to innovate and build.
There is no turning back, but we owe it to ourselves and our posterity to do our utmost to get it right. As technologists, we are inherently self-interested in protecting and nurturing the opportunity we all have in this exciting new realm. This is natural and understandable. Our singular focus on agility and innovation has brought the world many benefits and will bring many more. But we are not alone, and it would be completely irresponsible to insist that we are the only stakeholder in the outcomes we are engineering.
This decision - to engage and attempt to manage the design of the new and evolving regulatory regime - has enormous implications. There is undoubtedly risk. Poor or heavy-handed regulation could well exact a tremendous opportunity cost. One could well imagine a world in which Nvidia's GPU business is severely affected by regulatory inspection and delay, for example. But that is the very reason we need to engage now. The economic leverage that AI provides in every sector of our economy leads us inescapably to economic and wealth-building scenarios beyond anything the world has seen before. As participants and investors, we must do what we can to protect this opportunity to build unprecedented levels of wealth for our country and ourselves. Schneier argues that we are best serving our self-interest by engaging government now rather than burying our heads in the sand waiting for the inevitable backlash that will come when (not if!) these massive systems fail catastrophically in the future.
Schneier has the right idea. We need to broaden the conversation, lead the search for solutions, and communicate the message to the many non-tech constituencies - including all levels of government - that there is an exciting future ahead, but that future must include appropriate regulations that protect the American people and, indeed, the entire human race.
We won't get a second chance to get this right.
Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.
I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
Artificial intelligence system makes its own language, researchers pull the plug – WCVB Boston
Posted: at 7:14 pm
If we're going to create software that can think and speak for itself, we should at least know what it's saying. Right?
That was the conclusion reached by Facebook researchers who recently developed sophisticated negotiation software that started off speaking English. Two artificial intelligence agents, however, began conversing in their own shorthand that appeared to be gibberish but was perfectly coherent to the bots themselves.
A sample of their conversation:
Bob: I can can I I everything else.
Alice: Balls have zero to me to me to me to me to me to me to me to me to.
Dhruv Batra, a Georgia Tech researcher working at Facebook AI Research (FAIR), told Fast Co. Design "there was no reward" for the agents to stick to English as we know it, and that the phenomenon has occurred multiple times before. The invented shorthand is more efficient for the bots, but it makes the software difficult for developers to improve and work with.
"Agents will drift off understandable language and invent codewords for themselves," Batra said. Like if I say 'the' five times, you interpret that to mean I want five copies of this item. This isnt so different from the way communities of humans create shorthands."
Convenient as it may have been for the bots, Facebook decided to require the AI to speak in understandable English.
"Our interest was having bots who could talk to people," FAIR scientist Mike Lewis said.
In a June 14 post describing the project, FAIR researchers said the project "represents an important step for the research community and bot developers toward creating chatbots that can reason, converse, and negotiate, all key steps in building a personalized digital assistant."
Kasparov: ‘Embrace’ the AI revolution – BBC News – BBC News
Posted: at 7:14 pm
AI may destroy jobs but will create many more and increase productivity, said the chess grandmaster.
Top MIT AI Scientist to Elon Musk: Please Simmer Down – Inc.com
Posted: at 7:14 pm
Science fiction futures generally come in two flavors -- utopian and dystopian. Will tech kill routine drudgery and elevate humanity à la Star Trek or The Jetsons? Or will innovation be turned against us in some 1984-style nightmare? Or, worse yet, will the robots themselves turn against us (as in the highly entertaining Robopocalypse)?
This isn't just a question for fans of futuristic fiction. Currently, two of our smartest minds -- Elon Musk and Mark Zuckerberg -- are in a war of words over whether artificial intelligence is more likely to improve our lives or destroy them.
Musk is the pessimist of the two, warning that proactive regulation is needed to keep doomsday scenarios featuring smarter-than-human A.I.s from becoming a reality. Zuckerberg imagines a rosier future, arguing that premature regulation of A.I. will hold back helpful tech progress.
Each has accused the other of ignorance. Who's right in this battle of the tech titans?
If you're looking for a referee, you could do a lot worse than roboticist Rodney Brooks. He is the founding director of MIT's Computer Science and Artificial Intelligence Lab, and the co-founder of iRobot and Rethink Robotics. In short, he's one of the top minds in the field. So what does he think of the whole Zuckerberg vs. Musk smackdown?
In a wide-ranging interview with TechCrunch, Brooks came down pretty firmly on the side of optimists like Zuckerberg:
There are quite a few people out there who've said that A.I. is an existential threat: Stephen Hawking, Astronomer Royal Martin Rees, who has written a book about it, and they share a common thread, in that they don't work in A.I. themselves. For those who do work in A.I., we know how hard it is to get anything to actually work through product level.

Here's the reason that people -- including Elon -- make this mistake. When we see a person performing a task very well, we understand the competence [involved]. And I think they apply the same model to machine learning. [But they shouldn't.] When people saw DeepMind's AlphaGo beat the Korean champion and then beat the Chinese Go champion, they thought, 'Oh my god, this machine is so smart, it can do just about anything!' But I was at DeepMind in London about three weeks ago and [they admitted that things could easily have gone very wrong].

Brooks also argues against Musk's idea of early regulation of A.I., saying it's unclear exactly what should be prohibited at this stage. In fact, the only form of A.I. he would like to see regulated is self-driving cars -- such as those being developed by Musk's Tesla -- which Brooks claims present imminent and very real practical problems. (For example, should a 14-year-old be able to override and "drive" an obviously malfunctioning self-driving car?)
Are you more excited or worried about the future of artificial intelligence?
Google Has Started Adding Imagination to Its DeepMind AI – Futurism – Futurism
Posted: at 7:14 pm
Advanced AI
Researchers have started developing artificial intelligence with imagination - AI that can reason through decisions and make plans for the future, without being bound by human instructions.
Another way to put it would be imagining the consequences of actions before taking them, something we take for granted but which is much harder for robots to do.
The team working at Google-owned lab DeepMind says this ability is going to be crucial in developing AI algorithms for the future, allowing systems to better adapt to changing conditions that they haven't been specifically programmed for. Insert your usual fears of a robot uprising here.
"When placing a glass on the edge of a table, for example, we will likely pause to consider how stable it is and whether it might fall," explain the researchers in a blog post. "On the basis of that imagined consequence we might readjust the glass to prevent it from falling and breaking."
"If our algorithms are to develop equally sophisticated behaviours, they too must have the capability to imagine and reason about the future. Beyond that they must be able to construct a plan using this knowledge."
We've already seen a version of this forward planning in the Go victories that DeepMind's bots have scored over human opponents recently, as the AI works out the future outcomes that will result from its current actions.
The rules of the real world are much more varied and complex than the rules of Go, though, which is why the team has been working on a system that operates on another level.
To do this, the researchers combined several existing AI approaches, including reinforcement learning (learning through trial and error) and deep learning (learning by processing vast amounts of data through layered networks loosely inspired by the human brain).
What they ended up with is a system that mixes trial and error with simulation capabilities, so bots can learn about their environment, then think before they act.
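In outline, the idea can be sketched like this - our own toy model-based planner, not DeepMind's code: the agent imagines rollouts of each candidate action against a model of its environment, and only then commits to an action in the real world.

```python
import random

ACTIONS = ["left", "right"]

def simulate(model, state, action, depth=5):
    """Imagine one possible future from (state, action); return total reward."""
    total = 0.0
    for _ in range(depth):
        state, reward = model(state, action)
        total += reward
        action = random.choice(ACTIONS)        # random continuation policy
    return total

def plan(model, state, rollouts=20):
    """Pick the action whose imagined futures score best on average."""
    def score(action):
        return sum(simulate(model, state, action) for _ in range(rollouts)) / rollouts
    return max(ACTIONS, key=score)

# Toy environment model: moving right earns reward, moving left loses it.
def toy_model(state, action):
    delta = 1 if action == "right" else -1
    return state + delta, float(delta)

print(plan(toy_model, state=0))   # "right" -- chosen by imagination, not trial
```

The real systems learn the model itself from data rather than being handed one, but the planning loop - simulate, score, then act - is the same shape.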
One of the ways they tested the new algorithms was with a 1980s video game called Sokoban, in which players have to push crates around to solve puzzles. Some moves can make a level unsolvable, so advanced planning is needed, and the AI wasn't given the rules of the game beforehand.
The researchers found their new imaginative AI solved 85 percent of the levels it was given, compared with 60 percent for AI agents using older approaches.
"The imagination-augmented agents outperform the imagination-less baselines considerably," say the researchers. "They learn with less experience and are able to deal with the imperfections in modelling the environment."
The team noted a number of improvements in the new bots: they could handle gaps in their knowledge better, they were better at picking out useful information for their simulations, and they could learn different strategies to make plans with.
It's not just advance planning - it's advance planning with extra creativity, so potential future actions can be combined or mixed in different ways to identify the most promising routes forward.
Despite the success of DeepMind's testing, it's still early days for the technology, and these games are still a long way from representing the complexity of the real world. Still, it's a promising start in developing AI that won't put a glass of water on a table if it's likely to spill over, plus all kinds of other, more useful scenarios.
"Further analysis and consideration is required to provide scalable solutions to rich model-based agents that can use their imaginations to reason about and plan for the future," conclude the researchers.
The researchers also created a video of the AI in action.
You can read the two papers published to the pre-print website arXiv.org here and here.
Elon Musk and Mark Zuckerberg Are Arguing About AI — But They’re Both Missing the Point – Entrepreneur
Posted: July 28, 2017 at 7:15 pm
In Silicon Valley this week, a debate about the potential dangers (or lack thereof) of artificial intelligence has flared up between two tech billionaires.

Facebook CEO Mark Zuckerberg thinks that AI is going to make our lives better in the future, while SpaceX CEO Elon Musk believes that AI is "a fundamental risk to the existence of human civilization."

Who's right?
They're both right, but they're also both missing the point. The dangerous aspect of AI will always come from people and their use of it, not from the technology itself. As with advances in nuclear fusion, almost any technological development can be weaponized and used to cause damage if it falls into the wrong hands. The regulation of machine intelligence advancements will play a central role in whether Musk's doomsday prediction becomes a reality.
It would be wrong to say that Musk is hesitant to embrace the technology, since all of his companies are direct beneficiaries of the advances in machine learning. Take Tesla, for example, where self-driving capability is one of the biggest value-adds for its cars. Musk himself even believes that one day it will be safer to populate roads with AI drivers rather than human ones, though publicly he hopes that society will not ban human drivers in the future in an effort to save us from human error.
What Musk is really pushing for by being wary of AI technology is a framework that we as a society can use to build more awareness of the threats AI brings. Artificial General Intelligence (AGI), the kind that will make decisions on its own without any interference or guidance from humans, is still very far from how things work today. The AGI we see in the movies, where robots take over the planet and destroy humanity, is very different from the narrow AI that we use and iterate on within the industry now. In Zuckerberg's view, the doomsday conversation that Musk has sparked is a wildly exaggerated projection of where our technology advancements are headed.
While there is not much discussion in our government about apocalypse scenarios, there is definitely a conversation happening about preventing the potentially harmful impacts of artificial intelligence on society. The White House recently released a couple of reports on the future of artificial intelligence and on the economic effects it causes. The focus of these reports is on the future of work, job markets, and research on the increasing inequality that machine intelligence may bring.
There is also an attempt to tackle the very important issue of explainability: understanding the actions machine intelligence takes and the decisions it presents to us. For example, DARPA (the Defense Advanced Research Projects Agency), an agency within the U.S. Department of Defense, is funneling billions of dollars into projects that would pilot vehicles and aircraft, identify targets and even eliminate them on autopilot. If you thought the use of drone warfare was controversial, AI warfare will be even more so. That's why here, maybe more than in any other field, it is important to be mindful of the results AI presents.
Explainable AI (XAI), the initiative funded by DARPA, aims to create a suite of machine-learning techniques that produce results more explainable to human operators while maintaining a high level of learning performance. The other goal of XAI is to enable human users to understand, appropriately trust and effectively manage the emerging generation of artificially intelligent partners.
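One simple flavor of technique from this family is permutation importance - our illustrative example, not DARPA's method: scramble one input feature and measure how far the model's accuracy falls, giving a rough answer to "what did the decision depend on?"

```python
import numpy as np

def permutation_importance(model, X, y, feature, n_repeats=10, seed=0):
    """How much does accuracy drop when `feature` is scrambled?"""
    rng = np.random.default_rng(seed)
    baseline = (model(X) == y).mean()
    drops = []
    for _ in range(n_repeats):
        Xp = X.copy()
        rng.shuffle(Xp[:, feature])            # destroy this feature's signal
        drops.append(baseline - (model(Xp) == y).mean())
    return float(np.mean(drops))               # big drop = important feature

# Toy classifier that only ever looks at feature 0.
model = lambda X: (X[:, 0] > 0).astype(int)
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] > 0).astype(int)

for f in range(3):
    print(f"feature {f}: importance {permutation_importance(model, X, y, f):.2f}")
# feature 0 shows a large accuracy drop; features 1 and 2 show none.
```

Techniques like this only approximate an explanation from the outside; the harder XAI goal is models whose reasoning is legible in the first place.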
The XAI initiative can also help the government tackle the problem of ethics with more transparency. Sometimes developers of software have conscious or unconscious biases that eventually get built into an algorithm -- the way Nikon cameras became internet-famous for detecting someone blinking when pointed at the face of an Asian person, or HP computers were proclaimed racist for not detecting black faces on camera. Even developers with the best intentions can inadvertently produce systems with biased results, which is why, as the White House report states, AI needs good data; if the data is incomplete or biased, AI can exacerbate problems of bias.
Even with positive use cases, data bias can cause a lot of serious harm to society. Take China's recent initiative to use machine intelligence to predict and prevent crime. Of course, it makes sense to deploy complex algorithms that can spot a terrorist and prevent crime, but a lot of bad scenarios can happen if there is an existing bias in the training data for those algorithms.
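To see how bias by omission works - a deliberately simplified example of our own, not drawn from any real system - consider a training set where one group is badly under-represented:

```python
import numpy as np

# Skewed data: 900 examples from group A, 100 from group B, where the correct
# label differs by group. A "model" that only learned the majority pattern
# looks accurate overall while failing the minority group completely.
n_a, n_b = 900, 100
group = np.concatenate([np.zeros(n_a), np.ones(n_b)])   # 0 = A, 1 = B
label = np.concatenate([np.ones(n_a), np.zeros(n_b)])   # A -> 1, B -> 0

pred = np.ones_like(label)                              # always predict the A pattern

print(f"overall accuracy: {(pred == label).mean():.0%}")                          # 90%
print(f"group B accuracy: {(pred[group == 1] == label[group == 1]).mean():.0%}")  # 0%
```

A single headline accuracy number hides the failure entirely, which is why per-group evaluation matters in any high-stakes deployment.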
It is important to note that most of these risks already exist in our lives in some form or another, as when patients are misdiagnosed with cancer and not treated accordingly by doctors, or when police officers make intuitive decisions under chaotic conditions. The scale and lack of explainability of machine intelligence will magnify our exposure to these risks and raise a lot of uncomfortable ethical questions, like: who is responsible for a wrong prescription by an automated diagnosing AI? The doctor? The developer? The training data provider? This is why complex regulation will be needed to help navigate these issues and provide a framework for resolving the uncomfortable scenarios that AI will inevitably bring into society.
Artur Kiulian, M.S.A.I., is a partner at Colab, a Los Angeles-based venture studio that helps startups build technology products using the benefits of machine learning. An expert in artificial intelligence, Kiulian is the author of Robot is...
Baidu Curbs Spending on Food Delivery to Prep for AI – AdAge.com
Posted: at 7:15 pm
Baidu's move to slash spending on services from food delivery to travel helped the search giant soundly beat estimates, as it recovers from Chinese government restrictions and prepares to invest more in artificial intelligence.
China's largest search engine reported a better-than-projected 83% leap in net income after both general and traffic acquisition costs shrank. It's also considering a change in its operating structure to allow a rapidly growing finance unit -- a source of concern to Moody's Investors Service, among others -- to operate more independently.
Baidu forecast revenue for the third quarter of 23.1 billion yuan ($3.4 billion) to 23.8 billion yuan, versus the 23 billion yuan average of analysts' estimates compiled by Bloomberg. Net income soared to 4.4 billion yuan in the June quarter, sharply outpacing projections for 2.9 billion yuan.
The beat comes at a vital time for the company. It's asking investors to back investments in content and artificial intelligence projects such as autonomous driving, even though expensive forays into new businesses such as food delivery have failed to deliver market leadership. Group President Qi Lu has said the search giant can beat Alphabet Inc. at driverless cars within three to five years thanks to its Apollo program, which opens the technology up to partners.
"Our focus is to accelerate the commercialization of AI technologies," he told analysts on an earnings call.
Its U.S.-listed shares jumped 7.5% in extended trading. Baidu has cut back on costly subsidies and discounts for its struggling travel and food delivery units, part of an expansion into so-called online-to-offline or on-demand services.
But the company remains committed to spending big on TV and movie rights for a Netflix-like streaming video service called iQiyi, which has over 30 million paying subscribers. It also plans to buy content for a news aggregation service that relies on AI to target ads and content at 100 million daily active users.
"Marketing spending for O2O has come down quite visibly," said Kirk Boodry, an analyst with New Street Research. "While the numbers for the quarter looked good, we think the costs for their content this year are probably going to be back-loaded."
Revenue rose for a second straight quarter. Sales jumped 14% to 20.9 billion yuan in the June quarter versus projections for 20.7 billion yuan.
Online marketing revenue rose 5.6%, though the number of customers was down more than 20%. Baidu's ad business was hit hard last year after the government imposed harsher regulations and changed the tax status of a key product. The entire customer base had to re-register with stricter conditions and many chose to switch platforms, reducing the pool of advertisers. As a result, the company reported its first annual earnings decline since its 2005 initial public offering.
Baidu is now counting on AI projects to offset slowing growth in its core business of selling internet ads placed next to search results. One example is its financial services group, which lends money to students and others using the technology to determine credit risks. The push led Fitch Ratings and Moody's to place the company on review for a potential downgrade - both ratings agencies said the risks of such businesses were very different from its traditional strength as a search engine.
Baidu is now in the early stages of considering the structure of its finance arm. While Baidu's Lu didn't provide specifics, it may be trying to reduce risk while helping it get financial licenses available only to domestically controlled companies. Alibaba Group Holding Ltd. and JD.com Inc. have cited similar reasons when considering spinoffs of their own financial services businesses.
"We are beginning the process of working out a future operating structure that allows FSG to operate more independently to expand into areas that may require domestic licenses and enable stronger long term growth," Lu said.
-- Bloomberg News
A ‘potentially deadly’ mushroom-identifying app highlights the dangers of bad AI – The Verge
Posted: at 7:15 pm
There's a saying in the mushroom-picking community that all mushrooms are edible, but some mushrooms are only edible once.
That's why, when news spread on Twitter of an app that used "revolutionary AI" to identify mushrooms with a single picture, mycologists and fungi-foragers were worried. They called it "potentially deadly," and said that if people used it to try to identify edible mushrooms, they could end up very sick, or even dead.
Part of the problem, explains Colin Davidson, a mushroom forager with a PhD in microbiology, is that you can't identify a mushroom just by looking at it. "The most common mushroom near me is something called the yellow stainer," he told The Verge, "and it looks just like an edible horse mushroom from above and the side." But if you eat a yellow stainer there's a chance you'll be violently ill or even hospitalized. You need to pick it up and scratch it or smell it to actually tell what it is, explains Davidson. "It will bruise bright yellow or it will smell carbolic."
And this is only one example. There are plenty of edible mushrooms with toxic lookalikes, and when identifying them you need to study multiple angles to find features like gills and rings, while considering things like whether recent rainfall might have discolored the cap. Davidson adds that there are plenty of mushrooms that live up to their names, like the destroying angel or the death cap.
"One eighth of a death cap can kill you," he says. "But the worst part is, you'll feel sick for a while, then you might feel better and get on with your day, but then your organs will start failing. It's really horrible."
The app in question was developed by Silicon Valley designer Nicholas Sheriff, who says it was only ever intended to be used as a rough guide to mushrooms. When The Verge reached out to Sheriff to ask him about the app's safety and how it works, he said the app "wasn't built for mushroom hunters, it was for moms in their backyard trying to ID mushrooms." Sheriff added that he's currently pivoting to turn the app into a platform for chefs to buy and sell truffles.
When we tried the iOS-only software this morning, we found that Sheriff had changed its preview picture on the App Store to say "identify truffles instantly with just a pic." However, the name of the app remains Mushroom Instant Mushroom Plants Identification, and the description contains the same claim that so worried Davidson and others: "Simply point your phone at any mushroom and snap a pic, our revolutionary AI will instantly identify mushrooms, flowers, and even birds."
In our own tests, though, the app was unable to identify either common button or chestnut mushrooms, and crashed repeatedly. Motherboard also tried the app and found it couldn't identify a shiitake mushroom. Sheriff says he is planning on adding more data to improve the app's precision, and tells The Verge that his intention was never to try to replace experts, but to supplement their expertise.
And, of course, if you search the iOS or Android app stores, you'll find plenty of mushroom-identifying apps, most of which are catalogues of pictures and text. What's different about this one is that it claims to use machine vision and "revolutionary AI" to deliver its results - terms that seem specifically chosen to give people a false sense of confidence. If you're selling an app to identify flowers, this sort of language is merely disingenuous; when it's mushrooms you're spotting, it becomes potentially dangerous.
As Davidson says: "I'm absolutely enthralled by the idea of it. I would love to be able to go into a field and point my phone at a mushroom and find out what it is. But I would want quite a lot of convincing that it would be able to work." So far, we're not convinced.
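There is also a structural reason for skepticism, which a small sketch makes clear (toy numbers and hypothetical class names, not the app's actual model): a softmax classifier always produces a confident-looking answer, even for a photo unlike anything it was trained on, unless it is explicitly designed to abstain.

```python
import numpy as np

CLASSES = ["horse mushroom", "yellow stainer", "button mushroom"]

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Logits for a photo the model has effectively never seen: small and similar.
logits = np.array([0.6, 0.4, 0.3])
probs = softmax(logits)
top = int(probs.argmax())
print(CLASSES[top], f"{probs[top]:.0%}")    # still names a single species

# A safer design refuses to answer when the margin over the runner-up is thin:
if probs[top] - np.sort(probs)[-2] < 0.2:
    print("inconclusive -- do not eat; consult an expert")
```

An app that always returns its top guess, rather than abstaining on thin margins, is exactly the failure mode the foragers are warning about.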
Face it, AI is better at data-analysis than humans – TNW
Posted: at 7:15 pm
It's time we stopped pretending that we're computers and let the machines do their jobs. Anodot, a real-time analytics company, is using advanced machine-learning algorithms to overcome the limitations that humans bring to data analysis.
AI can chew up all your data and spit out more answers than you've got questions for, and the e-commerce businesses that don't integrate machine learning into data analysis will lose money.
We've all been there before: you've just launched a brand new product after spending millions on the development cycle and, for no discernible reason, your online marketplace crashes in a major market and you don't find out for several hours. All those sales: lost, like an unsaved file.
Okay, maybe we haven't all been there, but we've definitely been on the other end. Error messages on checkouts, product listings that lead nowhere, and, worst of all, shortages. If we don't get what we want when we want it, we'll get it somewhere else. Anomalies in the market, and the ability to respond to them, can be the difference between profits and shutters for any business.
Data analysis isn't a popular water-cooler topic anywhere, presumably even at companies that specialize in it. Rebecca Herson, Vice President of Marketing for Anodot, explains the need for AI in the field:
"There's just so much data being generated, there's no way for a human to go through it all. Sometimes, when we analyze historical data for businesses we're introducing Anodot to, they discern things they never knew were happening. Obviously businesses know if servers go down, but if you have a funnel leaking in a few different places it can be difficult to find all the problems."
The concern isn't just lost sales; there's also product-supply disruption and customer satisfaction to worry about. In numerous case studies, Anodot found that an estimated 80 percent of the anomalies its machine-learning software uncovered were negative factors, as opposed to positive opportunities. These companies were losing money because they weren't aware of specific problems.
We've seen data-analysis software before, but Anodot's use of machine learning is an entirely different application. Anodot is using unsupervised AI, built on deep learning, to autonomously find new ways to categorize and understand data.
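The core unsupervised idea is easy to sketch - here with a rolling z-score baseline, far simpler than Anodot's actual models: learn "normal" from the stream itself, and flag deviations without any labeled examples.

```python
import numpy as np

def anomalies(series, window=50, threshold=4.0):
    """Flag points that deviate sharply from their own recent history."""
    flagged = []
    for t in range(window, len(series)):
        history = series[t - window:t]
        mu, sigma = history.mean(), history.std() + 1e-9
        if abs(series[t] - mu) / sigma > threshold:
            flagged.append(t)
    return flagged

# Simulated metric: steady sales volume with a sudden, silent outage.
rng = np.random.default_rng(2)
sales = rng.normal(loc=100, scale=5, size=500)
sales[300:310] = 0.0                      # e.g. a broken checkout in one market

print(anomalies(sales))                   # flags the onset of the drop near t=300
```

No one told the detector what an outage looks like; the deviation from the metric's own baseline is the signal, which is what lets this approach scale to millions of metrics with no human watching each one.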
With customers like Microsoft, Google Waze, and Comcast, it might appear as though this software is prohibitively complex and designed for the tech elite, but Herson explains:
"This is something that, while data scientist is the new sexy profession, you won't need one to use this. It's got the data scientist baked in. If you have one, they can leverage this to provide immediate results. An e-commerce strategist can leverage the data and provide real-time analysis. This isn't something that requires a dedicated staff; your existing analysts can use this."
While we ponder the future of AI, companies like Anodot are applying it in all the right ways (see: non-lethal and money-saving). Automating data analysis isn't quite as thrilling as an AI that can write speeches for the President, but it's far more useful.
Google creates AI that can make its own plans and envisage consequences of its actions – The Independent
Posted: July 27, 2017 at 10:27 am