The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Ai
How AI will teach us to have more empathy – VentureBeat
Posted: July 23, 2017 at 1:10 am
"John, did you remember it's your anniversary?"
This message did not appear in my inbox, and Alexa didn't say it aloud the other day. I do have reminders on Facebook, of course. Yet there isn't an AI powering my life decisions yet. Some day, AI will become more proactive, assistive, and much smarter. In the long run, it will teach us to have more empathy: the great irony of the upcoming machine learning age.
You can picture how this might work. In 2033, you walk into a meeting, and an AI that connects to your synapses scans the room, a la Google Glass without the hardware. Because science has advanced so much, the AI knows how you are feeling. You're tense. The AI uses facial recognition to determine who is there and your history with each person. The guy in accounting is a jerk, and you hardly know the marketing team.
You sit down at the table and glance at a HUD that shows you the bios for a couple of the marketing people. You see a note about the guy in accounting. He sent an email out about his sick Labrador the week before. "How is your dog doing?" you ask. Based on their bios, you realize the marketing folks are young bucks just starting their careers. You relax a little.
I like the idea of an AI becoming more aware of our lives, of the people around us and our circumstances. It's more than remembering an anniversary. We can use an AI to augment almost any activity: sales and marketing, product briefings, graphic design. An AI can help us understand more about the people on our team, including coworkers and clients. It could help us in our personal lives with family members and friends. It could help in formal situations.
Yes, it sounds a bit like an episode of Black Mirror. When the AI makes a mistake and tells you someone had a family member who died but gives you the wrong name, you will look foolish. And that will happen. I also see a major advantage in having an AI work a bit like a GPS. Today, there's a lot less stress involved in driving in an unfamiliar place. (There's also the problem of people not actually knowing how to read a map and relying too much on a GPS.) An AI could help us see another person's point of view: their background and experiences, their opinions. An AI could give us more empathy because it can provide more contextual information.
This also sounds like the movie Her, where there is a talking voice. I see the AI as knowing more about our lives and our surroundings, then interacting with the devices we use. The AI knows about our car and our driving habits, and knows when we normally wake up. It will let people know when we're late to a meeting, and send us information that is helpful for social situations. We'll use an AI through a text interface, in a car, and on our computers.
This AI won't provide a constant stream of information, but the right amount: the amount it knows we need to reduce stress or understand people on a deeper level. "John likes coffee, you should offer to buy him some" is one example. "Jane's daughter had a soccer game last night, ask how it went." This kind of AI will help in ways other than just providing information. It will be more like a subtext to help us communicate better and augment our daily activities.
Someday, maybe two decades from now, we'll remember when AI was used just for parsing information. We'll wonder how we ever used AI without the human element.
Why AI needs a human touch – VentureBeat
Posted: at 1:10 am
Elon Musk caused a media stir recently. Not for his innovative technologies or promises to commercialize space travel. In front of a meeting of the National Governors Association, the Tesla CEO warned attendees that "[artificial intelligence] AI is a fundamental existential risk for human civilization." Based on his observations, Musk cautioned that AI is "the scariest problem."
It's not the first time he's sounded this alarm. He made headlines with it a few years ago. In fact, Musk is so concerned, he suggested something almost unthinkable for most tech leaders: government regulation.
What AI needs, in fact, is a human touch.
AI is most certainly here as a fixture in our lives, from suggesting news articles we might like, to Siri on your phone, to credit card fraud detection, to autonomous-driving capabilities in cars. But are we having the right conversations about its impact? There are conversations about the kinds of job loss that might come from future technologies like self-driving cars, or the blue-collar jobs that might be lost to increasingly automated processes. But do we really need to look far into the future to see its impact and its potential for harm? And are these impacts only relegated to entry-level jobs in transportation or manufacturing?
The reality is much more complicated, widespread, and immediate than our current public dialogue or Musk's diatribe betrays.
An immediate opportunity, and also a risk, is that the first variations of AI are destined to repeat the issues that already exist. But what happens when you need to move beyond a historical mold?
When managed by and for people, AI creates new opportunities for ingenuity.
For example, many mid- to large-size companies use AI in hiring today to source candidates using technologies that search databases like LinkedIn. These sourcing methods typically use algorithms based on current staff and will, therefore, only identify people who look a lot like the current employees. Instead of moving an organization forward and finding people who complement current capabilities, this will instead build a culture of sameness and homogeneity that does not anticipate future needs.
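To make the mechanics concrete, here is a minimal sketch in Python (invented data, assumed numeric feature encoding) of why similarity-based sourcing reproduces the current workforce: candidates are ranked by how closely they resemble existing employees, so anyone unlike the current staff sinks to the bottom of the list.

```python
# A minimal sketch, not any vendor's actual system: candidates and current
# employees are represented as feature vectors, and candidates are ranked
# by average cosine similarity to the existing staff.
import numpy as np

rng = np.random.default_rng(0)
current_staff = rng.normal(loc=1.0, size=(50, 8))   # feature vectors of existing employees
candidates = rng.normal(loc=0.0, size=(200, 8))     # a more varied applicant pool

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Score each candidate by average similarity to the current staff.
scores = np.array([np.mean([cosine(c, s) for s in current_staff]) for c in candidates])
shortlist = np.argsort(scores)[::-1][:10]  # the top 10 "best fits" are the closest look-alikes
print(shortlist)
```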
As these AI sourcing methods become pervasive, HR and talent acquisition professionals wonder what this means for the industry and for their jobs. Will we still need recruiters now that we have AI to cover many hiring responsibilities?
The answer is a resounding yes.
Where AI algorithms encourage sameness and disqualify huge swaths of potentially qualified candidates simply because they don't look like current employees, humans can identify the gaps in capabilities and personality and use that to promote more innovative hiring. Companies are looking for new and different approaches, creative solutions, and new talents. To evolve, they need to anticipate future directions and adapt to meet those challenges. They need a diverse range of problem solvers, and they need new and varied skills they've never hired for before. AI cannot deliver those candidates. People can.
While AI can be incredibly useful, the biggest harm comes when it is used without human input. We need humans to think creatively and abstractly about the problems we face, to devise new and innovative strategies, to test out different approaches, and to look to the future for upcoming challenges and opportunities. We need to be sure we aren't using algorithms to replicate a past that does not meet the needs of the future.
Laura Mather is the founder and CEO of Talent Sonar.
Google’s AI Fight Club Will Train Systems to Defend Against Future Cyberattacks – Futurism
Posted: July 22, 2017 at 8:12 am
In Brief
Google Brain and data science platform Kaggle have announced an "AI Fight Club" to train machine learning systems on how to combat malicious AI. As computer systems become smarter, cyberattacks also become tougher to defend against, and this contest could help illuminate unforeseen vulnerabilities.
Reinforcing AI Systems
When artificial intelligence (AI) is discussed today, most people are referring to machine learning algorithms or deep learning systems. While AI has advanced significantly over the years, the principle behind these technologies remains the same. Someone trains a system to receive certain data and asks it to produce a specified outcome; it's up to the machine to develop its own algorithm to reach this outcome.
Alas, while we've been able to create some very smart systems, they are not foolproof. Yet.
Data science competition platform Kaggle wants to prepare AI systems for super-smart cyberattacks, and they're doing so by pitting AI against AI in a contest dubbed the Competition on Adversarial Attacks and Defenses. The battle is organized by Google Brain and will be part of the Neural Information Processing Systems (NIPS) Foundation's 2017 competition track later this year.
This AI fight club will feature three adversarial challenges. The first (non-targeted adversarial attack) involves getting algorithms to confuse a machine learning system so it won't work properly. Another battle (targeted adversarial attack) requires training one AI to force another to classify data incorrectly. The third challenge (defense against adversarial attacks) focuses on beefing up a smart system's defenses.
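For a sense of what those first two challenge types look like in code, here is a minimal numpy sketch against a toy linear classifier. The weights and input are invented, and the perturbation is FGSM-style gradient nudging, not the contest's actual submission format.

```python
# Toy illustration of non-targeted vs. targeted adversarial perturbation.
# The non-targeted attack nudges the input to raise the loss on the true
# label; the targeted attack nudges it to lower the loss on a chosen label.
import numpy as np

W = np.random.default_rng(1).normal(size=(3, 4))  # 3 classes, 4 features (invented)
b = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_wrt_input(x, label):
    # gradient of softmax cross-entropy loss w.r.t. the input, for a linear model
    p = softmax(W @ x + b)
    p[label] -= 1.0
    return W.T @ p

x = np.array([0.5, -1.0, 2.0, 0.1])
eps = 0.1
x_nontargeted = x + eps * np.sign(grad_wrt_input(x, label=0))  # push away from true class 0
x_targeted = x - eps * np.sign(grad_wrt_input(x, label=2))     # pull toward chosen class 2
print(softmax(W @ x), softmax(W @ x_nontargeted), softmax(W @ x_targeted))
```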
"It's a brilliant idea to catalyze research into both fooling deep neural networks and designing deep neural networks that cannot be fooled," Jeff Clune, a University of Wyoming assistant professor whose own work involves studying the limits of machine learning systems, told the MIT Technology Review.
AI is actually more pervasive now than most people think, and as computer systems have become more advanced, the use of machine learning algorithms has become more common. The problem is that the same smart technology can be used to undermine these systems.
"Computer security is definitely moving toward machine learning," Google Brain researcher Ian Goodfellow told the MIT Technology Review. "The bad guys will be using machine learning to automate their attacks, and we will be using machine learning to defend."
Training AI to fight malicious AI is the best way to prepare for these attacks, but that's easier said than done. "Adversarial machine learning is more difficult to study than conventional machine learning," explained Goodfellow. "It's hard to tell if your attack is strong or if your defense is actually weak."
The unpredictability of AI is one of the reasons some, including serial entrepreneur Elon Musk, are concerned that the tech may prove malicious in the future. They suggest that AI development be carefully monitored and regulated, but ultimately, it's the people behind these systems, and not the systems themselves, that present the true threat.
In an effort to get ahead of the problem, the Institute of Electrical and Electronics Engineers has created guidelines for ethical AI, and groups like the Partnership on AI have also set up standards. Kaggle's contest could illuminate new AI vulnerabilities that must be accounted for in future regulations, and by continuing to approach AI development cautiously, we can do more to ensure that the tech isn't used for nefarious purposes in the future.
How AI Is Already Changing Business – Harvard Business Review
Posted: July 21, 2017 at 12:16 pm
Erik Brynjolfsson, MIT Sloan School professor, explains how rapid advances in machine learning are presenting new opportunities for businesses. He breaks down how the technology works and what it can and can't do (yet). He also discusses the potential impact of AI on the economy and how workforces will interact with it in the future, and suggests managers start experimenting now. Brynjolfsson is the co-author, with Andrew McAfee, of the HBR Big Idea article "The Business of Artificial Intelligence." They're also the co-authors of the new book Machine, Platform, Crowd: Harnessing Our Digital Future.
SARAH GREEN CARMICHAEL: Welcome to the HBR IdeaCast from Harvard Business Review. I'm Sarah Green Carmichael.
It's a pretty sad photo when you look at it. A robot, just over a meter tall and shaped kind of like a pudgy rocket ship, lying on its side in a shallow pool in the courtyard of a Washington, D.C. office building. Workers, human ones, stand around, trying to figure out how to rescue it.
The security robot had been on the job for just a few days when the mishap occurred. One entrepreneur who works in the office complex wrote: "We were promised flying cars. Instead we got suicidal robots."
For many people online, the snapshot symbolized something about the autonomous future that awaits. Robots are coming, and computers can do all kinds of new work for us. Cars can drive themselves. For some people this is exciting, but there is also clearly fear out there about dystopia. Tesla CEO Elon Musk calls artificial intelligence an existential threat.
But our guest on the show today is cautiously optimistic. He's been watching how businesses are using artificial intelligence and how advances in machine learning will change how we work. Erik Brynjolfsson teaches at MIT Sloan School and runs the MIT Initiative on the Digital Economy. And he's the co-author, with Andrew McAfee, of the new HBR article "The Business of Artificial Intelligence."
Erik, thanks for talking with the HBR IdeaCast.
ERIK BRYNJOLFSSON: It's a pleasure.
SARAH GREEN CARMICHAEL: Why are you cautiously optimistic about the future of AI?
ERIK BRYNJOLFSSON: Well, actually, that story you told about the robot that had trouble was a great lead-in, because in many ways it epitomizes some of the strengths and weaknesses of robots today. Machines are quite powerful and in many ways they're superhuman. You know, just as a calculator can do arithmetic a lot better than me, we're having artificial intelligence that's able to do all sorts of functions, in terms of recognizing different kinds of cancer images, or now getting superhuman even in speech recognition in some applications. But they're also quite narrow. They don't have general intelligence the way people do. And that's why partnerships of humans and machines are often going to be the most successful in business.
SARAH GREEN CARMICHAEL: You know it's funny, 'cause when you talk about image recognition I think about a fantastic image in your article that is called Puppy or Muffin. I was amazed at how much puppies and muffins look alike, and sort of even more amazed that robots can tell them apart.
ERIK BRYNJOLFSSON: Yeah, it's a funny image. It always gets a laugh, and I encourage people to go take a look at it. And there are lots of things that humans are pretty good at, like distinguishing different kinds of images. And for a long time, machines were nowhere near as good. As recently as seven, eight years ago, machines made about a 30 percent error rate on ImageNet, this big database that Fei-Fei Li created of over 10 million images. Now machines are down to, you know, less than 5%, 3-4% depending on how it's set up. Humans still have about a 5% error rate. Sometimes they get those puppies and muffins wrong. Be careful what you reach for next time you're at that breakfast bar. But that's a good example.
The reason it's improved so much in the past few years is because of this new approach using deep neural nets that's gotten much more powerful for image recognition and really all sorts of different applications. I think that's a big reason why there's so much excitement these days.
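For readers who want the error-rate arithmetic pinned down, here is a minimal Python sketch, with invented labels and scores, of how top-1 and top-5 error (the standard ImageNet metrics Brynjolfsson is quoting) are measured:

```python
# Stand-in labels and model scores, invented for illustration. Top-1 error
# is the share of images whose single highest-scoring class is wrong;
# ImageNet results are also commonly reported as top-5 error.
import numpy as np

rng = np.random.default_rng(2)
true_labels = rng.integers(0, 1000, size=10_000)      # stand-in for ImageNet labels
scores = rng.normal(size=(10_000, 1000))              # stand-in for model scores
scores[np.arange(10_000), true_labels] += 3.0         # pretend the model is fairly good

top1_err = np.mean(scores.argmax(axis=1) != true_labels)
top5 = np.argsort(scores, axis=1)[:, -5:]             # five highest-scoring classes per image
top5_err = np.mean([t not in row for t, row in zip(true_labels, top5)])
print(f"top-1 error {top1_err:.1%}, top-5 error {top5_err:.1%}")
```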
SARAH GREEN CARMICHAEL: Yeah, it's one of those things where we all kind of like to make fun of machines that get it wrong, but also it's sort of terrifying when they get it right.
ERIK BRYNJOLFSSON: Yeah. Machines are not going to be perfect drivers, they're not going to be perfect at making credit decisions, they're not going to be perfect at distinguishing, you know, muffins and puppies. And so, we have to make sure we build systems that are robust to those imperfections. But the point Andy and I make in the article is that, you know, humans aren't perfect at any of those tasks either. And so, the benchmark for most entrepreneurs and managers is: who's going to be better at solving this particular task, or better yet, can we create a system that combines the strengths of both humans and machines and does something better than either of them would do individually.
SARAH GREEN CARMICHAEL: With photo recognition and facial recognition, I know that Facebook's facial recognition software can't tell the difference between me wearing makeup and me not wearing makeup, which is also sort of funny and horrifying, right? But at the same time, you know, I think a lot of us struggle to recognize people out of context; we see someone at the grocery store and we think, you know, I know that person from somewhere. So, it's something that humans don't always get right either.
ERIK BRYNJOLFSSON: Oh yeah. I'm the world's worst. You know, at conferences I would love it if there was a little machine whispering in my ear who this person is and how I met them before. So there, you know, there are those kinds of tradeoffs. But it can lead to some risks. For instance, you know, if machines are making bad decisions on important things, like who should get parole or who gets credit or not, that could be really problematic. Worse yet, sometimes they have biases that are built in from the data sets they use. If the people you hired in the past all had a certain kind of ethnic or gender tilt to them, then if you use that as a training set and teach the machine how to hire people, it will learn the same biases that the humans had previously. And, of course, that can be perpetuated and scaled up in ways that we wouldn't like to see.
SARAH GREEN CARMICHAEL: There is a lot of hype right now around AI, or artificial intelligence. Some people say machine learning; other people come along and say: hold on, hold on, hold on, a lot of this is just software and we've been using it for a long time. So how do you kind of think through the different terms and what they really mean?
ERIK BRYNJOLFSSON: Well, there's a really important difference between the way the machines are working now versus previously. You know, Andy McAfee and I wrote this book The Second Machine Age, where we talked about having machines do more and more cognitive tasks. And for most of the past 30 or 40 years, that's been done by us painstakingly programming, writing code of exactly what we want the machine to do. You know, if it's doing tax preparation, add up this number and multiply it by that number, and of course we had to understand exactly what the task was in order to specify it.
But now the new machine learning approaches literally have the machines learn on their own things that we don't know how to explain. Face recognition is a perfect example. It would be really hard for me to describe, you know, my mother's face, how far apart her eyes are or what her ear looks like.
ERIK BRYNJOLFSSON: I can recognize it, but I couldn't really write code to do it. And the way the machines are working now is, instead of having us write the code, we give them lots and lots of examples. You know, here are pictures of my mom from different perspectives, or here are pictures of cats and dogs, or here's a piece of speech, you know, with the word yes and the word no. And if you give them enough examples, the machine learning algorithms figure out the rules on their own.
That's a real breakthrough. It overcomes what we call Polanyi's paradox. Michael Polanyi, the polymath and philosopher from the 1960s, famously said "We all know more than we can tell," but with machine learning we don't have to be able to tell or explain what to do. We just have to show examples. That change is what's opening up so many new applications for machines and allowing them to do a whole set of things that previously only humans could do.
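Here is a minimal sketch of that shift, using scikit-learn on invented two-dimensional data: nobody writes the rule separating the two classes, and the model infers it from labeled examples alone.

```python
# "Show examples, don't write rules": the boundary between the two classes
# is never coded by hand; it is learned from labeled toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
cats = rng.normal(loc=[0, 0], scale=0.7, size=(100, 2))  # stand-ins for "cat" examples
dogs = rng.normal(loc=[2, 2], scale=0.7, size=(100, 2))  # stand-ins for "dog" examples
X = np.vstack([cats, dogs])
y = np.array([0] * 100 + [1] * 100)

clf = LogisticRegression().fit(X, y)           # the machine derives its own rule
print(clf.predict([[0.1, -0.2], [1.9, 2.3]]))  # -> [0 1], with no hand-written rule anywhere
```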
SARAH GREEN CARMICHAEL: So, it's interesting to think about the human work that has to go into training the machines, like someone who would sit there literally looking at pictures of blueberry muffins and tagging them muffin, muffin, muffin so the machine, you know, learns that's not a Chihuahua, that's a blueberry muffin. Is that the kind of thing where in the future you could see that kind of rote machine-training work being a low-paid dead-end job, whereas maybe that person once would have had a more interesting job but now the machine has the more interesting job?
ERIK BRYNJOLFSSON: I don't think that's going to be a big source of employment, but it is true there are places like Amazon's Mechanical Turk where thousands of people do exactly what you said; they tag images and label them. That's how ImageNet, the database of millions of images, got labeled. And so, there are people being hired to do that. Companies sometimes find that training machines by having humans tag the data is one way to proceed.
But often they can find ways of getting data that's already tagged in some way, that's generated from their enterprise resource planning system or from their call center. And if they're clever, that will lead to the creation of this tagged data. And I should back up a bit and say that one of the machines' big weaknesses is that they really do need tagged data. That's the most powerful kind of algorithm, sometimes called supervised learning, where humans have tagged the data in advance and explained what it means.
And then the machine learns from those examples and eventually can extrapolate to other kinds of examples. But unlike humans, they often need thousands or even millions of examples to do a good job, whereas, you know, a two-year-old probably would learn after one or two times what a cat was versus a dog; you wouldn't have to show them 10,000 pictures of a cat before they got it.
SARAH GREEN CARMICHAEL: Right. Given where we are with AI and machine learning right now, on balance, do you feel like this is something that is overhyped, and people talk about it in sort of too science-fiction terms, or is it something that's not quite hyped enough, and actually people are underestimating what it could do in the relatively near future?
ERIK BRYNJOLFSSON: Well, it's actually both at the same time, if you can believe it. I think that people have unrealistic expectations about machines having all these general capabilities, kind of from watching science fiction like the Terminator. And if a machine can understand Chinese characters, you might think it also could understand Chinese speech, and that it could recommend a good Chinese restaurant, or know a little bit about the Qing dynasty, and none of that would be true. A machine that can play expert chess can't even play checkers or Go or other games. So, in a way they're very narrow and fragile.
But on the other hand, I think the set of applications for those narrow capabilities is quite large. Using those supervised learning algorithms, there are a lot more specific tasks that could be done that we've only scratched the surface of, and because the algorithms have improved so much in the past five or 10 years, most of those opportunities have not yet really been explored or even discovered. There are a few places where the big giants like Google and Microsoft and Facebook have made rapid progress, but I think that there are literally tens of thousands of narrower applications that small and medium businesses could start using machine learning for in their own areas.
SARAH GREEN CARMICHAEL: What are some examples of ways that companies are using this technology right now?
ERIK BRYNJOLFSSON: Well, one of my favorite ones I learned from my friend Sebastian Thrun. He's the founder of Udacity, the online learning company, which by the way is a good way to learn more about these technologies. But he found that when people were coming to his site and asking questions in the chat room, some of the salespeople were doing a really good job of guiding them to the right course and closing the sale, and others, well, not so much. This created a set of training data.
He and his grad student realized that if they took the transcripts, they would see that certain sets of words in certain dialogues led to success in sales and others didn't. And he fed that information into a machine learning algorithm, and it started identifying which patterns of phrases and answers were the most successful.
But what happened next was, I think, especially interesting. Instead of just trying to build a bot that would answer all the questions, they built a bot that would advise the human salespeople. So now when people go to the site, the bot kind of looks over the shoulder of the human, and when it sees some of those keywords it whispers into his or her ear: hey, you know, you might want to try this phrase, or you might want to point him to this particular course.
And that works well for the most common kinds of queries, but on the more obscure ones that the bot has never seen before, the human is much better. And this kind of partnership is a great example of an effective use of AI, and also of how you can turn existing data into a tagged data set that the supervised learning system benefits from.
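A minimal sketch of the transcript-mining step, in Python with invented chats and outcomes: a bag-of-words model is fit on past conversations labeled by whether the sale closed, and the highest-weight words become candidate prompts to whisper to the human seller.

```python
# All transcripts and outcomes here are invented; this is not Udacity's
# actual system, just the general pattern Brynjolfsson describes.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

chats = [
    "happy to help you pick the right course for your goals",
    "that course is listed on the website somewhere",
    "based on your background I would start with the intro course",
    "we cannot really advise on that",
]
closed = np.array([1, 0, 1, 0])  # 1 = sale closed (hypothetical outcomes)

vec = CountVectorizer()
X = vec.fit_transform(chats)
clf = LogisticRegression().fit(X, closed)

# Words with the largest positive weights are candidate "whisper" suggestions.
weights = clf.coef_[0]
vocab = np.array(vec.get_feature_names_out())
print(vocab[np.argsort(weights)[-5:]])
```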
SARAH GREEN CARMICHAEL: So how did these people feel about being coached by a bot?
ERIK BRYNJOLFSSON: Well, it's helped them close their sales, so it's made them more productive. Sebastian says it's about 50% more successful when they're using the bot. So I think it's been beneficial in helping them learn more rapidly than they would have if they had just kind of stumbled along.
Going forward, I think this is an example of how the bots are often good at the more routine, repetitive kinds of tasks; the machines can do the ones that they have lots of data for. And the humans tend to excel at the more unusual tasks. For most of us, I think that's a good trade-off. Most of us would prefer having more interesting and varied work lives rather than doing the same thing over and over.
SARAH GREEN CARMICHAEL: So, sales is a form of knowledge work, right, and you sort of gave an example there. One of the big challenges in that kind of work is that it's really hard to scale up one person's productivity; if you are a law firm, for example, and you want to serve more clients, you have to hire more lawyers. It sounds like AI could be one way to finally get around that conundrum.
ERIK BRYNJOLFSSON: Yeah, AI certainly can be a big force multiplier. It's a great way of taking some of your best, you know, lawyers or doctors and having them explain how they go about doing things and give examples of successes, and the machine can learn from those and replicate it, or be combined with people who are already doing the jobs and, in a way, coach them or handle some of the cases that are most common.
SARAH GREEN CARMICHAEL: So, is it just about being more productive, or did you see other examples of human-machine collaboration that tackled different types of business challenges?
ERIK BRYNJOLFSSON: Well, in some cases it's a matter of being more productive; in many cases, a matter of doing the job better than you could before. So there are systems now that can help read medical images and diagnose cancer quite well. The best ones often are still combined with humans, because the machines make different kinds of mistakes than the humans do. The machine often will create what are called false positives, where it thinks there's cancer but there's really not, and the humans are better at ruling those out. You know, maybe there was an eyelash on the image or something that was getting in the way.
And so, by having the machine first filter through all the images and say hey, here are the ones that look really troubling, and then having a human look at those ones and focus more closely on the ones that are problematic, you end up getting much better outcomes than if that person had to look at all the images herself or himself and maybe overlook some potentially troubling cases.
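That screen-then-review workflow reduces to a few lines. Here is a sketch with invented scores, where the threshold is set deliberately low so the machine rarely clears a true case and humans rule out the false positives among whatever gets flagged:

```python
# Invented model scores; a real system would produce these from a trained
# image classifier. The machine triages, the human makes the final call.
model_scores = {"img_001": 0.03, "img_002": 0.91, "img_003": 0.42, "img_004": 0.08}
THRESHOLD = 0.30  # low on purpose: cheaper to review extra images than to miss a cancer

flagged_for_human_review = [img for img, s in model_scores.items() if s >= THRESHOLD]
auto_cleared = [img for img, s in model_scores.items() if s < THRESHOLD]
print("radiologist reviews:", flagged_for_human_review)
print("cleared by model:", auto_cleared)
```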
SARAH GREEN CARMICHAEL: Why now? Because people predicted for a long time that AI was just around the corner, and it sounds like it's finally starting to happen and really make its way into businesses. Why are we seeing this finally start to happen right now?
ERIK BRYNJOLFSSON: Yes, that's a great question. It's really the combination of three forces that have come together. The first one is simply that we have much better computer power than we did before. So, Moore's Law, the doubling of computer power, is part of it. There are also specialized chips called GPUs and TPUs that are another tenfold or even a hundredfold faster than ordinary chips. As a result, training a system that might have taken a century or more if you'd done it with 1990s computers can be done in a few days today.
And so obviously that opens up a whole new set of possibilities that just wouldn't have been practical before. The second big force is the explosion of digital data. Data is the lifeblood of these systems; you need it to train them. And now we have so many more digital images, digital transcripts, and digital data from factory gauges keeping track of information, and all of that can be fed into these systems to train them.
And as I said earlier, they need lots and lots of examples. And now we have digital examples in a way we didn't previously, and with the Internet of Things you can imagine there's going to be a lot more digital data going forward. And last but not least, there have been some significant improvements in the algorithms; the men and women working in these fields have improved on the basic algorithms. Some of them were first developed literally 30 years ago, but they've now been tweaked and improved, and by having faster computers and more data you can learn more rapidly what works and what doesn't work. When you put these three things together, computer power, more data, and better algorithms, you get sometimes as much as a millionfold improvement on some applications, for instance recognizing pedestrians as they cross the street, which of course is really important for applications like self-driving cars.
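The millionfold figure is what you get when the three forces compound. A back-of-the-envelope illustration, where each factor is an assumption for illustration rather than a measurement:

```python
# Illustrative, assumed factors only; the point is that independent gains
# multiply, so modest-looking factors compound into the millionfold range.
hardware_gain = 1_000    # e.g. years of Moore's law plus GPU/TPU speedups
data_gain = 100          # far more labeled digital examples to train on
algorithm_gain = 10      # tweaks and improvements to decades-old algorithms
print(f"compound improvement: {hardware_gain * data_gain * algorithm_gain:,}x")  # 1,000,000x
```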
SARAH GREEN CARMICHAEL: If those are sort of the factors that are pushing us forward, what are some of the factors that might be inhibiting progress?
ERIK BRYNJOLFSSON: What's not holding us back is the technology; what is holding us back is the imagination of business executives to use these new tools in their businesses. You know, with every general-purpose technology, whether it's electricity or the internal combustion engine, the real power comes from thinking of new ways of organizing your factory, new ways of connecting to your customers, new business models. That's where the real value comes from. And one of the reasons we were so happy to write for Harvard Business Review was to reach out to people and help them be more creative about using these tools to change the way they do business. That's where the real value is.
SARAH GREEN CARMICHAEL: I feel like so much of the broader conversation about AI is, will this create jobs or destroy jobs? And I'm just wondering, is that a question that you get asked a lot, and are you sick of answering it?
ERIK BRYNJOLFSSON: Well, of course it gets asked a lot. And I'm not sick of answering it, because it's really important. I think the biggest challenge for our society over the next 10 years is going to be: how are we going to handle the economic implications of these new technologies? And you introduced me in the beginning as a cautious optimist, I think you said, and I think that's about right. I think that if we handle this well, this can and should be the best thing that ever happened to humanity.
But I don't think it's automatic. I'm cautious about that. It's entirely possible for us to not invest in the kind of education and retraining of people, to not adopt the kinds of new policies that encourage business formation and new business models even. Income distribution has to be rethought, and tax policy too: things like the earned income tax credit in the United States and similar wage subsidies in other countries.
ERIK BRYNJOLFSSON: We need to make a bunch of changes across the board at the policy level. Businesses need to rethink how they work. Individuals need to take personal responsibility for learning the new skills that are going to be needed going forward. If we do all those things, I'm pretty optimistic.
But I wouldn't want people to become complacent, because already over the past 10 years a lot of people have been left behind by the digital revolution that we've had so far. And looking forward, I'd say we ain't seen nothing yet. We have incredibly powerful technologies, especially in artificial intelligence, that are opening up new possibilities. But I want us to think about how we can use technology to create shared prosperity for the many, not just the few.
SARAH GREEN CARMICHAEL: Are there tasks or jobs that machine learning, in your opinion, can't do or won't do?
ERIK BRYNJOLFSSON: Oh, there are so many. Just to be totally clear, most things machine learning can't do. It's able to do a few narrow things really, really well, just like a calculator can do a few things really, really well. But humans are much more general, with a much broader set of skills, and the set of tasks that humans can do is being encroached on.
Machines are taking over, or teaming up with humans on, more and more tasks, but in particular, machines are not very good at very broad-scale creativity, you know. Being an entrepreneur or writing a novel or developing a new scientific theory or approach, those kinds of creativity are beyond what machines can do today, by and large.
Secondly, and perhaps with an even broader impact, there are interpersonal skills, connecting with humans. You know, we're wired to trust and care for and be interested in other humans in a way that we aren't with machines.
So, whether it's coaching or sales or negotiation or caring for people, persuading people, those are all areas where humans have an edge. And I think there will be an explosion of new jobs, whether it's for personal coaches or trainers or team-oriented activities. I would love to see more people learning those kinds of softer skills that machines are not good at. That's where there'll be a lot of jobs in the future.
SARAH GREEN CARMICHAEL: I was surprised to see in the article, though, that some of these AI programs are actually surprisingly good at recognizing human emotions. I was really startled by that.
ERIK BRYNJOLFSSON: I have to be careful. One of the main things I learned working with Andy and going to visit all these places is never say never; any time one of us said, oh, this will never happen, you know, we'd find out that someone was already working on it in a lab.
So my advice is to think in terms of relative strengths and relative weaknesses. Emotional intelligence, I still think, is a relative strength of humans, but there are particular narrow applications where machines are improving quite rapidly. Affectiva, a company here in Boston, has gotten very good at reading emotions. That's part of what you need to do to be a good coach or a caring person; it's not the whole picture, but it is one piece of the interpersonal skills that machines are helping with.
SARAH GREEN CARMICHAEL: What do you see as the biggest risks with AI?
ERIK BRYNJOLFSSON: I think there are a few. One of the big risks is that these machine learning algorithms can have implicit biases, and they can be very hard to detect or correct. If the training data is biased, has some kind of racial or ethnic or other biases in its data, then those can be perpetuated in the system. And so, we need to be very careful about how we train the systems and what data we give them.
And it's especially important because they don't have the kind of explicit rules that earlier waves of technology had. So, it's hard to even know. It's unlikely to have a rule that says, you know, don't give loans to black people or whatever, but it may implicitly have its thumb on the scale in one way or the other if the training data were biased.
SARAH GREEN CARMICHAEL: Right. Because it might notice, for instance, that, statistically speaking, black people got turned down more for loans, that kind of thing.
ERIK BRYNJOLFSSON: Yeah, if the people who had made those decisions before were biased and you use those decisions for the training data, that could end up creating a biased training set. And you know, maybe nobody explicitly says that they were biased, but it sort of shows up in other subtle ways, based on, you know, the zip code that someone's coming from or their last name or their first name or whatever. So those would be subtle things that you need to be careful of.
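Here is a minimal sketch, on entirely synthetic data, of the proxy effect he describes: the protected attribute is never shown to the model, but because zip code correlates with it and the historical decisions were biased, the model's approvals still split along group lines.

```python
# Entirely synthetic data, invented for illustration of proxy bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5_000
group = rng.integers(0, 2, n)                      # protected attribute (held out of training)
zip_code = group + rng.normal(0, 0.3, n)           # proxy feature: correlates with group
income = rng.normal(50, 10, n)                     # legitimate feature
past_approval = (income > 50).astype(int)
past_approval[group == 1] &= rng.random(n)[group == 1] > 0.5  # biased historical decisions

X = np.column_stack([zip_code, income])            # note: no 'group' column given to the model
clf = LogisticRegression().fit(X, past_approval)
pred = clf.predict(X)
print("approval rate, group 0:", pred[group == 0].mean())
print("approval rate, group 1:", pred[group == 1].mean())  # lower, despite 'group' never being used
```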
The other thing is what we touched on earlier: what's happening with income inequality and opportunity as the machines get better at many kinds of tasks, you know, driving a truck or handling a call center. The people who had been doing those jobs need to find new things to do. And often those new jobs won't pay as well if we aren't careful. So that could be a real income hit. Already we see growing income inequality.
We have to be aggressive about thinking how we can create broadly shared prosperity. One of the things we did at MIT is we launched something called the Inclusive Innovation Challenge, which recognizes and rewards organizations that are using technology to create shared prosperity; they're innovating in ways that do that. I'd love to see more and more entrepreneurs think in that way: not just how they can create concentrated wealth, but how they can create broadly shared prosperity.
SARAH GREEN CARMICHAEL: Elon Musk has been out there saying artificial intelligence could be an existential threat to human beings. Other people have talked about fears that the machines could take over and turn against us. How do you feel about those kinds of concerns?
ERIK BRYNJOLFSSON: Well, like I said earlier, you can never say never, and, you know, as machines keep getting more and more powerful, I can imagine them having enormous powers, especially as we delegate more of the operations of our critical infrastructure, our electricity and our water system and our air traffic control and even our military operations, to them. But the reason I didn't list it is that I don't see it as the most immediate risk right now. The technologies that are being rolled out right now have effects on bias and decision making, and effects on jobs and income, but by and large they don't pose those kinds of existential risks.
I think it's important that we have researchers working in those areas and thinking about them, but I wouldn't want to panic Congress or the people right now into doing something that would probably be counterproductive if we overreacted.
I think it's an area for research, but in terms of devoting billions of dollars of effort, I would put that towards education and retraining and handling bias: the things that are facing us right now and will be facing us for the next five and 10 years.
SARAH GREEN CARMICHAEL: What do you feel is the appropriate role of regulation as AI develops?
ERIK BRYNJOLFSSON: I think we need to be watchful, because there's the potential for AI to lead to more concentration of power and more concentration of wealth. The best antidote to that is competition.
And what we've seen in the tech industry for most of the past 10, 20, 30 years is that as one monopolist, whether it's IBM or Microsoft, gets a lot of power, another company comes along and knocks it off its perch. I remember teaching a class about 15 years ago where a speaker said, you know, Yahoo has search locked up; no one's ever going to displace Yahoo. So, you know, we need to be humble and realize that the giants of today face threats and could be overturned.
That said, if there is a stagnation, a loss of innovation, and these companies have a stranglehold on markets and maybe have other adverse effects in areas like privacy, then it would be right for government to step in. My instinct right now would be sort of watchful waiting: keeping an eye on these companies and doing what we can to foster innovation and competition as the best way to protect consumers.
SARAH GREEN CARMICHAEL: So, if all of this still sounds quite futuristic to the average manager, if they're kind of like: OK, you know, this is sort of way outside of what I'm working on in my role, what are the sorts of things that you'd advise people to keep in mind or think about?
ERIK BRYNJOLFSSON: Well, it starts with realizing this is not futuristic and way out there. There are lots of small and medium-sized companies that are learning how to apply this right now, whether it's, you know, sorting cucumbers more effectively (somebody wrote an application that did that), or helping with recommendations online. There's a company I'm advising called Infinite Analytics that is giving customers better recommendations about what products they should be choosing, or helping with, you know, credit decisions.
There are so many areas where you can apply these technologies right now. You can take courses, or have people in your organization take courses, at places like Udacity or fast.ai (my friend Jeremy Howard runs a great course in that area), and put it to work right away, starting with something small and simple.
But definitely don't think of this as futuristic. Don't be put off by the science fiction movies, whether, you know, the Terminator or other AI shows. That's not what's going on. It's a bunch of very specific practical applications that are completely feasible in 2017.
SARAH GREEN CARMICHAEL: Erik, thanks so much for talking with us today about all of this.
ERIK BRYNJOLFSSON: It's been a real pleasure.
SARAH GREEN CARMICHAEL: That's Erik Brynjolfsson. He's the director of the MIT Initiative on the Digital Economy. And he's the co-author, with Andrew McAfee, of the new HBR article "The Business of Artificial Intelligence."
You can read their HBR article, and also read about how Facebook uses AI and machine learning in almost everything you see, and you can watch a video (shot in my own kitchen!) about how IBM's Watson uses AI to create new recipes. That's all at hbr.org/AI.
Thanks for listening to the HBR IdeaCast. I'm Sarah Green Carmichael.
Google’s Newest AI is Turning Street View Images into Landscape Art – Futurism
Posted: at 12:16 pm
In Brief
Google engineers have created an artificial intelligence (AI) that is capable of turning Google Street View images into professional-quality artistic portraits. The AI chooses and crops the image, alters both light and coloration, and then applies an appropriate filter.
Google Art
Most of us are probably familiar with Google Street View, a feature of Google Maps that allows users to see actual images of the areas they're looking up. It's both a useful navigational feature and one that allows people to explore far-off regions just for fun. Engineers at Google are taking these images from Street View one step further with the help of artificial intelligence (AI).
Hui Feng is one of several software engineers who are using machine learning techniques to teach a neural network how to scan Street View in search of exceptionally beautiful images. This AI then, on its own, mimics the workflow of a professional photographer.
This AI system acts as an artist and photo editor, recognizing beauty and the specific aspects that make for a good photograph. Despite beauty being a subjective matter, the AI proved to be successful, creating professional-quality imagery from Street View images that the system itself located.
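The workflow described (choose a crop, alter the light and coloration, apply a filter) can be sketched with a standard imaging library. The crop box and enhancement factors below are invented stand-ins for the choices a learned model would make:

```python
# A sketch of the photo-editor steps using Pillow on a stand-in image;
# in the real system a trained model selects each adjustment.
from PIL import Image, ImageEnhance, ImageFilter

img = Image.new("RGB", (800, 600), (120, 160, 200))  # stand-in for a Street View frame

framed = img.crop((100, 50, 700, 500))                 # 1. choose a composition
relit = ImageEnhance.Brightness(framed).enhance(1.15)  # 2. alter the light
recolored = ImageEnhance.Color(relit).enhance(1.25)    # 3. adjust the coloration
final = recolored.filter(ImageFilter.SMOOTH)           # 4. apply a finishing filter
final.save("postcard.png")
```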
Google's many different AI programs have been exploring a wide variety of potential applications for the technology. From recent dabbling in online Go playing, to improving job hunting, and even to creating its own AI better than Google's engineers can, Google's AI has been at the forefront of its field.
But AI technologies are progressing faster and further than many have expected, so much so that some AIs, like the one mentioned here, are capable of creating art. So, while robots will never make humans completely obsolete in artistic endeavors, this step forward marks a new era of technology.
Graphcore’s AI chips now backed by Atomico, DeepMind’s Hassabis – TechCrunch
Posted: at 12:16 pm
Is AI chipmaker Graphcore out to eat Nvidia's lunch? Co-founder and CEO Nigel Toon laughs at that interview opener, perhaps because he sold his previous company to the chipmaker back in 2011.
"I'm sure Nvidia will be successful as well," he ventures. "They're already being very successful in this market... And being a viable competitor and standing alongside them, I think that would be a worthy aim for ourselves."
Toon also flags what he couches as an interesting absence in the competitive landscape vis-a-vis other major players that you'd expect to be there, e.g. Intel. (Though clearly Intel is spending to plug the gap.)
A recent report by analyst firm Gartner suggests AI technologies will be in almost every software product by 2020. The race for more powerful hardware engines to underpin the machine-learning software tsunami is, very clearly, on.
"We started on this journey rather earlier than many other companies," says Toon. "We're probably two years ahead, so we've definitely got an opportunity to be one of the first people out with a solution that is really designed for this application. And because we're ahead we've been able to get the excitement and interest from some of these key innovators who are giving us the right feedback."
Bristol, UK-based Graphcore has just closed a $30 million Series B round, led by Atomico, fast-following a $32M Series A in October 2016. It's building dedicated processing hardware plus a software framework for machine learning developers to accelerate building their own AI applications, with the stated aim of becoming the leader in the market for machine intelligence processors.
In a supporting statement, Atomico Partner Siraj Khaliq, who is joining the Graphcore board, talks up its potential as being to accelerate the pace of innovation itself. "Graphcore's first IPU delivers one to two orders of magnitude more performance over the latest industry offerings, making it possible to develop new models with far less time waiting around for algorithms to finish running," he adds.
Toon says the company saw a lot of investor interest after uncloaking at the time of its Series A last October, hence it decided to do an earlier than planned Series B. "That would allow us to scale the company more quickly, support more customers, and just grow more quickly," he tells TechCrunch. "And it still gives us the option to raise more money next year to then really accelerate that ramp after we've got our product out."
The new funding brings on board some new high-profile angel investors, including DeepMind co-founder Demis Hassabis and Uber chief scientist Zoubin Ghahramani. So you can hazard a pretty educated guess as to which tech giants Graphcore might be working closely with during the development phase of its AI processing system (albeit Toon is quick to emphasize that angels such as Hassabis are investing in a personal capacity).
"We can't really make any statements about what Google might be doing," he adds. "We haven't announced any customers yet, but we're obviously working with a number of leading players here, and we've got the support from these individuals, from which you can infer there's quite a lot of interest in what we're doing."
Other angels joining the Series B include OpenAI's Greg Brockman, Ilya Sutskever, Pieter Abbeel and Scott Gray. While existing Graphcore investors Amadeus Capital Partners, Robert Bosch Venture Capital, C4 Ventures, Dell Technologies Capital, Draper Esprit, Foundation Capital, Pitango and Samsung Catalyst Fund also participated in the round.
Commenting in a statement, Uber's Ghahramani argues that current processing hardware is holding back the development of alternative machine learning approaches that he suggests could contribute to radical leaps forward in machine intelligence.
"Deep neural networks have allowed us to make massive progress over the last few years, but there are also many other machine learning approaches," he says. "A new type of hardware that can support and combine alternative techniques, together with deep neural networks, will have a massive impact."
Graphcore has raised around $60M to date, with Toon saying its now 60-strong team has been working in earnest on the business for a full three years, though the company's origins stretch back as far as 2013.
Co-founders Nigel Toon (CEO, left) and Simon Knowles (CTO, right)
In 2011 the co-founders sold their previous company, Icera, which did baseband processing for 2G, 3G and 4G cellular technology for mobile comms, to Nvidia. "After selling that company we started thinking about this problem and this opportunity. We started talking to some of the leading innovators in the space and started to put a team together around about 2013," he explains.
Graphcore is building what it calls an IPU, aka an intelligence processing unit: dedicated processing hardware designed for machine learning tasks, vs the serendipity of repurposed GPUs, which have been helping to drive the AI boom thus far, or indeed the vast clusters of CPUs needed (but not well suited) for such intensive processing.
It's also building graph-framework software for interfacing with the hardware, called Poplar, designed to mesh with different machine learning frameworks to enable developers to easily tap into a system that it claims will increase the performance of both machine learning training and inference by 10x to 100x vs the fastest systems today.
Toon says it's hoping to get the IPU into the hands of early access customers by the end of the year. "That will be in a system form," he adds.
"Although at the heart of what we're doing is we're building a processor, we're building our own chip, leading-edge process, 16 nanometer, we're actually going to deliver that as a system solution. So we'll deliver PCI Express cards and we'll actually put that into a chassis so that you can put clusters of these IPUs all working together, to make it easy for people to use."
"Through next year we'll be rolling out to a broader number of customers. And hoping to get our technology into some of the larger cloud environments as well, so it's available to a broad number of developers."
Discussing the difference between the design of its IPU vs the GPUs that are also being used to power machine learning, he sums it up thus: "GPUs are kind of rigid, locked together, everything doing the same thing all at the same time, whereas we have thousands of processors all doing separate things, all working together across the machine learning task."
"The challenge that [processing via IPUs] throws up is to actually get those processors to work together, to be able to share the information that they need to share between them, to schedule the exchange of information between the processors, and also to create a software environment that's easy for people to program. That's really where the complexity lies and that's really what we have set out to solve."
"I think we've got some fairly elegant solutions to those problems," he adds. "And that's really what's causing the interest around what we're doing."
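The contrast Toon draws (lockstep lanes versus many independent workers exchanging results at scheduled points) can be sketched generically. The Python below illustrates that programming model only; it is not Graphcore's actual Poplar API:

```python
# Generic MIMD-style sketch: each worker runs its own task (unlike SIMD
# lanes executing the same instruction) and shares its result at a
# scheduled exchange point, which is where the coordination complexity lives.
from concurrent.futures import ThreadPoolExecutor
import queue

mailbox = queue.Queue()  # stand-in for the scheduled exchange between processors

def worker(worker_id, task):
    partial = task()                   # each worker computes something different
    mailbox.put((worker_id, partial))  # then shares it with the others

tasks = [lambda: sum(range(100)), lambda: max(range(50)), lambda: len("graph")]
with ThreadPoolExecutor(max_workers=3) as pool:
    for i, t in enumerate(tasks):
        pool.submit(worker, i, t)

results = dict(mailbox.get() for _ in tasks)  # combine the exchanged results
print(results)
```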
Graphcore's team is aiming for a completely seamless interface between its hardware, via its graph framework, and widely used high-level machine learning frameworks, including TensorFlow, Caffe2, MXNet and PyTorch.
"You use the same environments, you write exactly the same model, and you feed it through what we call Poplar [a C++ framework]," he notes. "In most cases that will be completely seamless."
Although he confirms that developers working further outside the current AI mainstream, say by trying to create new neural network structures, or by working with other machine learning techniques such as decision trees or Markov random fields, may need to make some manual modifications to make use of its IPUs.
"In those cases there might be some primitives or some library elements that they need to modify," he notes. "The libraries we provide are all open, so they can just modify something, change it for their own purposes."
The apparently insatiable demand for machine learning within the tech industry is being driven, at least in part, by a major shift in the type of data that needs to be understood: from text to pictures and video, says Toon. Which means there are increasing numbers of companies that really need machine learning. "It's the only way they can get their head around and understand what this sort of unstructured data is that's sitting on their website," he argues.
Beyond that, he points to various emerging technologies and complex scientific challenges its hoped could also benefit from accelerated development of AI from autonomous cars to drug discovery with better medical outcomes.
A lot of cancer drugs are very invasive and have terrible side effects, so theres all kinds of areas where this technology can have a real impact, he suggests. People look at this and think its going to take 20 years [for AI-powered technologies to work] but if youve got the right hardware available [development could be sped up].
Look at how quickly Google Translate has got better using machine learning and that same acceleration I think can apply to some of these very interesting and important areas as well.
In a supporting statement, DeepMinds Hassabis goes to far as to suggest that dedicated AI processing hardware might also offer a leg up to the sci-fi holy grail goal of developing artificial general intelligence (vs the more narrow AIs that comprise the current cutting edge).
"Building systems capable of general artificial intelligence means developing algorithms that can learn from raw data and generalize this learning across a wide range of tasks. This requires a lot of processing power, and the innovative architecture underpinning Graphcore's processors holds a huge amount of promise," he says.
Read more from the original source:
Graphcore's AI chips now backed by Atomico, DeepMind's Hassabis - TechCrunch
Posted in Ai
Comments Off on Graphcore’s AI chips now backed by Atomico, DeepMind’s Hassabis – TechCrunch
Have We Reached Peak AI Hysteria? – Niskanen Center (press release) (blog)
Posted: at 12:16 pm
July 21, 2017 by Ryan Hagemann
At the recent annual meeting of the National Governors Association, Elon Musk spoke with his usual cavalier optimism on the future of technology and innovation. From solar power to our place among the stars, humanity's future looks pretty bright, according to Musk. But he was particularly dour on one emerging technology that supposedly poses an existential threat to humankind: artificial intelligence.
Musk called for strict, preemptive regulations on developments in AI, referencing numerous hypothetical doomsaying scenarios that might emerge if we go too far too fast. It's not the first time Musk has said that AI could portend a Terminator-style future, but it does seem to be the first time he's called for such stringent controls on the technology. And he's not alone.
In the preface to his book Superintelligence, Nick Bostrom contends that developing AI is "quite possibly the most important and most daunting challenge humanity has ever faced. And, whether we succeed or fail, it is probably the last challenge we will ever face." Even Stephen Hawking has jumped on the panic wagon.
These concerns aren't uniquely held by innovators, scientists, and academics. A Morning Consult poll found that a significant majority of Americans supported both domestic and international regulations on AI.
All of this suggests that we are in the midst of a full-blown AI techno-panic. Fear of mass unemployment from automation and public-safety concerns over autonomous vehicles have only exacerbated the growing tensions between man and machine.
Luckily, if history is any guide, the height of this hysteria means we're probably on the cusp of a period of deflating dread. New emerging technologies often stoke frenzied fears over worst-case scenarios, at least at the beginning. These concerns eventually rise to the point of peak alarm, followed by a gradual hollowing-out of panic. Eventually, the technologies that were once seen as harbingers of the end times become mundane, common, and indispensable parts of our daily lives. Look no further than the early days of the automobile, RFID chips, and the Internet; so too will it be with AI.
Of course, detractors will argue that we should hedge against worst-possible outcomes, especially if the costs are potentially civilization-ending. After all, if there's something the government could do to minimize the costs while maximizing the benefits of AI, then policymakers should be all over it. So what's the solution?
Gov. Doug Ducey (R-AZ) asked that very question: "You've given some of these examples of how AI can be an existential threat, but I still don't understand, as policymakers, what type of regulations, beyond 'slow down' (which typically policymakers don't get in front of entrepreneurs or innovators), should be enacted." Musk's response? First, government needs to gain insight by standing up an agency to make sure the situation is understood. Then put in place regulations to protect public safety. That's it. Well, not quite.
The government has, in fact, already taken a stab at whether such an approach would be an ideal treatment of this technology. Last year, the Obama administration's Office of Science and Technology Policy released a report on the future of AI, derived from hundreds of comments from industry, civil society, technical experts, academics, and researchers.
While the report recognized the need for government to be privy to ongoing developments, its recommendations were largely benign, and it certainly didn't call for preemptive bans and regulatory approvals for AI. In fact, it concluded that it was "very unlikely that machines will exhibit broadly-applicable intelligence comparable to or exceeding that of humans in the next 20 years."
In short, put off those end-of-the-world parties, because AI isn't going to snuff out civilization any time soon. Instead, embracing preemptive regulations could just smother domestic innovation in this field.
Despite Musk's claims, preemptive rules won't halt development; firms will simply outsource research and development elsewhere. Global innovation arbitrage is a very real phenomenon in an age of abundant interconnectivity and capital that can move like quicksilver across national boundaries. AI research is even less constrained by those artificial barriers than most technologies, especially in an era of cloud computing and diminishing costs of computer processing, to say nothing of the rise of quantum computing.
Musk's solution to AI is uncharacteristically underwhelming. New federal agencies that impose precautionary regulations on AI aren't going to chart a better course to the future, any more than preemptive regulations for Google would have paved the way to our current age of information abundance.
Musk of all people should know the future is always rife with uncertainty; after all, he helps construct it with each new revolutionary undertaking. Imagine if there had been just a few additional regulatory barriers for SpaceX or Tesla to overcome. Would the world have been a better place if the public good demanded even more stringent regulations for commercial space launch or autopilot features? That's unlikely, and, notwithstanding Musk's apprehensions, the same is probably true for AI.
Excerpt from:
Have We Reached Peak AI Hysteria? - Niskanen Center (press release) (blog)
Posted in Ai
Comments Off on Have We Reached Peak AI Hysteria? – Niskanen Center (press release) (blog)
This famous roboticist doesn’t think Elon Musk understands AI – TechCrunch
Posted: July 20, 2017 at 3:13 am
Earlier this week, at the campus of MIT, TechCrunch had the chance to sit down with famed roboticist Rodney Brooks, the founding director of MIT's Computer Science and Artificial Intelligence Lab and the cofounder of both iRobot and Rethink Robotics.
Brooks had a lot to say about AI, including his overarching concern that many people, renowned AI alarmist Elon Musk among them, get it very wrong, in his view.
Brooks also warned that despite investors' fascination with robotics right now, many VCs may underestimate how long these companies will take to build, a potential problem for founders down the road.
Our chat, edited for length, follows.
TC: You started iRobot when there was no venture funding, back in 1990. You started Rethink in 2008, when there was funding but not a lot of interest in robotics. Now, there are both, which seemingly makes it a better time to start a robotics company. Is it?
RB: A lot of Silicon Valley and Boston VCs sort of fall over themselves about how they're funding robotics [now], so you [as a founder] can get heard.
Despite [investors who say there is plenty of later-stage funding for robotics], I think it's hard for VCs to understand how long these far-out robotics systems will really take to get to where they can get a return on their investment, and I think that'll be crunch time for some founders.
TC: There's also more competition and more patents that have been awarded, and a handful of companies have most of the world's data. Does that make them insurmountable?
RB: Someone starting a robotics company today should be thinking that maybe at some point, in order to grow, they're going to have to get bought by a large company that has the deep pockets to push it further. The ecosystem would still use the VC funding to prune out the good ideas from the bad ideas, but going all the way to an IPO may be hard.
Second thing: on this data, yes, machine learning is fantastic, it can do a lot, but there are a lot of things that need to be solved that are not just purely software; some of the big innovations [right now] have been new sorts of electric motors and control systems and gearboxes.
TC: You're writing a book on AI, so I have to ask you: Elon Musk said again this past weekend that AI is an existential threat. Agree? Disagree?
RB: There are quite a few people out there who've said that AI is an existential threat: Stephen Hawking and Astronomer Royal Martin Rees, who has written a book about it. They share a common thread, in that they don't work in AI themselves. Those of us who do work in AI know how hard it is to get anything to actually work through to product level.
Here's the reason that people, including Elon, make this mistake. When we see a person performing a task very well, we understand the competence [involved]. And I think they apply the same model to machine learning. [But they shouldn't.] When people saw DeepMind's AlphaGo beat the Korean champion and then beat the Chinese Go champion, they thought, "Oh my god, this machine is so smart, it can do just about anything!" But I was at DeepMind in London about three weeks ago and [they admitted that things could easily have gone very wrong].
TC: But Musk's point isn't that it's smart but that it's going to be smart, and we need to regulate it now.
RB: So you're going to regulate now. If you're going to have a regulation now, either it applies to something and changes something in the world, or it doesn't apply to anything. If it doesn't apply to anything, what the hell do you have the regulation for? Tell me, what behavior do you want to change, Elon? By the way, let's talk about regulation on self-driving Teslas, because that's a real issue.
TC: You've raised interesting points about this in your writings, noting that the biggest worry about autonomous cars, whether they'll have to choose between driving into a gaggle of baby strollers versus a group of elderly women, is absurd, considering how rarely that particular scenario actually arises.
RB: There are some ethical questions that I think will slow down the adoption of cars. I live just a few blocks [from MIT]. And three times in the last three weeks, I have followed every sign and found myself at a point where I can either stop and wait for six hours, or drive the wrong way down a one-way street. Should autonomous cars be able to decide to drive the wrong way down a one-way street if they're stuck? What if a 14-year-old riding in an Uber tries to override it, telling it to go down that one-way street? Should a 14-year-old be allowed to drive the car by voice? There will be a whole set of regulations that we're going to have to have, that people haven't even begun to think about, to address very practical issues.
TC: You obviously think robots are very complementary to humans, though there will be job displacement.
RB: Yes, there's no doubt, and it will be difficult for the people who are being displaced. I think the role in factories, for instance, will shift from people doing manual work to people supervising. We have a tradition in manufacturing equipment of horrible user interfaces; it's hard, and you have to take courses, whereas in consumer electronics [as with smartphones], we have made the machines we use teach the people how to use them. And I do think we need to change our attitude in industrial equipment and other sorts of equipment, to make the machines teach the people how to use them.
TC: But do we run the risk of not taking this displacement seriously enough? Isn't the reason we have our current administration because we aren't thinking enough about the people who will be impacted, particularly in the middle of the country?
RB: There's a sign that maybe I should have seen and didn't. When I started Rethink Robotics, it was called Heartland Robotics. I'd just come off six years of being an adviser to the CEO of John Deere; I'd visited every John Deere factory. I could see the aging population. I could see they couldn't get workers to replace the aging population. So I started Heartland Robotics to build robotics to help the heartland.
It's no longer called Heartland Robotics because I started to get comments like, "Why didn't you just come out and call it Bible Belt Robotics?" The people in the Midwest thought we were making fun of them. In retrospect, I should have thought about that a little more deeply.
TC: If you hadn't started Rethink, what else would you want to be focused on right now?
RB: I'm a robotics guy, so every problem I think I can solve has a robotics solution. But what are the sorts of things that are important to humankind that the current model, of either large companies investing or VCs investing, isn't going to solve? For instance: plastics in the ocean. It's getting worse; it's contaminating our food chain. But it's the problem of the commons. Who is going to fund a startup company to get rid of plastics in the ocean? Who's going to fund that, because who's going to [provide a return for those investors] down the line?
So I'm more interested in finding places where robotics can help the world but there's no way currently of getting the research or the applications funded.
TC: You're thought of as the father of modern robotics. Do you feel like you have to be out there, evangelizing on behalf of robotics and roboticists, so people understand the benefits, rather than focus on potential dangers?
RB: It's why I'm right now writing a book on AI and robotics and the future, because people are getting too scared about the wrong things and not thinking enough about what the real implications will be.
Go here to read the rest:
This famous roboticist doesn't think Elon Musk understands AI - TechCrunch
Posted in Ai
Comments Off on This famous roboticist doesn’t think Elon Musk understands AI – TechCrunch
AI data-monopoly risks to be probed by UK parliamentarians – TechCrunch
Posted: at 3:13 am
The UK's upper house of parliament is asking for contributions to an enquiry into the socioeconomic and ethical impacts of artificial intelligence technology.
The House of Lords committee has set out a range of questions it will consider as part of the enquiry.
The committee says it is looking for "pragmatic solutions to the issues presented, and questions raised by the development and use of artificial intelligence in the present and the future."
Commenting in a statement, Lord Clement-Jones, chairman of the Select Committee on Artificial Intelligence, said: "This inquiry comes at a time when artificial intelligence is increasingly seizing the attention of industry, policymakers and the general public. The Committee wants to use this inquiry to understand what opportunities exist for society in the development and use of artificial intelligence, as well as what risks there might be."
"We are looking to be pragmatic in our approach, and want to make sure our recommendations to government and others will be practical and sensible. There are significant questions to address relevant to both the present and the future, and we want to help inform the answers to them. To do this, we need the help of the widest range of people and organisations."
"If you are interested in artificial intelligence and any of its aspects, we want to hear from you. If you are interested in public policy, we want to hear from you. If you are interested in any of the issues raised by our call for evidence, we want to hear from you," he added.
The committee's call for evidence can be found here. Written submissions can be made via the webform on the committee's webpage.
The deadline for submissions to the enquiry is September 6, 2017.
Concern over the societal impacts of AI has been rising up the political agenda in recent times, with another committee of UK MPs warning last fall that the government needs to take proactive steps to minimise bias being accidentally built into AI systems, and to ensure transparency so that autonomous decisions can be audited and systems vetted, to ensure AI tech is operating as intended and that unwanted, or unpredictable, behaviours are not produced.
Another issue that we've flagged here on TechCrunch is the risk of valuable publicly funded data-sets effectively being asset-stripped by tech giants hungry for data to feed and foster commercial AI models.
Since 2015, for example, Google-owned DeepMind has been forging a series of data-sharing partnerships with National Health Service Trusts in the UK, which has provided it with access to millions of citizens' medical information. Some of these partnerships explicitly involve AI; in other cases it has started by building clinical task management apps, yet applying AI to the same health data-sets is a stated, near-term ambition.
It also recently emerged that DeepMind is not charging NHS Trusts for the app development and research work it's doing with them; rather, its price appears to be access to what are clearly highly sensitive (and publicly funded) data-sets.
This is concerning, as there are clearly only a handful of companies with deep enough pockets to effectively buy access to highly sensitive, publicly funded data-sets, i.e. by offering five years of free work in exchange for access, then using that data to develop a new generation of AI-powered products. A small startup cannot hope to compete on the same terms as the Alphabet-Google behemoth.
The risk of data-based monopolies and winner-takes-all economics from big tech's big data push to garner AI advantage should be loud and clear. As should the pressing need for public debate on how best to regulate this emerging sector, so that future wealth and any benefits derived from the power of AI technologies can be widely distributed, rather than simply locking in platform power.
In another twist pertaining to DeepMind Health's activity in the UK, the country's data protection watchdog ruled earlier this month that the company's first data-sharing arrangement with an NHS Trust broke UK privacy law. Patients' consent had not been sought nor obtained for the sharing of some 1.6 million medical records for the purpose of co-developing a clinical task management app to provide alerts of the risk of a patient developing a kidney condition.
The Royal Free NHS Trust now has three months to change how it works with DeepMind to bring the arrangement into compliance with UK data protection law.
In that instance the app in question does not involve DeepMind applying any AI. However, in January 2016, the company and the same Trust agreed on wider ambitions to apply AI to medical data-sets within five years. So the NHS app development freebies that DeepMind Health is engaged in now are clearly paving the way for a broad AI push down the line.
Commenting on the Lords enquiry, Sam Smith, coordinator of health data privacy group medConfidential, an early critic of how DeepMind was being handed NHS patient data, told us: "This inquiry is important, especially given the unlawful behaviour we've seen from DeepMind's misuse of NHS data. AI is slightly different, but the rules still apply, and this expert scrutiny in the public domain will move the debate forward."
Link:
AI data-monopoly risks to be probed by UK parliamentarians - TechCrunch
Posted in Ai
Comments Off on AI data-monopoly risks to be probed by UK parliamentarians – TechCrunch
Think Tank: Is AI the Future of E-commerce Fraud Prevention? – WWD
Posted: at 3:13 am
There's a lot of debate about what artificial intelligence really means, and how we should feel about it. Will it transform our world for the better? Will the machines take over? Will it simply make processes we already perform faster and smoother? As Gartner says in "A Framework for Applying AI in the Enterprise": "The artificial intelligence acronym AI might more appropriately stand for amazing innovations that do what we thought technology couldn't do."
One way and another, we're talking about smart machines: machines that are trained on existing, historical data, and use that to make accurate deductions or predictions about examples with which they're presented. The applications are wide-ranging, from medicine to retail to self-driving cars and beyond.
For e-commerce, AI means the ability to deliver capabilities that simply were not possible before. There are two main directions in which this expresses itself:
1) Uncovering trends and audiences: A well-trained e-commerce AI can identify trends in buyer behavior, or interest in new products or experiences, and adapt quickly.
2) Personalization: The experience can be tailored to each customer in ways that were not an option when companies had to configure and design the experience for everyone at once (or maybe maintain a few versions based on geography). Customers can be offered the information and products they want, when they want them, in the ways best suited to them; a toy sketch of this idea follows the list.
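As a deliberately simplified illustration of the personalization point, the sketch below ranks catalog items for one shopper by cosine similarity to the shopper's purchase history. Every product name, feature vector and weight in it is invented for the example; a production system would use far richer signals.

```python
# Toy per-customer personalization: rank catalog items by cosine
# similarity to the average feature vector of a shopper's history.
import math

# Item feature vectors (e.g. style, formality, athleticism),
# invented for the example.
CATALOG = {
    "leather_bag":   [0.9, 0.7, 0.1],
    "running_shoes": [0.1, 0.4, 0.9],
    "silk_scarf":    [0.8, 0.9, 0.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def personalize(history):
    # Average the shopper's past purchases into a "taste" vector...
    taste = [sum(col) / len(history) for col in zip(*history)]
    # ...then rank the catalog against it, best match first.
    return sorted(CATALOG, key=lambda item: cosine(CATALOG[item], taste),
                  reverse=True)

history = [[0.8, 0.95, 0.0]]   # one prior purchase, scarf-like
print(personalize(history))    # silk_scarf ranks first
```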
Why I've Come to Love AI
As someone who travels a lot, I often have a fairly complex customer story when I shop online. I might be on a work trip to China, using a proxy to shop at a favorite U.S. store with my credit card, which has a New York billing address, sending something to an office in San Francisco to pick up on my next stop. There's a good chance I'll be on a mobile device, and since I like to explore new things, I'm often buying something of a kind I've never bought before.
All of this makes me unpopular with e-commerce fraud prevention systems. I've lost count of the number of times I've been rejected, or delayed for days while my order is painstakingly reviewed. Sometimes I've moved on by the time the package finally arrives at the place I'd had it sent.
The thing is, I get it. I was a fraud prevention analyst myself, back in the time before AI was an option. I know exactly how hard these transactions are to get right, from the human perspective. I know how long it can take to review a transaction, and that as an analyst the tendency is always to play it safe, even if that means sending a good customer away.
AI isn't a magic tool, but properly leveraging AI can enable retailers to eat the cake (driving their sales upward by creating frictionless, speedy buying experiences for consumers) and have it, too (be completely protected against online payment fraud).
The 3 Unmatched Advantages of AI-based Fraud Protection Systems
Scale: An AI system can look at 6,000 data points in every transaction and match them against billions of other transactions to look for patterns, discrepancies, and simple coincidences of events, all in a fraction of a second. This means that all fraud decisions can happen 100 percent in real time, regardless of how much traffic the site is receiving, or whether the fraud team is down with the flu.
Accuracy: In the last year, a well-built and well-trained fraud protection AI has proven repeatedly that it outperforms even the best human reviewers in accuracy. For retailers, the reduction in false declines (good customers mistakenly rejected as fraudsters) means more sales and happier consumers, and the reduction in fraud chargebacks means lower costs and lower risk. Beyond that, it enables new business models that were previously considered too risky, like the increasingly popular try-and-buy model.
Adaptivity: In fraud prevention, one of the great challenges is the speed of learning necessary to deal with new fraudulent modi operandi. If a fraudster finds a new technique that works, it will spread like wildfire, and hundreds of fraudsters will attack thousands of retailers at once. An AI-based solution is the only realistic way for retailers to fight fraud together in this highly dynamic environment, combining their efforts and sharing data in a centralized way to prevent fraudsters from abusing one retailer after another. In fact, AI has the potential to actually reverse the asymmetry and push the fraudsters back: from the criminal point of view, if a new method to defraud is blocked almost immediately after it is first conceived and tried out, it isn't worth investing in. A schematic sketch of these three properties in code follows.
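Forter's internals aren't described in this piece, so the following is only a schematic Python sketch of the properties above: a classifier trained on historical transactions scores each new one in real time, and an incremental update (scikit-learn's partial_fit) stands in for rapid adaptation to newly confirmed fraud. The feature set, the synthetic fraud rule and the decision threshold are all assumptions made for illustration.

```python
# Schematic fraud-scoring loop: score each transaction in real time,
# then fold confirmed outcomes back into the model incrementally.
# Requires scikit-learn >= 1.1 (for the "log_loss" loss name).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(seed=0)

# Historical transactions: each row is a tiny, made-up feature vector,
# e.g. [normalized order value, geo-mismatch score, account age]; 1 = fraud.
X_hist = rng.random((1000, 3))
y_hist = (X_hist[:, 1] > 0.8).astype(int)  # synthetic "fraud" rule

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_hist, y_hist, classes=[0, 1])

def decide(transaction, threshold=0.5):
    # Real-time decision: one feature vector in, approve/decline out.
    p_fraud = model.predict_proba([transaction])[0, 1]
    return ("decline" if p_fraud > threshold else "approve"), p_fraud

decision, score = decide([0.6, 0.9, 0.1])  # a suspicious-looking order
print(decision, round(score, 3))

# Adaptivity: when an outcome is confirmed (e.g. a chargeback arrives),
# fold it back into the model without retraining from scratch.
model.partial_fit(np.array([[0.6, 0.9, 0.1]]), [1])
```

In a real deployment the incremental updates would flow from many retailers into one centrally maintained model, which is the pooling effect the adaptivity point describes.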
AI is the future of e-commerce fraud prevention. It brings scale, accuracy and adaptivity to improve customer experience, block fraud and increase sales. Some retailers have already started leveraging AI, and they're gaining a competitive advantage in this highly competitive field. Better fraud prevention is about to become standard. No site can afford to get left behind.
Michael Reitblat is chief executive officer of Forter.
For More Business News From WWD, See:
Amazon, Wal-Mart and Apple Top List of Biggest E-commerce Retailers
Consumer Preferences Reshaping Retail Landscape
Supima Design Competition Set for Sept. 7 at Pier 59 Studios
Excerpt from:
Think Tank: Is AI the Future of E-commerce Fraud Prevention? - WWD
Posted in Ai
Comments Off on Think Tank: Is AI the Future of E-commerce Fraud Prevention? – WWD