The Prometheus League
Category Archives: Ai
Report: Why the big challenges in AI aren’t close to being solved – TechRepublic
Posted: February 24, 2017 at 6:26 pm
Image: iStockphoto/ktsimage
As tech companies continue to dump mountains of cash into artificial intelligence (AI) development, the technology promises to greatly improve our digital lives. However, the AI ecosystem still has major problems to solve before it can advance, a new report said.
The report, released Friday by Edison Investment Research, said that AI has the potential to be a major differentiator but that it is still in the early stages of its development. Most of what is currently referred to as AI, the report claims, is "simply advanced statistics."
SEE: Understanding the differences between AI, machine learning, and deep learning
According to the Edison Investment Research report, there are three goals that must be met for AI to move out of its infancy.
The company that performs the best in solving these problems is likely to move ahead of its competitors, the report noted.
For most companies, the initial investment in AI comes in the form of a digital assistant or chatbot. These tools are often offered free of charge, or folded into other core products, in order to generate and collect the data needed to strengthen the AI behind them. Digital assistants are "a good first yardstick of each ecosystem's competence in AI," the report said.
AI is built on data, as is another product many people use every day: search engines. As such, it makes sense that companies like Google, Baidu, and Russia's Yandex are growing leaders in the AI space due to their focus on data-powered search. Behind these leaders, companies like Microsoft, Apple, and Amazon are also investing heavily in their own AI efforts.
"Both Microsoft and Amazon have scope to earn a return on AI in their businesses that are not part of the ecosystem," the report said. "Apple appears to have voluntarily hobbled its AI development with differential privacy."
Facebook, however, was ranked much lower in the report. Edison researchers even called it a "laggard with one of the weakest positions in AI globally." The core issues that Facebook is facing have to do with its inability to properly leverage automation, and its late start in the market. Additionally, Facebook's disastrous attempt to automate the removal of fake news further demonstrated how weak its AI was.
"The net result is that without a human element, almost all of Facebook's services that depend on AI tend to fall over pretty quickly," the report said.
At this point, many of the tech players investigating AI solutions are struggling to make sense of their own data, the report noted. However, with investment ramping up in the space and competition increasing, the AI market is ripe for development.
"The big ecosystems are all very well-funded and so both their in-house R&D and their M&A activity is likely to increase in 2017," the report said. "AI is also likely to be the buzzword in many of the trade shows in the coming 12 months and for once, it will be more than just hype."
Originally posted here:
Report: Why the big challenges in AI aren't close to being solved - TechRepublic
Posted in Ai
Comments Off on Report: Why the big challenges in AI aren’t close to being solved – TechRepublic
‘Swarm AI’ predicts winners for the 2017 Academy Awards – TechRepublic
Posted: at 6:26 pm
Image: LimaEs, Getty Images/iStockphoto
Wondering who will win the 2017 Oscars? Instead of turning to industry experts, film critics, or polls, you can try something else this year: Artificial intelligence.
A startup called Unanimous A.I. has been making predictions (like who will win the Super Bowl, March Madness, US presidential debates, or the Kentucky Derby) for the last two years. It uses a software platform called UNU to assemble people at their computers, who make a real-time prediction together.
UNU's algorithm is built to harness the concept of "swarm" intelligence: the power of a group to make an intelligent, collective decision. It's how flocks of birds or bees decide where to travel for the winter, for instance, a decision that no single entity could make on its own. The decisions are made quickly, in under a minute each.
When UNU first predicted the Oscars in 2015, it took a group of non-experts to guess the Academy Award winners, and the results were better than those from FiveThirtyEight, The New York Times, and a slew of other experts. When it predicted the 2016 Oscars last year, the platform achieved 76% accuracy, outperforming Rolling Stone and the LA Times.
This week, it met the challenge again, assembling a group of 50 movie fans to make real-time predictions.
The method produces answers that are better than any individual selection, and it's not a simple average. Each user on the platform has a virtual "puck" that they can drag toward the answer they choose, like a digital Ouija board. Because users can see the other picks, they have the opportunity to change their minds in the middle of a question, and each member of the group influences the others this way. If the group decision is heading toward a selection that a user did not originally pick, there's still an opportunity to advocate for a different choice.
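As a concrete illustration of that convergence dynamic, here is a minimal sketch in Python. It is not Unanimous A.I.'s UNU code; the simulated voters, the "sway" factor, and the movie options are all invented for the example, and it only mimics the basic idea of agents drifting toward the option currently attracting the most support until the group settles:

import random

def swarm_decision(options, preferences, rounds=50, sway=0.3, seed=0):
    """preferences: one preferred option index per simulated participant."""
    rng = random.Random(seed)
    prefs = list(preferences)
    for _ in range(rounds):
        counts = [prefs.count(i) for i in range(len(options))]
        leader = counts.index(max(counts))
        share = counts[leader] / len(prefs)
        # Agents not already backing the current leader may switch toward it,
        # with a probability that grows with the leader's share (social pull).
        for a, p in enumerate(prefs):
            if p != leader and rng.random() < sway * share:
                prefs[a] = leader
        if prefs.count(leader) == len(prefs):
            break
    counts = [prefs.count(i) for i in range(len(options))]
    return options[counts.index(max(counts))], counts

options = ["La La Land", "Moonlight", "Arrival"]
prefs = [0] * 28 + [1] * 15 + [2] * 7   # 50 hypothetical movie fans
print(swarm_decision(options, prefs))

In a run like this the minority voters gradually defect to the front-runner, which is the flavor of behavior the article attributes to the real swarm; the actual platform, of course, works with live humans and continuous puck positions rather than a coin-flip switch.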
The reason polls, surveys, prediction markets, and expert opinions are different from the swarm? In all of the previous methods, decisions are made individually, sequentially. In a swarm, the decision is made simultaneously.
SEE: How 'artificial swarm intelligence' uses people to make better predictions than experts
Unanimous A.I. CEO Louis Rosenberg previously told TechRepublic that most people in the swarms have not seen all of the movies. Still, the swarm is successful because its members "fill in each other's gaps in knowledge."
Here are Unanimous A.I.'s predictions for the winners of the major awards in the 2017 Academy Awards (click the hyperlinks to see the swarms in action):
Best Picture: La La Land
Best Actress in a Leading Role: Emma Stone (La La Land)
Best Actor in a Leading Role: Denzel Washington (Fences)
Best Director: Damien Chazelle (La La Land)
Best Actress in a Supporting Role: Viola Davis (Fences)
Best Actor in a Supporting Role: Mahershala Ali (Moonlight)
Best Foreign Language Film: The Salesman
Most of the predictions are in line with industry experts and polls, which show La La Land to be the favorite. But there are three categories here to watch, in which the swarm was not confident in its predictions; it was conflicted between two options. These categories are Best Actor, Best Original Screenplay, and Best Foreign Film.
For instance, many experts predict that Casey Affleck will win for Best Actor, but the swarm chose Denzel Washington. "The experts are weighing previous results heavily, most notably the Golden Globes, which Casey Affleck won last month," Rosenberg told TechRepublic about the new predictions. "But the Golden Globes is composed of the Hollywood Foreign Press, a very narrow demographic compared to the Academy." Rosenberg said he thinks the Swarm's pick shows that it's more in line with the Academy.
Image: Unanimous A.I.
Beyond predicting sports games and entertainment, the swarm method has bigger implications. Rosenberg has seen a lot of interest from marketing companies who want to learn how customers would respond to a certain advertisement or product. A new tool offered by Unanimous A.I. called Swarm Insight could help businesses assess how effective their messages are, how they should think about pricing, and when it's worth taking a risk.
See the rest here:
'Swarm AI' predicts winners for the 2017 Academy Awards - TechRepublic
Posted in Ai
Comments Off on ‘Swarm AI’ predicts winners for the 2017 Academy Awards – TechRepublic
Afraid of AI taking your job? Yep, you likely are – Computerworld
Posted: at 6:26 pm
Despite the promise that robots and artificial intelligence actually could help many people do their jobs better, most simply aren't buying it.
And a lot of people are still afraid that emerging technology will steal their jobs.
Only 4% of 2,000 people surveyed said they thought emerging technologies would make their jobs easier, while 48% of those familiar with the idea of disruptive technologies fear it will cause layoffs in their industry and more than 38% said it might cost them their jobs personally. This is according to a new study from SelectHub, a company focused on helping enterprises make technology decisions.
Who's the most anxious about being replaced by a robot or another smart system?
Those working in publishing, retail, and construction, according to SelectHub's study.
The optimists are in real estate, government and technology; these workers tend to think emerging technologies actually will increase the number of jobs or help them do their jobs.
Nearly half of workers in the publishing, retail and construction fields are concerned about losing their jobs because of the impact of artificial intelligence.
SelectHub's report isn't quite as optimistic, though.
"The least concerned respondents worked in real estate, where less than 22% were concerned about layoffs," the report noted. "While real estate may seem like an industry that requires a human touch, certain research suggests artificial intelligence . . . could eventually even replace traditional real estate agents and brokers."
The report also noted that artificial intelligence already can automate the house hunting process. Consumers can enter specific parameters -- among them budget, location and style of house -- into a system and receive hundreds of recommended listings.
It's not surprising that people are worried.
Last September, Forrester Research released a report contending that in just five years, smart systems and robots could replace up to 6% of jobs in the United States.
Then last month, a Japanese insurance company put a face on that prediction when it replaced 34 of its workers with an A.I. system.
However, not every view of the future of work and smart machines is dire.
Some scientists, like Tom Dietterich, a professor and director of intelligent systems at Oregon State University, say smart systems should start to act as increasingly powerful digital assistants that will be used to help people train and do their jobs.
Working with machines, humans could become super human.
For instance, at Stitch Fix, a San Francisco-based online subscription and shopping service, professional stylists, with the help of an A.I. and a team of data scientists, pick out clothes for their customers.
Zeus Kerravala, an analyst with ZK Research, said he's not surprised that despite instances like Stitch Fix, people are still worried that emerging technologies, like A.I. and robotics, will take their jobs.
"This is really fearing the unknown," he said. "I suspect people said the same thing during the industrial revolution when assembly processes were being automated... I think, right now people are terrified. It's a scary thing thinking about a robot coming and doing your job."
Patrick Moorhead, an analyst with Moor Insights & Strategy, said we're entering a time of dramatic change and people would be smart to consider how their industries will be affected and if they should start to prepare now.
"I absolutely believe there will be new jobs created by robotics and automation," he said. "We will need more people to architect, design, develop, program, market, sell and build robots."
Kerravala said now is a good time for people to consider adding skills in one of these up-and-coming fields.
"People need to focus on retraining," he said. "As technology continues to evolve, change will happen faster and we all need to be in a mode of constantly retraining."
Read more from the original source:
Afraid of AI taking your job? Yep, you likely are - Computerworld
Posted in Ai
Comments Off on Afraid of AI taking your job? Yep, you likely are – Computerworld
Why So Many Companies Are Using AI To Search Google – Tech.Co
Posted: at 6:26 pm
Artificial intelligence (A.I.) is here to stay. The genie is out of the bottle, so to speak, and that is mostly a good thing. Bill Gates has even called it the holy grail of technological advancement.
But while headlines focus on the science fiction aspects of what A.I. could do if it ever went rogue and rave about its high-profile applications, the technology is quietly changing much of the world's economic landscape without any notice. And I'm not referring to sleek consumer-facing apps that do cool tricks like write your emails or remind you about birthdays.
A.I. has become an incredibly viable technology in a range of industries performing functions formerly done by highly specialized and well-educated people. The biggest competitive advantage of A.I.? Well, it could be that it will read beyond the first page of Google search results.
The problem with the internet today is that it is too big, which became a very real state of affairs last year when ICANN announced it had run out of unique IP addresses under its existing protocol. Businesses that use Google to find vital information about markets and business dealings face a near impossible task of weeding through billions of websites and web pages that contain similar but ultimately useless information.
But a properly configured A.I. program can use Google to do that research and provide only the most valuable information to decision makers. "Companies spend huge sums of money on research," says Jeff Curie, president of artificial intelligence company Bitvore. "But despite hiring the very best and brightest, those experts are limited to using Google and setting up news alerts to stay informed. The internet is just too big for a person with a search engine to find the most important information."
Human nature being what it is, most of us do not have the discipline to search for the proverbial needle in the haystack. Research has suggested that 95 percent of Google users never look beyond the first page of results, and even on subsequent pages the top link is the most clicked on by a wide margin, meaning that attention span wanes even as we scroll down the page.
The fundamental advantages of A.I. are its ability to assess huge volumes of information almost instantly and its inability to get lazy or tired. Those are also the largest challenges that human researchers face. As a result, A.I. is increasingly being leveraged to perform tasks like research and it is getting more sophisticated all the time.
A.I. doing research may sound ridiculous, but the process is quite logical. All that it needs to do is search for keywords and phrases, flag them based on relevance, and deliver a curated set of data to a human expert for a final review. Many companies employ hundreds of people to compile that information on a daily basis. A.I. may lack the human judgment ability required to make decisions about that data, but it can most certainly corral it.
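As a rough, hedged illustration of that pipeline (this is a toy Python sketch, not Bitvore's system; the keyword scoring, the sample documents, and the top_n cutoff are invented for the example), a curation step might score documents by keyword relevance and pass only the strongest matches to a human analyst for final review:

def relevance(doc, keywords):
    # Count keyword occurrences as a crude relevance score.
    text = doc.lower()
    return sum(text.count(k.lower()) for k in keywords)

def curate(docs, keywords, top_n=3):
    # Rank every document by relevance, keep the top few that matched at all,
    # and hand that short list to a human expert for the final review step.
    ranked = sorted(docs, key=lambda d: relevance(d, keywords), reverse=True)
    return [d for d in ranked[:top_n] if relevance(d, keywords) > 0]

docs = [
    "Acme Corp announces bankruptcy restructuring after a weak quarter",
    "Local bakery wins annual pie contest",
    "Acme Corp names new CEO amid restructuring talks",
]
print(curate(docs, ["Acme", "restructuring", "bankruptcy"]))

A production system would replace the crude occurrence count with learned relevance models, but the flag-then-review shape is the same.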
This seemingly simple application of A.I. may actually have enormous effects on the global economy, far larger than the newest virtual office assistant.
Companies that rely on having the most relevant and up-to-date information as their strategic advantage benefit greatly from having that information before their competitors. If a researcher takes two hours to find a news alert, that is two hours that competitors may have had to leverage that information to their advantage. A.I. can work constantly, 24 hours every day. That means it is capable of alerting decision makers about events taking place the moment they happen, not two hours later.
"In industries where knowledge is power, the new standard is A.I.," says Curie. "An A.I. program can outperform the best researchers in the world, and it is already doing that today for many of the world's largest companies."
Research may not be the most visible application of A.I., but the most disruptive applications of this technology will likely be behind the scenes, not unveiled at major trade shows. The economic effects will be enormous and largely invisible.
The rest is here:
Why So Many Companies Are Using AI To Search Google - Tech.Co
Posted in Ai
Comments Off on Why So Many Companies Are Using AI To Search Google – Tech.Co
Mapping the Future of AI – Project Syndicate
Posted: February 23, 2017 at 1:15 pm
BRIGHTON – Artificial intelligence already plays a major role in human economies and societies, and it will play an even bigger role in the coming years. To ponder the future of AI is thus to acknowledge that the future is AI.
This will be partly owing to advances in deep learning, which uses multilayer neural networks that were first theorized in the 1980s. With today's greater computing power and storage, deep learning is now a practical possibility, and a deep-learning application gained worldwide attention in 2016 by beating the world champion in Go. Commercial enterprises and governments alike hope to adapt the technology to find useful patterns in Big Data of all kinds.
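For readers who want a concrete picture of what "multilayer neural network" means in this context, below is a minimal, self-contained sketch in Python/NumPy: a two-layer network trained by backpropagation on the toy XOR problem. The layer sizes, learning rate, and task are chosen purely for illustration and do not correspond to any system discussed in this article:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    h = sigmoid(X @ W1 + b1)           # hidden layer activations
    p = sigmoid(h @ W2 + b2)           # network output
    grad_p = (p - y) * p * (1 - p)     # gradient of squared error at output
    grad_W2 = h.T @ grad_p
    grad_h = grad_p @ W2.T * h * (1 - h)
    grad_W1 = X.T @ grad_h
    W2 -= lr * grad_W2; b2 -= lr * grad_p.sum(0)
    W1 -= lr * grad_W1; b1 -= lr * grad_h.sum(0)

print(np.round(p, 2))   # should approach [[0], [1], [1], [0]]

Deep learning stacks many more such layers and trains them on far larger data, but the mechanics of forward passes and gradient updates are the same.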
In 2011, IBM's Watson marked another AI watershed, by beating two previous champions in Jeopardy!, a game that combines general knowledge with lateral thinking. And yet another significant development is the emerging Internet of Things, which will continue to grow as more gadgets, home appliances, wearable devices, and publicly-sited sensors become connected and begin to broadcast messages around the clock. Big Brother won't be watching you; but a trillion little brothers might be.
Beyond these innovations, we can expect to see countless more examples of what were once called expert systems: AI applications that aid, or even replace, human professionals in various specialties. Similarly, robots will be able to perform tasks that could not be automated before. Already, robots can carry out virtually every role that humans once filled on a warehouse floor.
Given this trend, it is not surprising that some people foresee a point known as the Singularity, when AI systems will exceed human intelligence, by intelligently improving themselves. At that point, whether it is in 2030 or at the end of this century, the robots will truly have taken over, and AI will consign war, poverty, disease, and even death to the past.
To all of this, I say: Dream on. Artificial general intelligence (AGI) is still a pipe dream. It's simply too difficult to master. And while it may be achieved one of these days, it is certainly not in our foreseeable future.
But there are still major developments on the horizon, many of which will give us hope for the future. For example, AI can make reliable legal advice available to more people, and at a very low cost. And it can help us tackle currently incurable diseases and expand access to credible medical advice, without requiring additional medical specialists.
In other areas, we should be prudently pessimistic (not to say dystopian) about the future. AI has worrying implications for the military, individual privacy, and employment. Automated weapons already exist, and they could eventually be capable of autonomous target selection. As Big Data becomes more accessible to governments and multinational corporations, our personal information is being increasingly compromised. And as AI takes over more routine activities, many professionals will be deskilled and displaced. The nature of work itself will change, and we may need to consider providing a universal income, assuming there is still a sufficient tax base through which to fund it.
A different but equally troubling implication of AI is that it could become a substitute for one-on-one human contact. To take a trivial example, think about the annoyance of trying to reach a real person on the phone, only to be passed along from one automated menu to another. Sometimes, this is vexing simply because you cannot get the answer you need without the intervention of human intelligence. Or, it may be emotionally frustrating, because you are barred from expressing your feelings to a fellow human being, who would understand, and might even share your sentiments.
Other examples are less trivial, and I am particularly worried about computers being used as carers or companions for elderly people. To be sure, AI systems that are linked to the Internet and furnished with personalized apps could inform and entertain a lonely person, as well as monitor their vital signs and alert physicians or family members when necessary. Domestic robots could prove to be very useful for fetching food from the fridge and completing other household tasks. But whether an AI system can provide genuine care or companionship is another matter altogether.
Those who believe that this is possible assume that natural-language processing will be up to the task. But the task would include having emotionally laden conversations about people's personal memories. While an AI system might be able to recognize a limited range of emotions in someone's vocabulary, intonation, pauses, or facial expressions, it will never be able to match an appropriate human response. It might say, "I'm sorry you're sad about that," or, "What a lovely thing to have happened!" But either phrase would be literally meaningless. A demented person could be comforted by such words, but at what cost to their human dignity?
The alternative, of course, is to keep humans in these roles. Rather than replacing humans, robots can be human aids. Today, many human-to-human jobs that involve physical and emotional caretaking are undervalued. Ideally, these jobs will gain more respect and remuneration in the future.
But perhaps that is wishful thinking. Ultimately, the future of AI (our AI future) is bright. But the brighter it becomes, the more shadows it will cast.
Read the original post:
Posted in Ai
Comments Off on Mapping the Future of AI – Project Syndicate
Follow Backchannel: Facebook | Twitter – Backchannel
Posted: at 1:15 pm
The Applied Machine Learning group helps Facebook see, talk, and understand. It may even root out fake news. Joaquin Candela, Director of Engineering for Applied Machine Learning at Facebook.
When asked to head Facebook's Applied Machine Learning group (to supercharge the world's biggest social network with an AI makeover) Joaquin Quiñonero Candela hesitated.
It was not that the Spanish-born scientist, a self-described "machine learning (ML) person," hadn't already witnessed how AI could help Facebook. Since joining the company in 2012, he had overseen a transformation of the company's ad operation, using an ML approach to make sponsored posts more relevant and effective. Significantly, he did this in a way that empowered engineers in his group to use AI even if they weren't trained to do so, making the ad division richer overall in machine learning skills. But he wasn't sure the same magic would take hold in the larger arena of Facebook, where billions of people-to-people connections depend on fuzzier values than the hard data that measures ads. "I wanted to be convinced that there was going to be value in it," he says of the promotion.
Despite his doubts, Candela took the post. And now, after barely two years, his hesitation seems almost absurd.
How absurd? Last month, Candela addressed an audience of engineers at a New York City conference. "I'm going to make a strong statement," he warned them. "Facebook today cannot exist without AI. Every time you use Facebook or Instagram or Messenger, you may not realize it, but your experiences are being powered by AI."
Last November I went to Facebook's mammoth headquarters in Menlo Park to interview Candela and some of his team, so that I could see how AI suddenly became Facebook's oxygen. To date, much of the attention around Facebook's presence in the field has been focused on its world-class Facebook Artificial Intelligence Research group (FAIR), led by renowned neural net expert Yann LeCun. FAIR, along with competitors at Google, Microsoft, Baidu, Amazon, and Apple (now that the secretive company is allowing its scientists to publish), is one of the preferred destinations for coveted grads of elite AI programs. It's one of the top producers of breakthroughs in the brain-inspired digital neural networks behind recent improvements in the way computers see, hear, and even converse. But Candela's Applied Machine Learning group (AML) is charged with integrating the research of FAIR and other outposts into Facebook's actual products, and, perhaps more importantly, empowering all of the company's engineers to integrate machine learning into their work.
Because Facebook can't exist without AI, it needs all its engineers to build with it.
My visit occurs two days after the presidential election and one day after CEO Mark Zuckerberg blithely remarked that it's "crazy" to think that Facebook's circulation of fake news helped elect Donald Trump. The comment would turn out to be the equivalent of driving a fuel tanker into a growing fire of outrage over Facebook's alleged complicity in the orgy of misinformation that plagued its News Feed in the last year. Though much of the controversy is beyond Candela's pay grade, he knows that ultimately Facebook's response to the fake news crisis will rely on machine learning efforts in which his own team will have a part.
But to the relief of the PR person sitting in on our interview, Candela wants to show me something else: a demo that embodies the work of his group. To my surprise, it's something that performs a relatively frivolous trick: It redraws a photo or streams a video in the style of an art masterpiece by a distinctive painter. In fact, it's reminiscent of the kind of digital stunt you'd see on Snapchat, and the idea of transmogrifying photos into Picasso's cubism has already been accomplished.
"The technology behind this is called neural style transfer," he explains. "It's a big neural net that gets trained to repaint an original photograph using a particular style." He pulls out his phone and snaps a photo. A tap and a swipe later, it turns into a recognizable offshoot of Van Gogh's The Starry Night. More impressively, it can render a video in a given style as it streams. But what's really different, he says, is something I can't see: Facebook has built its neural net so it will work on the phone itself.
That isn't novel, either; Apple has previously bragged that it does some neural computation on the iPhone. But the task was much harder for Facebook because, well, it doesn't control the hardware. Candela says his team could execute this trick because the group's work is cumulative: each project makes it easier to build another, and every project is constructed so that future engineers can build similar products with less training required, so stuff like this can be built quickly. "It took eight weeks from us to start working on this to the moment we had a public test, which is pretty crazy," he says.
The other secret in pulling off a task like this, he says, is collaboration, a mainstay of Facebook culture. In this case, easy access to other groups in Facebook (specifically the mobile team intimately familiar with iPhone hardware) led to the jump from rendering images in Facebook's data centers to performing the work on the phone itself. The benefits won't only come from making movies of your friends and relatives looking like the woman in The Scream. It's a step toward making all of Facebook more powerful. In the short term, this allows for quicker responses in interpreting languages and understanding text. Longer term, it could enable real-time analysis of what you see and say. "We're talking about seconds, less than seconds; this has to be real time," he says. "We're the social network. If I'm going to make predictions about people's feedback on a piece of content, [my system] needs to react immediately, right?"
Candela takes another look at the Van Gogh-ified version of the selfie he's just shot, not bothering to mask his pride. "By running complex neural nets on the phone, you're putting AI in the hands of everybody," he says. "That does not happen by chance. It's part of how we've actually democratized AI inside the company."
"It's been a long journey," he adds.
Candela was born in Spain. His family moved to Morocco when he was three, and he attended French language schools there. Though his grades were equally high in science and humanities, he decided to attend college in Madrid, ideally studying the hardest subject he could think of: telecommunications engineering, which not only required a mastery of physical stuff like antennas and amplifiers, but also an understanding of data, which was "really cool." He fell under the spell of a professor who proselytized adaptive systems. Candela built a system that used intelligent filters to improve the signal of roaming phones; he describes it now as a "baby neural net." His fascination with training algorithms, rather than simply churning out code, was further fueled by a semester he spent in Denmark in 2000, where he met Carl Rasmussen, a machine learning professor who had studied with the legendary Geoff Hinton in Toronto, the ultimate cool-kid credential in machine learning. Ready for graduation, Candela was about to enter a leadership program at Procter & Gamble when Rasmussen invited him to study for a PhD. He chose machine learning.
In 2007, he went to work at Microsoft Research's lab in Cambridge, England. Soon after he arrived, he learned about a company-wide competition: Microsoft was about to launch Bing, but needed improvement in a key component of search ads, accurately predicting when a user would click on an ad. The company decided to open an internal competition. The winning team's solution would be tested to see if it was launch-worthy, and the team members would get a free trip to Hawaii. Nineteen teams competed, and Candela's tied for the winner. He got the free trip, but felt cheated when Microsoft stalled on the larger prize: the test that would determine if his work could be shipped.
What happened next shows Candela's resolve. He embarked on a crazy crusade to make the company give him a chance. He gave over 50 internal talks. He built a simulator to show his algorithm's superiority. He stalked the VP who could make the decision, positioning himself next to the guy in buffet lines and synching his bathroom trips to hype his system from an adjoining urinal; he moved into an unused space near the executive, and popped into the man's office unannounced, arguing that a promise was a promise, and his algorithm was better.
Candela's algorithm shipped with Bing in 2009.
In early 2012, Candela visited a friend who worked at Facebook and spent a Friday on its Menlo Park campus. He was blown away to discover that at this company, people didn't have to beg for permission to get their work tested. They just did it. He interviewed at Facebook that next Monday. By the end of the week he had an offer.
Joining Facebook's ad team, Candela's task was to lead a group that would show more relevant ads. Though the system at the time did use machine learning, "the models we were using were not very advanced. They were pretty simple," says Candela.
Another engineer who had joined Facebook at the same time as Candela (they attended the new employee code boot camp together) was Hussein Mehanna, who was similarly surprised at the lack of the company's progress in building AI into its system. "When I was outside of Facebook and saw the quality of the product, I thought all of this was already in shape, but apparently it wasn't," Mehanna says. "Within a couple of weeks I told Joaquin that what's really missing at Facebook is a proper, world-class machine learning platform. We had machines but we didn't have the right software that could help the machines learn as much as possible from the data." (Mehanna, who is now Facebook's director of core machine learning, is also a Microsoft veteran, as are several other engineers interviewed for this story. Coincidence?)
By machine learning platform, Mehanna was referring to the adoption of the paradigm that has taken AI from its barren winter of the last century (when early promises of thinking machines fell flat) to its more recent blossoming after the adoption of models roughly based on the way the brain behaves. In the case of ads, Facebook needs its system to do something that no human is capable of: Make an instant (and accurate!) prediction of how many people will click on a given ad. Candela and his team set out to create a new system based on the procedures of machine learning. And because the team wanted to build the system as a platform, accessible to all the engineers working in the division, they did it in a way where the modeling and training could be generalized and replicable.
One huge factor in building machine learning systems is getting quality data: the more the better. Fortunately, this is one of Facebook's biggest assets: When you have over a billion people interacting with your product every day, you collect a lot of data for your training sets, and you get endless examples of user behavior once you start testing. This allowed the ads team to go from shipping a new model every few weeks to shipping several models every week. And because this was going to be a platform (something that others would use internally to build their own products) Candela made sure to do his work in a way where multiple teams were involved. "It's a neat, three-step process. You focus on performance, then focus on utility, and then build a community," he says.
Candela's ad team has proven how transformative machine learning could be at Facebook. "We became incredibly successful at predicting clicks, likes, conversions, and so on," he says. The idea of extending that approach to the larger service was natural. In fact, FAIR leader LeCun had already been arguing for a companion group devoted to applying AI to products, specifically in a way that would spread the ML methodology more widely within the company. "I really pushed for it to exist, because you need organizations with highly talented engineers who are not directly focused on products, but on basic technology that can be used by a lot of product groups," LeCun says.
Candela became director of the new AML team in October 2015 (for a while, because of his wariness, he kept his post in the ads division and shuttled between the two). He maintains a close relationship with FAIR, which is based in New York City, Paris, and Menlo Park, and where its researchers literally sit next to AML engineers.
The way the collaboration works can be illustrated by a product in progress that provides spoken descriptions of photos people post to Facebook. In the past few years, it has become a fairly standard AI practice to train a system to identify objects in a scene or make a general conclusion, like whether the photo was taken indoors or outdoors. But recently, FAIR's scientists have found ways to train neural nets to outline virtually every interesting object in the image and then figure out from its position and relation to the other objects what the photo is all about, actually analyzing poses to discern that in a given picture people are hugging, or someone is riding a horse. "We showed this to the people at AML," says LeCun, "and they thought about it for a few moments and said, 'You know, there's this situation where that would be really useful.'" What emerged was a prototype for a feature that could let blind or visually impaired people put their fingers over an image and have their phones read them a description of what's happening.
"We talk all the time," says Candela of his sister team. "The bigger context is that to go from science to project, you need the glue, right? We are the glue."
Candela breaks down the applications of AI in four areas: vision, language, speech, and camera effects. All of those, he says, will lead to a content understanding engine. By figuring out how to actually know what content means, Facebook intends to detect subtle intent from comments, extract nuance from the spoken word, identify faces of your friends that fleetingly appear in videos, and interpret your expressions and map them onto avatars in virtual reality sessions.
"We are working on the generalization of AI," says Candela. "With the explosion of content we need to understand and analyze, our ability to generate labels that tell us what things are can't keep up." The solution lies in building generalized systems where work on one project can accrue to the benefit of other teams working on related projects. Says Candela, "If I can build algorithms where I can transfer knowledge from one task to another, that's awesome, right?"
That transfer can make a huge difference in how quickly Facebook ships products. Take Instagram. Since its beginning, the photo service displayed user photos in reverse chronological order. But early in 2016, it decided to use algorithms to rank photos by relevance. The good news was that because AML had already implemented machine learning in products like the News Feed, they didn't have to start from scratch, says Candela. They had one or two ML-savvy engineers contact some of the several dozen teams that are running ranking applications of one kind or another. "Then you can clone that workflow and talk to the person if you have questions." As a result, Instagram was able to implement this epochal shift in only a few months.
The AML team is always on the prowl for use cases where its neural net prowess can be combined with a collection of different teams to produce a unique feature that works at Facebook scale. "We're using machine learning techniques to build our core capabilities and delight our users," says Tommer Leyvand, a lead engineer of AML's perception team. (He came from, wait for it, Microsoft.)
An example is a recent feature called Social Recommendations. About a year ago, an AML engineer and a product manager for Facebook's sharing team were talking about the high engagement that occurs when people ask their friends for recommendations about local restaurants or services. "The issue is, how do you surface that to a user?" says Rita Aquino, a product manager on AML's natural language team. (She used to be a PM at, oh, forget it.) The sharing team had been trying to do that by word matching certain phrases associated with recommendation requests. "That's not necessarily very precise and scalable, when you have a billion posts per day," Aquino says. By training neural nets and then testing the models with live behavior, the team was able to detect very subtle linguistic differences so it could accurately detect when someone was asking where to eat or buy shoes in a given area. That triggers a request that appears on the News Feed of appropriate contacts. The next step, also powered by machine learning, figures out when someone supplies a plausible recommendation, and actually shows the location of the business or restaurant on a map in the user's News Feed.
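To make the contrast with brittle keyword matching concrete, here is a toy stand-in in Python. Facebook's actual model is a neural network trained on vastly more data; the tiny invented corpus, TF-IDF features, and logistic regression below are only meant to show the general shape of a learned classifier for recommendation requests:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Six invented posts; a real system trains on far more labeled examples.
train_posts = [
    "Any recommendations for a good sushi place near downtown?",
    "Where should I buy running shoes around here?",
    "Looking for a reliable plumber, who do you all use?",
    "Had a great run this morning, feeling good!",
    "Happy birthday to my amazing sister!",
    "Just watched the new movie, loved it.",
]
is_request = [1, 1, 1, 0, 0, 0]   # 1 = asking for a recommendation

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_posts, is_request)

# With a corpus this tiny the probabilities are only illustrative.
for post in ["Anyone know a good dentist in Brooklyn?",
             "What a beautiful sunset tonight."]:
    print(post, model.predict_proba([post])[0][1])

The point of learning the decision from examples, rather than hand-listing trigger phrases, is exactly the precision-and-scale problem Aquino describes.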
Aquino says in the year and a half she has been at Facebook, AI has gone from being a fairly rare component in products to something now baked in from conception. "People expect the product they interact with to be smarter," she says. "Teams see products like social recommendations, see our code, and go, 'How do we do that?' You don't have to be a machine learning expert to try it out for your group's experience." In the case of natural language processing, the team built a system that other teams can easily access, called Deep Text. It helps power the ML technology behind Facebook's translation feature, which is used for over four billion posts a day.
For images and video, the AML team has built a machine learning vision platform called Lumos. It originated with Manohar Paluri, then an intern at FAIR who was working on a grand machine learning vision he calls the "visual cortex of Facebook": a means of processing and understanding all the images and videos posted on Facebook. At a 2014 hackathon, Paluri and colleague Nikhil Johri cooked up a prototype in a day and a half and showed the results to an enthusiastic Zuckerberg and Facebook COO Sheryl Sandberg. When Candela began AML, Paluri joined him to lead the computer vision team and to build out Lumos to help all of Facebook's engineers (including those at Instagram, Messenger, WhatsApp, and Oculus) make use of the visual cortex.
"With Lumos, anybody in the company can use features from these various neural networks and build models for their specific scenario and see how it works," says Paluri, who holds joint positions in AML and FAIR. "And then they can have a human in the loop correct the system, and retrain it, and push it, without anybody in the [AML] team being involved."
Paluri gives me a quick demo. He fires up Lumos on his laptop and we undertake a sample task: refining the neural net's ability to identify helicopters. A page packed with images (if we keep scrolling, there would be 5,000) appears on the screen, full of pictures of helicopters and things that aren't quite helicopters. (One is a toy helicopter; others are objects in the sky at helicopter-ish angles.) For these datasets, Facebook uses publicly posted images from its properties; those limited to friends or other groups are off limits. Even though I'm totally not an engineer, let alone an AI-adept, it's easy to click on negative examples to train an "image classifier" for helicopters, as the jargon would have it.
Eventually, this classifying step, known as supervised learning, may become automated, as the company pursues an ML holy grail known as unsupervised learning, where the neural nets are able to figure out for themselves what stuff is in all those images. Paluri says the company is making progress. "Our goal is to reduce the number of (human) annotations by 100 times in the next year," he says.
In the long term, Facebook sees the visual cortex merging with the natural language platform for the generalized content understanding engine that Candela spoke about. "No doubt we will end up combining them together," says Paluri. "Then we'll just make it cortex."
Ultimately, Facebook hopes that the core principles it uses for its advances will spread even outside the company, through published papers and such, so that its democratizing methodology will spread machine learning more widely. "Instead of spending ages and ages trying to build an intelligent application, you can build applications far faster," says Mehanna. "Imagine the impact of this on medicine, safety, and transportation. I think building applications in those domains is going to be faster by a hundred-x magnitude."
Though AML is deeply involved in the epic process of helping Facebook's products see, interpret, and even speak, CEO Zuckerberg also sees it as critical to his vision of Facebook as a company working for social good. In Zuckerberg's 5,700-word manifesto about building communities, the CEO invoked the words "artificial intelligence" or "AI" seven times, all in the context of how machine learning and other techniques will help keep communities safe and well informed.
Fulfilling those goals won't be easy, for the same reasons that Candela first worried about taking the AML job. Even machine learning can't resolve all those people problems that come when you are trying to be the main source of information and personal connections for a couple billion users. That's why Facebook is constantly fiddling with the algorithms that determine what users see in their News Feeds: how do you train a system to deliver the optimal mix when you're not really sure what that is? "I think this is almost an unsolvable problem," says Candela. "Us showing news stories at random means you're wasting most of your time, right? Us only showing news stories from one friend, winner takes all. You could end up in this round-and-round discussion forever where neither of the two extremes is optimal. We try to bake in some explorations." Facebook will keep trying to solve this with AI, which has become the company's inevitable hammer to drive in every nail. "There's a bunch of action research in machine learning and in AI in optimizing the right level of exploration," Candela says, sounding hopeful.
Naturally, when Facebook found itself named a culprit in the fake news blame-athon, it called on its AI teams to quickly purge journalistic hoaxes from the service. It was an unusual all-hands effort, including even the long-horizon FAIR team, which was "tapped almost as consultants," says LeCun. As it turns out, FAIR's efforts had already unearthed a tool to help with the problem: a model called Word2Vec (vec being shorthand for the technical term, vectors). Word2Vec helps Facebook tag every piece of content with information, like its origin and who has shared it. (Trivia bonus: Google invented the model.) With that information, Facebook can understand the sharing patterns that characterize fake news, and potentially use its machine learning tactics to root out the hoaxes. "It turns out that identifying fake news isn't so different than finding the best pages people want to see," says LeCun.
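For readers unfamiliar with Word2Vec itself, the snippet below shows the basic technique of learning word vectors with the gensim library (version 4.x API). The toy corpus is invented, the similarity numbers it produces are meaningless at this scale, and nothing here reflects how Facebook applies the model internally; it is only meant to show what the named tool does:

from gensim.models import Word2Vec

# Tiny invented corpus; real training uses millions of sentences.
sentences = [
    ["shocking", "news", "story", "shared", "by", "new", "account"],
    ["verified", "outlet", "publishes", "news", "story"],
    ["viral", "post", "shared", "thousands", "of", "times"],
    ["local", "outlet", "publishes", "verified", "report"],
]

# Learn small embeddings; vector_size, window, and epochs are arbitrary here.
model = Word2Vec(sentences, vector_size=16, window=2, min_count=1, epochs=200, seed=1)

print(model.wv["news"].shape)                  # each word becomes a 16-dim vector
print(model.wv.similarity("news", "story"))    # cosine similarity between two words

The useful property is that words (and, by extension, pieces of content described by them) end up as vectors whose geometry reflects usage patterns, which is what makes downstream tagging and pattern-finding possible.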
The preexisting platforms that Candela's team built made it possible for Facebook to launch those vetting products sooner than they could have done otherwise. How well they actually perform remains to be seen; Candela says it's too soon to share metrics on how well the company has managed to reduce fake news by its algorithmic referees. But whether or not those new measures work, the quandary itself raises the question of whether an algorithmic approach to solving problems (even one enhanced by machine learning) might inevitably have unintended and even harmful consequences. Certainly some people contend that this happened in 2016.
Candela rejects that argument. "I think that we've made the world a much better place," he says, and offers to tell a story. The day before our interview, Candela made a call to a Facebook connection he had met only once, a father of one of his friends. He had seen that person posting pro-Trump stories, and was perplexed by their thinking. Then Candela realized that his job is to make decisions based on data, and he was missing important information. So he messaged the person and asked for a conversation. The contact agreed, and they spoke by phone. "It didn't change reality for me, but made me look at things in a very, very different way," says Candela. "In a non-Facebook world I never would have had that connection."
In other words, though AI is essential (even existential) for Facebook, it's not the only answer. "The challenge is that AI is really in its infancy still," says Candela. "We're only getting started."
Creative Art Direction: Redindhi Studio. Photography by: Stephen Lam
Read the original post:
Posted in Ai
Comments Off on Follow Backchannel: Facebook | Twitter – Backchannel
Now Anyone Can Deploy Google’s Troll-Fighting AI – WIRED
Posted: at 1:15 pm
Image: Merjin Hos
Last September, a Google offshoot called Jigsaw declared war on trolls, launching a project to defeat online harassment using machine learning. Now, the team is opening up that troll-fighting system to the world.
On Thursday, Jigsaw and its partners on Google's Counter Abuse Technology Team released a new piece of code called Perspective, an API that gives any developer access to the anti-harassment tools that Jigsaw has worked on for over a year. Part of the team's broader Conversation AI initiative, Perspective uses machine learning to automatically detect insults, harassment, and abusive speech online. Enter a sentence into its interface, and Jigsaw says its AI can immediately spit out an assessment of the phrase's toxicity more accurately than any keyword blacklist, and faster than any human moderator.
The Perspective release brings Conversation AI a step closer to its goal of helping to foster troll-free discussion online, and filtering out the abusive comments that silence vulnerable voices, or, as the project's critics have less generously put it, to sanitize public discussions based on algorithmic decisions.
Conversation AI has always been an open source project. But by opening up that system further with an API, Jigsaw and Google can offer developers the ability to tap into that machine-learning-trained speech toxicity detector running on Google's servers, whether for identifying harassment and abuse on social media or more efficiently filtering invective from the comments on a news website.
"We hope this is a moment where Conversation AI goes from being 'this is interesting' to a place where everyone can start engaging and leveraging these models to improve discussion," says Conversation AI product manager CJ Adams. For anyone trying to rein in the comments on a news site or social media, Adams says, the options have been upvotes, downvotes, turning off comments altogether or manually moderating. This gives them a new option: "Take a bunch of collective intelligence (that will keep getting better over time) about what toxic comments people have said would make them leave, and use that information to help your community's discussions."
On a demonstration website launched today, Conversation AI will now let anyone type a phrase into Perspective's interface to instantaneously see how it rates on the toxicity scale. Google and Jigsaw developed that measurement tool by taking millions of comments from Wikipedia editorial discussions, the New York Times and other unnamed partners (five times as much data, Jigsaw says, as when it debuted Conversation AI in September) and then showing every one of those comments to panels of ten people Jigsaw recruited online to state whether they found the comment toxic.
The resulting judgements gave Jigsaw and Google a massive set of training examples with which to teach their machine learning model, just as human children are largely taught by example what constitutes abusive language or harassment in the offline world. Type "you are not a nice person" into its text field, and Perspective will tell you it has an 8 percent similarity to phrases people consider toxic. Write "you are a nasty woman," by contrast, and Perspective will rate it 92 percent toxic, and "you are a bad hombre" gets a 78 percent rating. If one of its ratings seems wrong, the interface offers an option to report a correction, too, which will eventually be used to retrain the machine learning model.
The Perspective API will allow developers to access that test with automated code, providing answers quickly enough that publishers can integrate it into their website to show toxicity ratings to commenters even as they're typing. And Jigsaw has already partnered with online communities and publishers to implement that toxicity measurement system. Wikipedia used it to perform a study of its editorial discussion pages. The New York Times is planning to use it as a first pass of all its comments, automatically flagging abusive ones for its team of human moderators. And the Guardian and the Economist are now both experimenting with the system to see how they might use it to improve their comment sections, too. "Ultimately we want the AI to surface the toxic stuff to us faster," says Denise Law, the Economist's community editor. "If we can remove that, what we'd have left is all the really nice comments. We'd create a safe space where everyone can have intelligent debates."
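As a rough idea of what calling such an API looks like from the developer's side, here is a hedged Python sketch using the requests library. The endpoint and JSON shape follow Perspective's public documentation at the time of writing, the API key is a placeholder you would need to obtain yourself, and the exact fields may change, so treat it as illustrative rather than definitive:

import requests

PERSPECTIVE_API_KEY = "YOUR_API_KEY"   # placeholder; obtain your own key
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + PERSPECTIVE_API_KEY)

def toxicity(text):
    # Ask Perspective to score a single comment for the TOXICITY attribute.
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body)
    resp.raise_for_status()
    data = resp.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(toxicity("you are not a nice person"))
    print(toxicity("you are a bad hombre"))

A publisher would typically call something like this as comments are typed or submitted, then route high-scoring ones to human moderators rather than deleting them outright.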
Despite that impulse to create an increasingly necessary safe space for online discussions, critics of Conversation AI have argued that it could itself represent a form of censorship, enabling an automated system to delete comments that are either false positives (the insult "nasty woman," for instance, took on a positive connotation for some, after then-candidate Donald Trump used the phrase to describe Hillary Clinton) or in a gray area between freewheeling conversation and abuse. "People need to be able to talk in whatever register they talk," feminist writer Sady Doyle, herself a victim of online harassment, told WIRED last summer when Conversation AI launched. "Imagine what the internet would be like if you couldn't say 'Donald Trump is a moron.'"
Jigsaw has argued that its tool isn't meant to have final say as to whether a comment is published. But short-staffed social media startup or newspaper moderators might still use it that way, says Emma Llansó, director of the Free Expression Project at the nonprofit Center for Democracy and Technology. "An automated detection system can open the door to the delete-it-all option, rather than spending the time and resources to identify false positives," she says.
But Jared Cohen, Jigsaw's founder and president, counters that the alternative for many media sites has been to censor comments with clumsy blacklists of offensive words, or to shut off comments altogether. "The default position right now is actually censorship," says Cohen. "We're hoping publishers will look at this and say, 'we now have a better way to facilitate conversations, and we want you to come back.'"
Jigsaw also suggests that the Perspective API can offer a new tool to not only moderators, but to readers. Their online demo offers a sliding scale that changes which comments about topics like climate change and the 2016 election appear for different tolerances of toxicity, showing how readers themselves could be allowed to filter comments. And Cohen suggests that the tool is still just one step toward better online conversations; he hopes it can eventually be recreated in other languages like Russian, to counter the state-sponsored use of abusive trolling as a censorship tactic. "It's a milestone, not a solution," says Cohen. "We're not claiming to have created a panacea for the toxicity problem."
In an era when online discussion is more partisan and polarized than ever (and the president himself lobs insults from his Twitter feed) Jigsaw argues that a software tool for pruning comments may actually help to bring a more open atmosphere of discussion back to the internet. "We're in a situation where online conversations are becoming so toxic that we end up just talking to people we agree with," says Jigsaw's Adams. "That's made us all the more interested in creating technology to help people continue talking and continue listening to each other, even when they disagree."
From drugs to galaxy hunting, AI is elbowing its way into boffins’ labs – The Register
Posted: at 1:15 pm
Feature Powerful artificially intelligent algorithms and models are all the rage. They're knocking it out of the park in language translation and image recognition, but autonomous cars and chatbots? Not so much.
One area where machine learning could do surprisingly well is scientific research. As AI advances, academics are seizing on its potential: the number of natural-science studies that use machine learning is steadily rising.
Two separate papers that show how neural networks can be trained to pinpoint when the precise shuffle of particles leads to a physical phase transition, something that could help scientists understand phenomena like superconductivity, were published on the same day earlier this month in Nature Physics.
"Science has had an affair with AI for a while," said Marwin Segler, a PhD student studying chemistry under Professor Mark Waller at the University of Münster, Germany. "However, until now, the relationship hasn't been terribly fruitful."
Segler is interested in retrosynthesis, a technique that reveals how a desired molecule can be broken down into simpler chemical building blocks. Chemists can then carry out the necessary reaction steps to craft the required molecule from these building blocks. These molecules can then be used in drugs and other products.
"A good analogy would be something like a cooking recipe," Segler told The Register. "Imagine you're trying to make a complicated cake. Retrosynthesis will show you how to make the cake, and the ingredients you need."
In the 1990s, before the deep learning hype kicked off, expert systems were used to perform retrosynthesis. Rules for reactions had to be manually programmed in: this was tedious work, and it never delivered convincing results.
Now things are starting to look more promising with modern AI techniques. Retrosynthesis has strong analogies to puzzle games, particularly Go. Software can attempt to solve retrosynthesis problems in the same way it solves Go challenges: splintering the problem ahead into component parts and finding the best route to the solution.
All the viable moves in a Go match can be fanned out into a large search tree, and the winning moves are identified using a Monte Carlo Tree Search, an algorithm used by AlphaGo to defeat Lee Sedol, a Korean Go champion.
Just as AlphaGo was trained to triumph at Go, Segler's AlphaChem program is trained to determine the best move to find the puzzle pieces that fit together to build the desired molecule. The code is fed a library containing millions of chemical reactions to obtain the necessary bank of knowledge to ultimately break down molecules into building blocks.
"Chemists rely on their intuition, which they master during long years of work and study, to prioritize which rules to apply when retroanalyzing molecules. Analogous to master move prediction in Go, we showed recently that, instead of hand-coding, neural networks can learn the master chemist moves," the AlphaChem paper [PDF], submitted to the AI conference ICLR 2017 in January, reads.
There are thousands of possible moves per position to play on the Go board, just as there are multiple pathways to consider when trying to break down a molecule into simpler components.
AlphaGo and AlphaChem both cut down on computational costs by pruning the search tree, so there are fewer branches to consider. Only the top 50 most-promising moves are played out, so it doesn't take a fancy supercomputer packing tons of CPU cores and accelerators to perform the retrosynthesis; an Apple MacBook Pro will do.
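For readers who want a feel for how such a search might be wired together, here is a deliberately tiny Python sketch. It is not the AlphaChem code: molecules are plain strings, the "reaction rules" and the scoring "policy" are stubs, and a greedy recursive expansion stands in for the paper's Monte Carlo Tree Search. It only illustrates the core idea of letting a learned score pick the most promising disconnections and pruning the rest.

```python
# Toy neural-guided retrosynthesis search. Everything here is invented for
# illustration: molecules are strings, rules split a target into precursors,
# and policy_scores() is a stub for the trained network's rule ranking.

BUILDING_BLOCKS = {"A", "B", "C"}          # commercially available starting materials

# Each rule maps a molecule to the sets of precursors it could be made from.
REACTION_RULES = {
    "ABC": [["AB", "C"], ["A", "BC"]],
    "AB":  [["A", "B"]],
    "BC":  [["B", "C"]],
}

def policy_scores(molecule, candidate_moves):
    """Stub for the neural network that ranks disconnections.
    Here: prefer moves whose precursors are already building blocks."""
    scores = []
    for precursors in candidate_moves:
        solved = sum(p in BUILDING_BLOCKS for p in precursors)
        scores.append(solved / len(precursors))
    return scores

def retrosynthesize(target, top_k=50, depth=0, max_depth=10):
    """Return a nested plan breaking `target` into building blocks, or None."""
    if target in BUILDING_BLOCKS:
        return target
    if depth >= max_depth or target not in REACTION_RULES:
        return None
    moves = REACTION_RULES[target]
    ranked = sorted(zip(policy_scores(target, moves), moves), reverse=True)
    for _, precursors in ranked[:top_k]:   # prune to the top-k moves only
        plan = [retrosynthesize(p, top_k, depth + 1, max_depth) for p in precursors]
        if all(step is not None for step in plan):
            return {target: plan}
    return None

print(retrosynthesize("ABC"))   # e.g. {'ABC': [{'AB': ['A', 'B']}, 'C']}
```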
During the testing phase, AlphaChem was pitted against two other, more traditional search algorithms to find the best reactions for 40 molecules. Although AlphaChem proved slower than the best-first search algorithm, it was more accurate, solving the problem up to 95 per cent of the time.
Segler hopes AlphaChem will one day be used to find new ways of making drugs more cheaply or to help chemists manufacture new molecules. It is possible the software will, in future revisions, reveal reactions and techniques humans had not considered.
"It's true that using AI is fashionable right now, and interest has been piqued in science because of the hype," he said. "But on the other hand, it's getting used more because it's producing better results."
Investment in AI has led to better algorithms, and a lot of the frameworks, such as TensorFlow, Caffe, and PyTorch, are publicly available, making it easier for non-experts to use them.
"I coded the Monte Carlo Tree Search algorithm myself, but for the neural network stuff I used Keras," Segler told us.
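A policy network of the kind Segler describes could look roughly like the Keras sketch below: a molecular fingerprint goes in, a probability distribution over reaction rules comes out, and only the highest-ranked rules are expanded during the search. The fingerprint length, layer sizes, number of rules and random training data are all invented for illustration; the architecture in the actual paper may differ.

```python
# Hedged sketch of a Keras policy network ranking reaction rules from a
# molecular fingerprint. Sizes and data are placeholders, not the paper's.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout

FINGERPRINT_BITS = 2048   # e.g. an ECFP-style bit vector for the target molecule
NUM_RULES = 5000          # one output class per extracted reaction rule

model = Sequential([
    Dense(512, activation="relu", input_shape=(FINGERPRINT_BITS,)),
    Dropout(0.3),
    Dense(512, activation="relu"),
    Dense(NUM_RULES, activation="softmax"),   # probability over "chemist moves"
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Real training data would pair fingerprints of recorded products with the
# rule actually used; random arrays stand in for real reactions here.
X = np.random.randint(0, 2, size=(64, FINGERPRINT_BITS)).astype("float32")
y = np.random.randint(0, NUM_RULES, size=64)
model.fit(X, y, epochs=1, batch_size=32, verbose=0)

# At search time, only the highest-probability rules would be expanded:
probs = model.predict(X[:1])[0]
top_moves = probs.argsort()[::-1][:50]
```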
Although AI has been used in chemistry for over 40 years, it's more challenging to apply it in chemistry than in other fields, Segler said. "Gathering training data is very expensive in chemistry, because every data point is a laboratory experiment. We cannot simply annotate photos or gather lots of text from the internet, as in computer vision or natural language processing."
For one thing, a lot of medical-related data is kept confidential, and companies don't generally share this information with chemists and biochemists for training such systems.
AI’s inflation paradox – FT Alphaville (registration)
Posted: at 1:15 pm
Many of the jobs that AI will destroy, such as credit scoring, language translation, or managing a stock portfolio, are regarded as skilled, have limited human competition and are well-paid. Conversely, many of the jobs that AI cannot (yet) destroy ...
Microsoft’s new AI sucks at coding as much as the typical Stack Overflow user – TNW
Posted: at 1:15 pm
Microsoft has made some impressive leaps forward in the world of artificial intelligence (AI), but this might be its biggest yet. Microsoft Research, in conjunction with Cambridge University, has developed an AI that's able to solve programming problems by reusing lines of code cribbed from other programs.
The dream of one day creating an artificial intelligence with the ability to write computer programs has long been a goal of computer scientists. And now, we're one step closer to its actualization.
The AI, which is called DeepCoder, takes an input and an expected output and then fills in the gaps, using pre-created code that it believes will create the desired output. This approach is called program synthesis.
In short, this is the digital equivalent of searching for your problem on Stack Overflow, and then copying-and-pasting some code you think might work.
But obviously, this is a lot more sophisticated than that. DeepCoder, as pointed out by the New Scientist, is vastly more efficient than a human. It's able to scour and combine code with the speed of a computer, and is able to use machine learning in order to sort the fragments by their probable usefulness.
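Stripped to its essentials, the idea can be shown in a few lines of Python: given an input and the expected output, enumerate short pipelines of pre-built code fragments, trying the fragments a model predicts to be most useful first, until one reproduces the output. The tiny DSL and the "predicted usefulness" scores below are invented stand-ins; DeepCoder's real search and learned model are far more sophisticated.

```python
# Toy illustration of DeepCoder-style program synthesis: search over
# compositions of small, pre-built code fragments until one maps the example
# input to the expected output. DSL and scores are invented for illustration.
from itertools import product

PRIMITIVES = {
    "sort":     sorted,
    "reverse":  lambda xs: list(reversed(xs)),
    "drop_neg": lambda xs: [x for x in xs if x >= 0],
    "double":   lambda xs: [2 * x for x in xs],
    "head3":    lambda xs: xs[:3],
}

# Stub for the learned model: higher means "more likely to be useful here".
PREDICTED_USEFULNESS = {"sort": 0.9, "double": 0.8, "drop_neg": 0.6,
                        "reverse": 0.3, "head3": 0.2}

def synthesize(example_input, expected_output, max_length=3):
    """Return the shortest pipeline of primitives matching the example, if any."""
    ranked = sorted(PRIMITIVES, key=PREDICTED_USEFULNESS.get, reverse=True)
    for length in range(1, max_length + 1):
        for pipeline in product(ranked, repeat=length):
            value = example_input
            for name in pipeline:
                value = PRIMITIVES[name](value)
            if value == expected_output:
                return pipeline
    return None

# One valid program: sort, double everything, then drop the (doubled) negative.
print(synthesize([3, -1, 2], [4, 6]))   # -> ('sort', 'double', 'drop_neg')
```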
At the moment, DeepCoder is able to solve problems that take around five lines of code. It's certainly early days, but it's still undeniably promising. Full details about the system, and its strengths and limitations, can be found in the research paper Microsoft published.
And at least DeepCoder won't ask you to plz send teh codes.