
Category Archives: Ai

The First AI Bubble Is Now Here: Talking Speakers – Wall Street Journal (subscription)

Posted: July 27, 2017 at 10:27 am


Wall Street Journal (subscription)
The First AI Bubble Is Now Here: Talking Speakers
Smart speakers powered by artificial intelligence are one of the hottest consumer products in the U.S. Amazon.com Inc., Alphabet Inc.'s Google and Apple Inc. are all offering competing products. If they think that's a crowded field, they should come to ...

Go here to see the original:

The First AI Bubble Is Now Here: Talking Speakers - Wall Street Journal (subscription)

Posted in Ai | Comments Off on The First AI Bubble Is Now Here: Talking Speakers – Wall Street Journal (subscription)

Google launches its own AI Studio to foster machine intelligence … – TechCrunch

Posted: at 10:27 am

A new week brings a fresh Google initiative targeting AI startups. We started the month with the announcement of Gradient Ventures, Google's on-balance-sheet AI investment vehicle. Two days later we watched the finalists of Google Cloud's machine learning competition pitch to a panel of top AI investors. And today, Google's Launchpad is announcing a new hands-on Studio program to feed hungry AI startups the resources they need to get off the ground and scale.

The thesis is simple: not all startups are created the same. AI startups love data and struggle to get enough of it. They often have to go to market in phases, iterating as new data becomes available. And they typically have highly technical teams and a dearth of product talent. You get the picture.

The Launchpad Studio aims to address these needs head-on with specialized data sets, simulation tools and prototyping assistance. Another selling point of the Launchpad Studio is that startups accepted will have access to Google talent, including engineers, IP experts and product specialists.

"Launchpad, to date, operates in 40 countries around the world," explains Roy Geva Glasberg, Google's Global Lead for Accelerator efforts. "We have worked with over 10,000 startups and trained over 2,000 mentors globally."

This core mentor base will serve as a recruiting pool for mentors that will assist the Studio. Barak Hachamov, board member for Launchpad, has been traveling around the world with Glasberg to identify new mentors for the program.

The idea of a startup studio isn't new. It has been attempted a handful of times in recent years, but seems to have finally caught on with Andy Rubin's Playground Global. Playground offers startups extensive services and access to top talent to dial in products and compete with the largest of tech companies.

On the AI Studio front, Yoshua Bengio's Element AI raised a $102 million Series A to create a similar program. Bengio, one of the most famous AI researchers (if not the most famous), can help attract top machine learning talent to enable recruiting parity with top AI groups like Google's DeepMind and Facebook's FAIR. Launchpad Studio won't have Bengio, but it will bring Peter Norvig, Dan Ariely, Yossi Matias and Chris DiBona to the table.

But unlike Playground's $300 million accompanying venture capital arm and Element's own coffers, Launchpad Studio doesn't actually have any capital to deploy. On one hand, capital completes the package. On the other, I've never heard a good AI startup complain about not being able to raise funding.

Launchpad Studio sits on top of the Google Developer Launchpad network. The group has been operating an accelerator at global scale for some time. Now on its fourth class of startups, the team has had time to flesh out its vision and build relationships with experts within Google to ease startup woes.

"Launchpad has positioned itself as the Google global program for startups," asserts Glasberg. "It is the most scalable tool Google has today to reach, empower, train and support startups globally."

With all the resources in the world, Google's biggest challenge with its Studio won't be vision or execution, but this doesn't guarantee everything will be smooth sailing. Between GV, CapitalG, Gradient Ventures, GCP and Studio, entrepreneurs are going to have a lot of potential touch-points with the company.

On paper, Launchpad Studio is the Switzerland of Google's programs. It doesn't aim to make money or strengthen Google Cloud's positioning. But from the perspective of founders, there's bound to be some confusion. In an ideal world we will see a meeting of the minds between Launchpad's Glasberg, Gradient's Anna Patterson and GCP's Sam O'Keefe.

The Launchpad Studio will be based in San Francisco, with additional operations in Tel Aviv and New York City. Eventually Toronto, London, Bangalore and Singapore will host events locally for AI founders.

Applications to the Studio are now open; if you're interested you can apply here. The program itself is stage-agnostic, so there are no restrictions on size. Ideally, early and later-stage startups can learn from each other as they scale machine learning models to larger audiences.

See the original post:

Google launches its own AI Studio to foster machine intelligence ... - TechCrunch

Posted in Ai | Comments Off on Google launches its own AI Studio to foster machine intelligence … – TechCrunch

You need to assemble a crack AI team: Where do you even start? – The Register

Posted: at 10:27 am

AI is finding its way into everyday business and government. The idea of AI is not new, but what is different is that today's hardware and software are bringing the various concepts underpinning AI to a mass market.

What's new, too, is the driver: from bots and digital assistants to autonomous vehicles, Google, Microsoft, Facebook, Nvidia and others in Silicon Valley are setting a drumbeat to which the rest of us are marching.

Such is the drumbeat that IDC last year reckoned the AI market would be worth $47bn by 2020, up from $8bn in 2016, with banking, retail, healthcare and discrete manufacturing adopting it fastest. Nearly half that spend will go on software.

As business leaders ponder the impact on business models and what capabilities could perform better as a result of an injection of AI, their IT managers are finding themselves with a fresh set of concerns: how to assemble a team that can deliver the types of AI, be they bots or some kind of neural network, that management wants.

It's falling to IT types to identify the skills, and the people to deliver them, to turn their organisation's AI vision into a reality.

Where do you start and who can you get? It's tricky, when you consider there are more openings in AI than people to fill them. A Paysa study this year reckoned there were 10,000 open AI positions at the world's top 20 employers.

Forty per cent are open at companies with more than 10,000 staff, with 10 per cent at those whose employees number 1,001 and upwards.

Roles in demand include number-crunchers - the modern-day equivalent of data analysts; modellers who enjoy analyzing complex data sets; those specialized in deep learning, dealing with enormous amounts of data and trying to pull out results, insights and possibilities; and, naturally, engineers to hack the thing together.

When it comes to language, perhaps this is the easiest box to tick.

Increasingly, R and Python tend to be the most commonly used programming languages in this area. Looking at a more hardware-optimized path, going down to the GPU? Then skills in C/C++ could be the ticket.

But data is where things get tricky, which is a challenge as AI is predicated on ML and ML eats data.

Software and service provider Amdocs reckons one answer is to turn to those on your team who already have experience with data and offer some re-training.

"This is about retraining BI and data analysts but getting down to the nitty-gritty of developing algorithms, and that might sit outside of their comfort zone," says Doran Youngerwood, Amdocs head of digital and intelligence. "Organisations that have access to fresh data in real time will be most successful. Before you talk intelligence, you need to focus on accessing data and finding complete data sets."

"The make-up of teams really depends on the outcomes you are looking to achieve," warns Callum Adamson, founder of API specialist Distributed, which manages distributed AI teams on behalf of clients.

"You need to mobilise around the jobs to be done and bring in the roles that allow you to do those jobs. You need to look at the outcomes and break it down, remembering that the best AI is narrow and deep."

"Although very hard to find, it is best if the ML experts are also expert coders. Otherwise you may have contention between the algorithm folks and those who have to code it up," warns Hal Lonas, CTO at cybersecurity software company Webroot.

James Waterhouse is head of insight & data science at Sky Betting and Gaming. His team of three data scientists, a test engineer and an intern is tasked with modelling data to better understand churn and cross-sell opportunities.

"I don't think there's a perfect data scientist that bridges the skills you need to make things work at scale in real time on a massive platform all while understanding the business. Don't try to find a data scientist unicorn," he warns. "I'd find three people and get them working together in a way that their skills rub off on each other," Waterhouse told The Register.

That need for collaboration is reiterated by James Poyser, co-founder and managing director of online accountancy software company inniAccounts, which last year won a Queen's Award for Innovation for its application of Microsoft AzureML to routine compliance tasks related to tax.

"Our approach is a 'game of inches': it's the sum of a lot of microservices that make a big difference to the user experience," he explains.

As such, Poyser says the combination of technology and collaboration is vital to getting AI projects off the ground successfully. "If you approach AI as a technical function alone you won't succeed and you'll alienate people and customers," Poyser warns.

The people involved have to evangelise its benefits within the company and educate colleagues so that everyone, no matter their role, can spot an opportunity to apply AI to improve how they work and how customers are served.

"AI is an unknown unknown. We don't know what it can do, and we probably don't know where it can be applied. So there is a chain of people and skills that are necessary to getting AI working within a company," Poyser adds. "But unless the skills work together you can't create a product that solves a person's problem accurately 90 per cent of the time."

Online retailer Ocado built and installed a system using Google's TensorFlow to do the AI heavy lifting on inbound customer emails at its call centre. The system, built using Python, C++ and Kubernetes and running on Google Compute, opens and scans up to 2,000 emails on an ordinary day for key words and context, before prioritising and forwarding them. Email numbers double at busy times such as Christmas.
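The article doesn't go into Ocado's implementation beyond naming the stack, but the general pattern, a text classifier that scores inbound messages so urgent ones jump the queue, is easy to sketch. The following minimal example uses TensorFlow/Keras; the sample emails, labels and 0.5 threshold are invented for illustration and are not Ocado's code or data.

# A minimal, hypothetical sketch (not Ocado's code): score inbound emails so
# urgent ones can be prioritised. Sample emails and labels are invented.
import tensorflow as tf

emails = [
    "Where is my order, it has not arrived and I need it today",
    "Please update the delivery address on my next order",
    "My delivery is missing items and I was charged twice",
    "Can you add oat milk to my favourites list",
]
labels = [1.0, 0.0, 1.0, 0.0]  # 1 = urgent, 0 = routine (illustrative)

# Turn raw text into a bag-of-words vector the model can learn from.
vectorizer = tf.keras.layers.TextVectorization(output_mode="multi_hot", max_tokens=1000)
vectorizer.adapt(emails)
X = vectorizer(tf.constant(emails))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability the email is urgent
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, tf.constant(labels), epochs=30, verbose=0)

# Score a new email; high scores would be routed to an agent ahead of the queue.
score = float(model.predict(vectorizer(tf.constant(["My order never arrived"])))[0][0])
print("urgent" if score > 0.5 else "routine", score)

A production system of the kind described would train on far more messages and richer labels, but the shape of the pipeline, vectorise the text, train a classifier, score and route new mail, stays the same.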

Ocado spent almost a year building up its Poland-based data science team from scratch. Tim Bickley, team leader in the Ocado Technology data science team, says while a large proportion boast a mathematical background, the flavour of qualifications is less important than strong maths skills, a proven track record of independent research and problem solving, and solid programming skills.

"We find the team benefits from having some people who are particularly strong in one area or another, but doesn't work so well if someone is outright weak in one," Bickley said.

AI is not a new field, but demand means skills are in short supply and there's a bidding war under way.

In the US, San Francisco, at the top of Silicon Valley, is the city where employers are trawling hardest for AI-related skills. The shortage and the competition are pushing up salaries to an average of $157,335, according to Paysa.

Webroot's Lonas says: "Much of the demand for these skills is coming from very high compensation companies and organisations, so it's hard for small companies and startups to compete. My advice is to find one or two experienced experts, use them as the core of the team and then work with local educational institutions to find and fund programs."

"Think about using internships, special projects, and growing a farm team. Think about hackathons and other non-traditional ways to find talent. Once you get critical mass, it's easier because others will join knowing they can learn from your resident experts and add valuable experience to their resume and careers," Lonas adds.

Sky Betting and Gaming has forged strong relationships with Leeds and Lancaster universities, offering students work placements. Waterhouse says this is helping to remove some of the risk from the AI recruitment process. "It's useful in getting people in. You can see what they're good at and it gives them an opportunity to get up to speed with our business."

Academia is a good place to start the hunt for ML experts, particularly those with a scientific and engineering background, but don't rule out the self-taught. Contributions to ML-related open source projects or published research can be good indicators of technical ability. But prepare to invest in some upskilling, regardless of their background.

Wael Elrifai, senior director of Enterprise Solutions at Pentaho and the company's AI and machine learning expert, is currently building a team of more than 20 engineers and data scientists. Having recognised that PhDs or Master's degrees in machine learning are virtually non-existent, Pentaho has turned to training company Pivigo, which specialises in turning PhDs and MScs into data scientists and bridging the skills from traditional STEM degree areas to data science, machine learning and AI.

Students have the opportunity during their training to apply what they learn by working on real projects. "I recruited my last data scientist through a similar organisation and she is doing really valuable work for the team. She has a PhD in computational fluid dynamics, which has nothing to do with data science. After a four-month conversion course, she now has strong practical knowledge of how to solve data science problems," Elrifai says.

Bearing in mind how quickly the ML and AI fields are evolving, a proven ability - and a desire - to quickly learn new technologies is almost more important than pre-existing experience for members of your team. "PhDs are desirable but neither necessary nor sufficient; we've had great people without them and the occasional interviewee with them that made us wonder if they found it in a cereal packet," Bickley says.

So technology is the key, right? Not quite, and here's where things get tricky. If it were simply a matter of finding qualified or aspiring data scientists and associated experts, the task of building an AI team would be, if not completely simple, then at least relatively clear. Paysa found that while 35 per cent of the open AI positions in the US required a PhD-level qualification, 26 per cent needed just a master's degree and 18 per cent a bachelor's degree.

But what's akin to gold dust in this hunt is finding people who possess a deep understanding of the wider business. "It can be easy to get stuck in research-mode for a long time, and forget about the value of your work to the business. You always need to make conscious decisions based on the data but also the cost/value analysis," says Ocado software development manager Roland Plaszowski.

A killer combination is app developers who understand how AI/ML can give their product the edge, but who also have the ability to effectively collaborate with product managers who are closer to the customer.

"That will allow them to apply some intelligence to the usage data, learn about people's habits and use that insight to develop a product that offers a smoother experience," says inniAccounts' Poyser.

So the team is assembled, but it's not set in stone and the team's composition will evolve.

During the early stages of your project, it's likely data work will dominate, as ML engineers and data scientists operate across the full stack of analysis. Data scraping, cleaning and management can often consume a huge amount of effort.

As the project matures, so the team will grow and more specialized roles emerge, for example, with the addition of data engineers who manage big data infrastructure such as Spark.

You'll also find that the team follows a pattern familiar in traditional IT, particularly software development and DevOps.

"The differences between generic projects and ML/AI projects are not so big. We work hard to make sure that tests, continuous integration, monitoring, automation and documentation are in the project from the beginning, just like any other software engineering project, Plaszowski said.

We'll be covering machine learning, AI and analytics, and ethics, at MCubed London in October. Full details, including early bird tickets, right here.


Originally posted here:

You need to assemble a crack AI team: Where do you even start? - The Register

Posted in Ai | Comments Off on You need to assemble a crack AI team: Where do you even start? – The Register

Mozilla is crowdsourcing voice recognition to make AI work for the people – The Verge

Posted: at 10:27 am

Data is critical to building great AI, so much so that researchers in the field compare it to coal during the Industrial Revolution. Those that have it will steam ahead. Those that don't will be left in the dust. In the current AI boom, it's obvious who has it: tech giants like Google, Facebook, and Baidu.

That's worrying news. After all, many of these companies have near monopolies in areas like search and social media. Their position helps them gather data, which helps them build better AI, which helps them stay ahead of rivals. For the firms themselves, it's a virtuous cycle, but without viable competition, companies can and do abuse their dominance.

Now a new project from the Mozilla Foundation (the nonprofit creator of the Firefox browser) is experimenting with an alternative to data monopolies, by asking users to pool information in order to power open-source AI initiatives. The foundation's first project is called Common Voice, which asks volunteers to donate vocal samples to build an open-source voice recognition system like the ones powering Siri and Alexa.


"Currently, the power to control speech recognition could end up in just a few hands, and we didn't want to see that," Sean White, vice president of emerging technology at Mozilla, tells The Verge. He says that to get data, the big companies can just filter everything coming in, but for other players, there need to be other methods. "The interesting question for us is, can we do it so the people who are creating the data also benefit?" he asks.

At the moment, Mozilla is just collecting data, but it plans to have its open-source voice recognition available by the end of the year. (Will it go in the Firefox browser? White won't say, but adds: "We have some experiments planned [for that].") Currently, anyone can go to the Common Voice website and donate their voice by reading out sample sentences. They can also supply biographical information like age, location, gender, and accent. This information will help Mozilla avoid bias in creating its voice recognition systems, says White, and ensure that the technology can handle accents, something Google and Apple still struggle with.

Frederike Kaltheuner, a researcher at Privacy International, says these firms often use AI as a pretext for scooping up valuable personal data, telling users it will enable them to improve certain services. This may be true, she says, but the consequences of sharing this data for society at large are less clear. "There is [often] a fundamental conflict of interest between what you need as a citizen, and what is in that company's interest," says Kaltheuner.

What can open-source data offer that companies can't?

So how does an initiative like Common Voice lure users away from existing and admittedly convenient services? After all, open-source projects have been around for longer than the internet, but with a few exceptions, they have been unable to compete with commercial products. They simply don't offer a comparable service.

For Mozilla, the answer is personalization. After all, while AI systems trained on population-sized datasets tend to be good enough for the average individual, they often fail when it comes to serving the needs of smaller groups, or those not represented in their data. (More often than not, the data is just biased toward white males, the industry default.)

"For us to be successful with data commons, there has to be a motivation [for users] other than realizing one day that they've been giving away all their personal data," says White. "We have to make their experience better because they've participated." In the case of Common Voice, White wants as much accent data as possible to improve voice recognition for these individuals. "We want the system to work better for you because some of your data is included," he says.

Offering personalization in exchange for data is a neat proposition, but it's not a silver bullet for those fighting data monopolies. For a start, big firms could make similar offers of their own to users. (Alexa doesn't understand you? Read this 10-minute script and we'll improve its voice recognition.) Or they could spend money to plug the gaps in their own datasets. Google, for example, gets third-party companies to pay Redditors with accents to record their own voice samples.

White acknowledges that the Common Voice project doesn't have an answer to a lot of these questions, but says Mozilla is still dedicated to the core cause of open data. "It feels like a true democratizing activity," he says. And there are plenty of organizations that share this ethos. There's machine learning community Kaggle, which has a large store of user-contributed datasets for AI scientists to play with; the Elon Musk-funded OpenAI, which open-sources all its work; and Healthcare.ai, which publishes free-to-use medical algorithms. And some of these manage to both share open-source data and research while selling their own commercial products, like self-driving car startup Comma.AI.

Although the AI systems we interact with on a daily basis are built on proprietary data, theres a whole world of researchers and institutions publishing useful, if rudimentary, open-source alternatives.

To take these projects to the next level, though, proponents of open-source data may have to enlist higher powers to take on the tech giants. Chris Nicholson, CEO of deep learning company Skymind, says, "We may need third parties to step in (NGOs, governments, coalitions of smaller private firms) and pool their data." Nicholson suggests that sharing health care data can improve medical imaging technology, and driver data can make autonomous cars more natural and intuitive on the road. Sharing these types of datasets, he says, has obvious public benefits.

Donating your voice, then, may just be the beginning.

Go here to read the rest:

Mozilla is crowdsourcing voice recognition to make AI work for the people - The Verge

Posted in Ai | Comments Off on Mozilla is crowdsourcing voice recognition to make AI work for the people – The Verge

Mark Zuckerberg Argues Against Elon Musk’s View of Artificial Intelligence Again – Fortune

Posted: at 10:27 am

When it comes to artificial intelligence, Mark Zuckerberg is more of a glass-half-full guy whereas Elon Musk sees the glass as half empty.

Zuckerberg, Facebook's CEO, wrote a post Tuesday evening in which he shared his optimism over the rise of AI technologies like deep learning and how they could lead to breakthroughs in areas like healthcare and self-driving cars.

Normally, this wouldn't be noteworthy, considering it's pretty obvious Zuckerberg views the rise of AI through rose-tinted glasses. The CEO has made AI a big priority for his company by hiring one of the pioneers of deep learning, Yann LeCun, as its AI research chief. Zuckerberg also created a special Facebook unit whose mission is to incorporate cutting-edge AI research into its products, and his company regularly releases research papers that highlight progress Facebook is making in AI.

Get Data Sheet, Fortune's technology newsletter.

Given that the Facebook (FB) CEO is clearly a believer in AI, why is he going further out of his way to express enthusiasm over the technology, when his company's actions speak loudly enough?

Left unsaid by Zuckerberg were recent comments made by Elon Musk on Tuesday in which the Tesla (TSLA) and SpaceX CEO publicly called out Zuckerberg over what Musk believes is the Facebook CEO's limited understanding of AI. Zuckerberg's Tuesday comments also included a reference to a new Facebook AI paper that won an award at a "top computer vision conference," as if to point out to Musk that he has more than a "limited" understanding of the tech.

Musk's comments came following a recent live Facebook broadcast in which Zuckerberg criticized people who believe that AI will cause doomsday scenarios.

"I think people who are naysayers and try to drum up these doomsday scenarios I just, I don't understand it, Zuckerberg said at the time. It's really negative and in some ways I actually think it is pretty irresponsible."

Zuckerberg's comments didn't specifically single out Musk, who recently made headlines when he told members of the National Governors Association that AI is the greatest risk we face as a civilization. Musk even recounted to the attendees a hypothetical situation similar to one he shared in a documentary by filmmaker Werner Herzog, in which he said AI could potentially lead to wars if used unethically.

"If you were a hedge fund or private equity fund and you said, 'Well, all I want my AI to do is maximize the value of my portfolio,'" Musk said in the documentary, "then the AI could decide, the best way to do that is to short consumer stocks, go long defense stocks, and start a war."

But Zuckerberg doesn't dwell on the bad like Musk does, and by focusing on AI's negative effects, the Facebook CEO believes Musk is doing a disservice in conjuring doom-and-gloom images in people's minds.

Many other AI experts share Zuckerberg's beliefs, as a recent Wired story on Musk's comments indicates. "Many of us have tried to educate him and others like him about real vs. imaginary dangers of AI, but apparently none of it has made a dent," Pedro Domingos, a University of Washington machine-learning professor, told Wired.

Although Zuckerberg and Musk will likely continue trading barbs over their views on AI, the one thing they can both agree on is that the technology has become fundamental to their respective businesses.

Tesla's self-driving cars, for example, won't be able to improve in their capabilities without continued advances in machine learning. Meanwhile, Facebook's various recommendation services are also incorporating AI to better predict what people want to read and watch. Whether it's good or bad that tech giants like Facebook, Google, and even Tesla are hiring some of the best AI talent and hoarding people's data to improve their services depends on how you view that glass of water.

See original here:

Mark Zuckerberg Argues Against Elon Musk's View of Artificial Intelligence Again - Fortune

Posted in Ai | Comments Off on Mark Zuckerberg Argues Against Elon Musk’s View of Artificial Intelligence Again – Fortune

Elon Musk and Mark Zuckerberg Spar Over How Dangerous AI Really Is – Big Think

Posted: at 10:27 am

One way to develop a reputation as a visionary is to come up with a well-known, startlingly prescient prediction that proves true. Another way is to gain immense wealth and fame through the development of a breakthrough product (say, PayPal), or two (maybe Tesla), or three (SpaceX), and then use your well-funded megaphone to cast prognostications so far and wide and so often that the world comes to simply accept you as someone who sees the future. Even better if you can start a public debate with other famous visionaries, say Facebook's Mark Zuckerberg, Bill Gates, and Stephen Hawking. This is what Elon Musk has just done at the U.S. National Governors Association meeting in July 2017.


Musk's comments about artificial intelligence (AI) were startling and alarming, beginning with his assertion that robots will do everything better than us. "I have exposure to the most cutting-edge A.I.," Musk said, "and I think people should be really concerned by it."

His vision of the potential conflict is outright frightening: "I keep sounding the alarm bell but until people see robots going down the street killing people, they don't know how to react because it seems so ethereal."

Musk's pitch to the governors was partly about robots stealing jobs from humans, a concern we've covered on Big Think, and partly a Skynet scenario, with an emphasis on humanity's weak odds of prevailing in the battle on the horizon. His point? "A.I. is a rare case where I think we need to be proactive in regulation [rather] than be reactive."

It was this dire tone that caused Facebook's Mark Zuckerberg to take issue with Musk's position when asked about it in a Facebook Live chat. "I think people who are naysayers and try to drum up these doomsday scenarios, I don't understand it," said Zuckerberg. "It's really negative, and in some ways I think it's pretty irresponsible."


As CEO of Facebook, Zuckerberg is as cranium-deep into AI as Musk, but has a totally different take on it. "I'm really optimistic. Technology can always be used for good and bad, and you need to be careful about how you build it, and what you build, and how it's going to be used. But people are arguing for slowing down the process of building AI. I just find that really questionable. I have a hard time wrapping my head around that."

Musk tweeted his response.

Oh, snap.

He's not the only one discussing this on Twitter. AI experts chimed in to denounce Musk's fear-mongering as not being a constructive contribution to a calm, reasoned discussion of AI's promises and potential hazards.

Pedro Domingos, of the University of Washington, put it most succinctly.

And let's not forget about the imperfect humans who create AI in the first place.

It's not as if Musk is the only one concerned about the long-term dangers of AI; it's more about his extreme way of talking about it. As Maureen Dowd noted in her March 2017 Vanity Fair piece, "Some in Silicon Valley argue that Musk is interested less in saving the world than in buffing his brand, and that he is exploiting a deeply rooted conflict: the one between man and machine, and our fear that the creation will turn against us."

Be that as it may, some are not as sanguine as Zuckerberg about what awaits us down the road with AI.

Stephen Hawking, for one, has warned us to tread carefully before we bestow intelligence on machines, saying, "It would take off on its own, and re-design itself at an ever increasing rate." "Humans, who are limited by slow biological evolution," Hawking said, "couldn't compete, and would be superseded." He's also warned, "A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble."

We do already know that AI has an odd, non-human way of thinking that even its programmers have a hard time understanding. Will machines surprise us, even horrify us, with decisions no human would ever make?

Bill Gates has also expressed concerns: "I am in the camp that is concerned about super intelligence," Gates wrote during a Reddit Ask Me Anything session. "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."


As to how the governors group took Musk's warning, there's some evidence to suggest his sheer star power may have overwhelmed some politicians. Colorado Governor John Hickenlooper, for example, told NPR, "You could have heard a pin drop. A couple of times he paused and it was totally silent. I felt like, I think a lot of us felt like, we were in the presence of Alexander Graham Bell or Thomas Alva Edison ... because he looks at things in such a different perspective."


More:

Elon Musk and Mark Zuckerberg Spar Over How Dangerous AI Really Is - Big Think

Posted in Ai | Comments Off on Elon Musk and Mark Zuckerberg Spar Over How Dangerous AI Really Is – Big Think

AI winter – Wikipedia

Posted: July 26, 2017 at 4:18 pm

In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research.[1] The term was coined by analogy to the idea of a nuclear winter. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or decades later.

The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association of Artificial Intelligence"). It is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research. At the meeting, Roger Schank and Marvin Minsky, two leading AI researchers who had survived the "winter" of the 1970s, warned the business community that enthusiasm for AI had spiraled out of control in the '80s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.

Hype is common in many emerging technologies, as with the railway mania or the dot-com bubble. The AI winter is primarily a collapse in the perception of AI by government bureaucrats and venture capitalists. Despite the rise and fall of AI's reputation, it has continued to develop new and successful technologies. AI researcher Rodney Brooks would complain in 2002 that "there's this stupid myth out there that AI has failed, but AI is around you every second of the day." In 2005, Ray Kurzweil agreed: "Many observers still think that the AI winter was the end of the story and that nothing since has come of the AI field. Yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry."

Enthusiasm and optimism about AI has gradually increased since its low point in 1990, and by the 2010s artificial intelligence (and especially the sub-field of machine learning) became widely used and well-funded, and many in the technology industry predict that it will soon succeed in creating machines with artificial general intelligence. As Ray Kurzweil writes: "the AI winter is long since over."

There were two major winters, in 1974-80 and 1987-93,[6] and several smaller episodes, described in the sections below.

During the Cold War, the US government was particularly interested in the automatic, instant translation of Russian documents and scientific reports. The government aggressively supported efforts at machine translation starting in 1954. At the outset, the researchers were optimistic. Noam Chomsky's new work in grammar was streamlining the translation process and there were "many predictions of imminent 'breakthroughs'".[7]

However, researchers had underestimated the profound difficulty of word-sense disambiguation. In order to translate a sentence, a machine needed to have some idea what the sentence was about, otherwise it made mistakes. An anecdotal example was "the spirit is willing but the flesh is weak." Translated back and forth with Russian, it became "the vodka is good but the meat is rotten." Similarly, "out of sight, out of mind" became "blind idiot". Later researchers would call this the commonsense knowledge problem.

By 1964, the National Research Council had become concerned about the lack of progress and formed the Automatic Language Processing Advisory Committee (ALPAC) to look into the problem. They concluded, in a famous 1966 report, that machine translation was more expensive, less accurate and slower than human translation. After spending some 20 million dollars, the NRC ended all support. Careers were destroyed and research ended.[7]

Machine translation is still an open research problem in the 21st century, which has been met with some success (Google Translate, Yahoo Babel Fish).

Some of the earliest work in AI used networks or circuits of connected units to simulate intelligent behavior. Examples of this kind of work, called "connectionism", include Walter Pitts and Warren McCulloch's first description of a neural network for logic and Marvin Minsky's work on the SNARC system. In the late '50s, most of these approaches were abandoned when researchers began to explore symbolic reasoning as the essence of intelligence, following the success of programs like the Logic Theorist and the General Problem Solver.[9]

However, one type of connectionist work continued: the study of perceptrons, invented by Frank Rosenblatt, who kept the field alive with his salesmanship and the sheer force of his personality.[10] He optimistically predicted that the perceptron "may eventually be able to learn, make decisions, and translate languages".[11] Mainstream research into perceptrons came to an abrupt end in 1969, when Marvin Minsky and Seymour Papert published the book Perceptrons, which was perceived as outlining the limits of what perceptrons could do.

Connectionist approaches were abandoned for the next decade or so. While important work, such as Paul Werbos' discovery of backpropagation, continued in a limited way, major funding for connectionist projects was difficult to find in the 1970s and early '80s.[12] The "winter" of connectionist research came to an end in the middle '80s, when the work of John Hopfield, David Rumelhart and others revived large scale interest in neural networks.[13] Rosenblatt did not live to see this, however, as he died in a boating accident shortly after Perceptrons was published.[11]

In 1973, professor Sir James Lighthill was asked by the UK Parliament to evaluate the state of AI research in the United Kingdom. His report, now called the Lighthill report, criticized the utter failure of AI to achieve its "grandiose objectives." He concluded that nothing being done in AI couldn't be done in other sciences. He specifically mentioned the problem of "combinatorial explosion" or "intractability", which implied that many of AI's most successful algorithms would grind to a halt on real world problems and were only suitable for solving "toy" versions.[14]

The report was contested in a debate broadcast in the BBC "Controversy" series in 1973. The debate "The general purpose robot is a mirage" from the Royal Institution was Lighthill versus the team of Donald Michie, John McCarthy and Richard Gregory.[15] McCarthy later wrote that "the combinatorial explosion problem has been recognized in AI from the beginning."[16]

The report led to the complete dismantling of AI research in England.[14] AI research continued in only a few top universities (Edinburgh, Essex and Sussex). This "created a bow-wave effect that led to funding cuts across Europe," writes James Hendler.[17] Research would not revive on a large scale until 1983, when Alvey (a research project of the British Government) began to fund AI again from a war chest of £350 million in response to the Japanese Fifth Generation Project (see below). Alvey had a number of UK-only requirements which did not sit well internationally, especially with US partners, and lost Phase 2 funding.

During the 1960s, the Defense Advanced Research Projects Agency (then known as "ARPA", now known as "DARPA") provided millions of dollars for AI research with almost no strings attached. DARPA's director in those years, J. C. R. Licklider, believed in "funding people, not projects"[18] and allowed AI's leaders (such as Marvin Minsky, John McCarthy, Herbert A. Simon or Allen Newell) to spend it almost any way they liked.

This attitude changed after the passage of the Mansfield Amendment in 1969, which required DARPA to fund "mission-oriented direct research, rather than basic undirected research."[19] Pure undirected research of the kind that had gone on in the '60s would no longer be funded by DARPA. Researchers now had to show that their work would soon produce some useful military technology. AI research proposals were held to a very high standard. The situation was not helped when the Lighthill report and DARPA's own study (the American Study Group) suggested that most AI research was unlikely to produce anything truly useful in the foreseeable future. DARPA's money was directed at specific projects with identifiable goals, such as autonomous tanks and battle management systems. By 1974, funding for AI projects was hard to find.[19]

AI researcher Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues: "Many researchers were caught up in a web of increasing exaggeration. Their initial promises to DARPA had been much too optimistic. Of course, what they delivered stopped considerably short of that. But they felt they couldn't in their next proposal promise less than in the first one, so they promised more."[20] The result, Moravec claims, is that some of the staff at DARPA had lost patience with AI research. "It was literally phrased at DARPA that 'some of these people were going to be taught a lesson [by] having their two-million-dollar-a-year contracts cut to almost nothing!'" Moravec told Daniel Crevier.[21]

While the autonomous tank project was a failure, the battle management system (the Dynamic Analysis and Replanning Tool) proved to be enormously successful, saving billions in the first Gulf War, repaying all of DARPA's investment in AI[22] and justifying DARPA's pragmatic policy.[23]

DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at Carnegie Mellon University. DARPA had hoped for, and felt it had been promised, a system that could respond to voice commands from a pilot. The SUR team had developed a system which could recognize spoken English, but only if the words were spoken in a particular order. DARPA felt it had been duped and, in 1974, they cancelled a three million dollar a year grant.[24]

Many years later, successful commercial speech recognition systems would use the technology developed by the Carnegie Mellon team (such as hidden Markov models) and the market for speech recognition systems would reach $4 billion by 2001.[25]

In the 1980s, a form of AI program called an "expert system" was adopted by corporations around the world. The first commercial expert system was XCON, developed at Carnegie Mellon for Digital Equipment Corporation, and it was an enormous success: it was estimated to have saved the company 40 million dollars over just six years of operation. Corporations around the world began to develop and deploy expert systems and by 1985 they were spending over a billion dollars on AI, most of it going to in-house AI departments. An industry grew up to support them, including software companies like Teknowledge and Intellicorp (KEE), and hardware companies like Symbolics and Lisp Machines Inc., which built specialized computers, called Lisp machines, that were optimized to process the programming language Lisp, the preferred language for AI.[26]

In 1987, three years after Minsky and Schank's prediction, the market for specialized AI hardware collapsed. Workstations by companies like Sun Microsystems offered a powerful alternative to LISP machines and companies like Lucid offered a LISP environment for this new class of workstations. The performance of these general workstations became an increasingly difficult challenge for LISP Machines. Companies like Lucid and Franz Lisp offered increasingly more powerful versions of LISP. For example, benchmarks were published showing workstations maintaining a performance advantage over LISP machines.[27] Later desktop computers built by Apple and IBM would also offer a simpler and more popular architecture to run LISP applications on. By 1987 they had become more powerful than the more expensive Lisp machines. The desktop computers had rule-based engines such as CLIPS available.[28] These alternatives left consumers with no reason to buy an expensive machine specialized for running LISP. An entire industry worth half a billion dollars was replaced in a single year.[29]

Commercially, many Lisp companies failed, like Symbolics, Lisp Machines Inc., Lucid Inc., etc. Other companies, like Texas Instruments and Xerox abandoned the field. However, a number of customer companies (that is, companies using systems written in Lisp and developed on Lisp machine platforms) continued to maintain systems. In some cases, this maintenance involved the assumption of the resulting support work.

By the early 90s, the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier in research in nonmonotonic logic. Expert systems proved useful, but only in a few special contexts.[1][30] Another problem dealt with the computational hardness of truth maintenance efforts for general knowledge. KEE used an assumption-based approach (see NASA, TEXSYS) supporting multiple-world scenarios that was difficult to understand and apply.

The few remaining expert system shell companies were eventually forced to downsize and search for new markets and software paradigms, like case based reasoning or universal database access. The maturation of Common Lisp saved many systems such as ICAD which found application in knowledge-based engineering. Other systems, such as Intellicorp's KEE, moved from Lisp to a C++ (variant) on the PC and helped establish object-oriented technology (including providing major support for the development of UML).

In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings. By 1991, the impressive list of goals penned in 1981 had not been met. Indeed, some of them had not been met in 2001, or 2011. As with other AI projects, expectations had run much higher than what was actually possible.[31]

In 1983, in response to the fifth generation project, DARPA again began to fund AI research through the Strategic Computing Initiative. As originally proposed the project would begin with practical, achievable goals, which even included artificial general intelligence as a long-term objective. The program was under the direction of the Information Processing Technology Office (IPTO) and was also directed at supercomputing and microelectronics. By 1985 it had spent $100 million and 92 projects were underway at 60 institutions, half in industry, half in universities and government labs. AI research was generously funded by the SCI.[32]

Jack Schwarz, who ascended to the leadership of IPTO in 1987, dismissed expert systems as "clever programming" and cut funding to AI "deeply and brutally," "eviscerating" SCI. Schwarz felt that DARPA should focus its funding only on those technologies which showed the most promise, in his words, DARPA should "surf", rather than "dog paddle", and he felt strongly AI was not "the next wave". Insiders in the program cited problems in communication, organization and integration. A few projects survived the funding cuts, including pilot's assistant and an autonomous land vehicle (which were never delivered) and the DART battle management system, which (as noted above) was successful.[33]

A survey of reports from the mid-2000s suggests that AI's reputation was still less than stellar.

Many researchers in AI in the mid 2000s deliberately called their work by other names, such as informatics, machine learning, analytics, knowledge-based systems, business rules management, cognitive systems, intelligent systems, intelligent agents or computational intelligence, to indicate that their work emphasizes particular tools or is directed at a particular sub-problem. Although this may be partly because they consider their field to be fundamentally different from AI, it is also true that the new names help to procure funding by avoiding the stigma of false promises attached to the name "artificial intelligence."[36]

"Many observers still think that the AI winter was the end of the story and that nothing since come of the AI field," wrote Ray Kurzweil in 2005, "yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry." In the late '90s and early 21st century, AI technology became widely used as elements of larger systems,[37] but the field is rarely credited for these successes. In 2006, Nick Bostrom explained that "a lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."[38] Rodney Brooks stated around the same time that "there's this stupid myth out there that AI has failed, but AI is around you every second of the day."

Technologies developed by AI researchers have achieved commercial success in a number of domains, such as machine translation, data mining, industrial robotics, logistics,[39] speech recognition,[40] banking software,[41] medical diagnosis[41] and Google's search engine.[42]

Fuzzy logic controllers have been developed for automatic gearboxes in automobiles (the 2006 Audi TT, VW Touareg[43] and VW Caravelle feature the DSP transmission which utilizes fuzzy logic, and a number of Škoda variants (Škoda Fabia) also currently include a fuzzy logic-based controller). Camera sensors widely utilize fuzzy logic to enable focus.
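For readers unfamiliar with the technique, the idea behind such controllers can be shown with a toy example: inputs belong to overlapping categories with degrees of membership, rules fire to the degree their conditions hold, and the output blends the active rules. The sketch below is purely illustrative, with invented ranges and rules; it is not any carmaker's or camera maker's actual control logic.

# A toy illustration of fuzzy-logic control (invented rules and ranges,
# not any real gearbox's logic): fuzzify inputs, apply rules, defuzzify.
def membership_low(x, lo, hi):
    """Degree (0..1) to which x counts as 'low' on the range [lo, hi]."""
    return max(0.0, min(1.0, (hi - x) / (hi - lo)))

def membership_high(x, lo, hi):
    """Degree (0..1) to which x counts as 'high' on the range [lo, hi]."""
    return 1.0 - membership_low(x, lo, hi)

def shift_decision(throttle_pct, engine_rpm):
    # Rule 1: IF throttle is high AND revs are high THEN hold the gear (output 0)
    hold = min(membership_high(throttle_pct, 0, 100), membership_high(engine_rpm, 1000, 6000))
    # Rule 2: IF throttle is low AND revs are high THEN upshift (output 1)
    up = min(membership_low(throttle_pct, 0, 100), membership_high(engine_rpm, 1000, 6000))
    # Defuzzify: weighted average of the rule outputs
    total = hold + up
    return 0.0 if total == 0 else (hold * 0.0 + up * 1.0) / total

print(shift_decision(20, 4500))  # light throttle, high revs: leans toward upshifting
print(shift_decision(90, 4500))  # heavy throttle, high revs: leans toward holding the gear

Real controllers use many more membership functions and rules, but the same fuzzify, apply rules, defuzzify cycle applies.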

Heuristic search and data analytics are both technologies that have developed from the evolutionary computing and machine learning subdivision of the AI research community. Again, these techniques have been applied to a wide range of real world problems with considerable commercial success.

In the case of Heuristic Search, ILOG has developed a large number of applications including deriving job shop schedules for many manufacturing installations.[44] Many telecommunications companies also make use of this technology in the management of their workforces, for example BT Group has deployed heuristic search[45] in a scheduling application that provides the work schedules of 20,000 engineers.

Data analytics technology utilizing algorithms for the automated formation of classifiers, developed in the supervised machine learning community in the 1990s (for example, TDIDT, Support Vector Machines, Neural Nets, IBL), is now[when?] used pervasively by companies for marketing survey targeting and discovery of trends and features in data sets.
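As a concrete, hypothetical illustration of that kind of supervised classifier, the sketch below trains a support vector machine on a handful of invented customer records and then scores a new customer for campaign targeting. scikit-learn is used here only for brevity; the text does not name a particular library, and the features and labels are made up.

# Hypothetical sketch of supervised classification for marketing targeting.
# Features: [age, visits_per_month, avg_spend]; label: responded to a campaign.
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = [
    [25, 2, 30.0],
    [47, 8, 120.0],
    [33, 5, 60.0],
    [52, 1, 15.0],
    [29, 9, 95.0],
    [61, 3, 40.0],
    [38, 7, 110.0],
    [45, 4, 55.0],
]
y = [0, 1, 0, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf")  # learn a decision boundary from the labelled examples
clf.fit(X_train, y_train)

print(accuracy_score(y_test, clf.predict(X_test)))  # held-out accuracy on the toy data
print(clf.predict([[40, 6, 100.0]]))                # score a new customer for targeting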

Primarily the way researchers and economists judge the status of an AI winter is by reviewing which AI projects are being funded, how much and by whom. Trends in funding are often set by major funding agencies in the developed world. Currently, DARPA and a civilian funding program called EU-FP7 provide much of the funding for AI research in the US and European Union.

As of 2007, DARPA was soliciting AI research proposals under a number of programs including The Grand Challenge Program, Cognitive Technology Threat Warning System (CT2WS), "Human Assisted Neural Devices (SN07-43)", "Autonomous Real-Time Ground Ubiquitous Surveillance-Imaging System (ARGUS-IS)" and "Urban Reasoning and Geospatial Exploitation Technology (URGENT)".

Perhaps best known is DARPA's Grand Challenge Program[46] which has developed fully automated road vehicles that can successfully navigate real world terrain[47] in a fully autonomous fashion.

DARPA has also supported programs on the Semantic Web with a great deal of emphasis on intelligent management of content and automated understanding. However James Hendler, the manager of the DARPA program at the time, expressed some disappointment with the government's ability to create rapid change, and moved to working with the World Wide Web Consortium to transition the technologies to the private sector.

The EU-FP7 funding program provides financial support to researchers within the European Union. In 2007/2008, it was funding AI research under the Cognitive Systems: Interaction and Robotics Programme (€193m), the Digital Libraries and Content Programme (€203m) and the FET programme (€185m).[48]

Concerns are sometimes raised that a new AI winter could be triggered by any overly ambitious or unrealistic promise by prominent AI scientists. For example, some researchers feared that the widely publicized promises in the early 1990s that Cog would show the intelligence of a human two-year-old might lead to an AI winter.

James Hendler, in 2008, observed that AI funding in both the EU and the US was being channeled more into applications and cross-breeding with traditional sciences, such as bioinformatics.[28] This shift away from basic research is happening at the same time as there's a drive towards applications of, e.g., the semantic web. Invoking the pipeline argument (see underlying causes), Hendler saw a parallel with the '80s winter and warned of a coming AI winter in the '10s.

There are also constant reports that another AI spring is imminent or has already occurred.

Several explanations have been put forth for the cause of AI winters in general. As AI progressed from government funded applications to commercial ones, new dynamics came into play. While hype is the most commonly cited cause, the explanations are not necessarily mutually exclusive.

The AI winters can[citation needed] be partly understood as a sequence of over-inflated expectations and subsequent crash seen in stock-markets and exemplified[citation needed] by the railway mania and dotcom bubble. In a common pattern in development of new technology (known as hype cycle), an event, typically a technological breakthrough, creates publicity which feeds on itself to create a "peak of inflated expectations" followed by a "trough of disillusionment". Since scientific and technological progress can't keep pace with the publicity-fueled increase in expectations among investors and other stakeholders, a crash must follow. AI technology seems to be no exception to this rule.[citation needed]

Another factor is AI's place in the organisation of universities. Research on AI often takes the form of interdisciplinary research. One example is the Master of Artificial Intelligence[53] program at K.U. Leuven, which involves lecturers from Philosophy to Mechanical Engineering. AI is therefore prone to the same problems other types of interdisciplinary research face. Funding is channeled through the established departments and during budget cuts, there will be a tendency to shield the "core contents" of each department, at the expense of interdisciplinary and less traditional research projects.

Downturns in a country's national economy cause budget cuts in universities. The "core contents" tendency worsens the effect on AI research, and investors in the market are likely to put their money into less risky ventures during a crisis. Together this may amplify an economic downturn into an AI winter. It is worth noting that the Lighthill report came at a time of economic crisis in the UK,[54] when universities had to make cuts and the question was only which programs should go.

Early in computing history, the potential for neural networks was understood, but it could not be realized: fairly simple networks require significant computing capacity even by today's standards.

It is common to see the relationship between basic research and technology as a pipeline. Advances in basic research give birth to advances in applied research, which in turn lead to new commercial applications. From this it is often argued that a lack of basic research will lead to a drop in marketable technology some years down the line. This view was advanced by James Hendler in 2008,[28] claiming that the fall of expert systems in the late '80s was not due to an inherent and unavoidable brittleness of expert systems, but to funding cuts in basic research in the '70s. These expert systems advanced in the '80s through applied research and product development, but by the end of the decade, the pipeline had run dry and expert systems were unable to produce improvements that could have overcome the brittleness and secured further funding.

The fall of the Lisp machine market and the failure of the fifth-generation computers were cases of expensive advanced products being overtaken by simpler and cheaper alternatives. This fits the definition of a low-end disruptive technology, with the Lisp machine makers being marginalized. Expert systems were carried over to the new desktop computers by, for instance, CLIPS, so the fall of the Lisp machine market and the fall of expert systems are, strictly speaking, two separate events. Still, the failure to adapt to such a change in the outside computing milieu is cited as one reason for the 1980s AI winter.[28]

Several philosophers, cognitive scientists and computer scientists have speculated on where AI might have failed and what lies in its future. Hubert Dreyfus highlighted flawed assumptions of AI research in the past and, as early as 1966, correctly predicted that the first wave of AI research would fail to fulfill the very public promises it was making. Other critics like Noam Chomsky have argued that AI is headed in the wrong direction, in part because of its heavy reliance on statistical techniques.[55] Chomsky's comments fit into a larger debate with Peter Norvig, centered around the role of statistical methods in AI. The exchange between the two started with comments made by Chomsky at a symposium at MIT,[56] to which Norvig wrote a response.[57]

Visit link:

AI winter - Wikipedia

Posted in Ai | Comments Off on AI winter – Wikipedia

The data that transformed AI research – and possibly the world – Quartz

Posted: at 4:18 pm

In 2006, Fei-Fei Li started ruminating on an idea.

Li, a newly minted computer science professor at the University of Illinois Urbana-Champaign, saw her colleagues across academia and the AI industry hammering away at the same concept: a better algorithm would make better decisions, regardless of the data.

But she realized a limitation to this approach: the best algorithm wouldn't work well if the data it learned from didn't reflect the real world.

Her solution: build a better dataset.

"We decided we wanted to do something that was completely historically unprecedented," Li said, referring to a small team who would initially work with her. "We're going to map out the entire world of objects."

The resulting dataset was called ImageNet. Originally published in 2009 as a research poster stuck in the corner of a Miami Beach conference center, the dataset quickly evolved into an annual competition to see which algorithms could identify objects in the datasets images with the lowest error rate. Many see it as the catalyst for the AI boom the world is experiencing today.

Alumni of the ImageNet challenge can be found in every corner of the tech world. The contest's first winners in 2010 went on to take senior roles at Baidu, Google, and Huawei. Matthew Zeiler built Clarifai based on his 2013 ImageNet win, and is now backed by $40 million in VC funding. In 2014, Google split the winning title with two researchers from Oxford, who were quickly snapped up and added to its recently acquired DeepMind lab.

Li herself is now chief scientist at Google Cloud, a professor at Stanford, and director of the university's AI lab.

Today, she'll take the stage at CVPR to talk about ImageNet's annual results for the last time: 2017 was the final year of the competition. In just seven years, the winning accuracy in classifying objects in the dataset rose from 71.8% to 97.3%, surpassing human abilities and effectively proving that bigger data leads to better decisions.
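
Figures like these are conventionally reported as top-5 accuracy: a model gets credit if the correct label appears among its five highest-scoring guesses. The sketch below shows one way such a metric can be computed; it is illustrative NumPy code, not the challenge's official evaluation tooling.

```python
# Illustrative NumPy sketch of a top-k accuracy metric (not the challenge's
# official evaluation code): a prediction counts as correct if the true label
# is among the k highest-scoring classes.
import numpy as np

def top_k_accuracy(scores, labels, k=5):
    """scores: (n_images, n_classes) array; labels: (n_images,) true class ids."""
    top_k = np.argsort(scores, axis=1)[:, -k:]        # k best guesses per image
    hits = [label in row for row, label in zip(top_k, labels)]
    return float(np.mean(hits))

# Toy example: 3 "images", 10 classes, random scores.
rng = np.random.default_rng(0)
scores = rng.random((3, 10))
labels = np.array([2, 7, 9])
print(top_k_accuracy(scores, labels, k=5))
```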

Even as the competition ends, its legacy is already taking shape. Since 2009, dozens of new AI research datasets have been introduced in subfields like computer vision, natural language processing, and voice recognition.

"The paradigm shift of the ImageNet thinking is that while a lot of people are paying attention to models, let's pay attention to data," Li said. "Data will redefine how we think about models."

In the late 1980s, Princeton psychologist George Miller started a project called WordNet, with the aim of building a hierarchical structure for the English language. It would be sort of like a dictionary, but words would be shown in relation to other words rather than in alphabetical order. For example, within WordNet, the word "dog" would be nested under "canine," which would be nested under "mammal," and so on. It was a way to organize language that relied on machine-readable logic, and amassed more than 155,000 indexed words.
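
That nesting can still be explored programmatically. The snippet below is a small sketch using NLTK's WordNet interface; NLTK itself and the corpus download are assumptions here, not anything mentioned in the article.

```python
# A small sketch of the nesting described above, using NLTK's WordNet
# interface. NLTK and the corpus download are assumptions; they are not
# mentioned in the article.
import nltk

nltk.download("wordnet", quiet=True)   # fetch the WordNet data once
from nltk.corpus import wordnet as wn

node = wn.synsets("dog")[0]            # first noun sense of "dog"
while node.hypernyms():                # walk up: dog -> canine -> ... -> entity
    print(node.name())
    node = node.hypernyms()[0]
print(node.name())                     # the root of the tree, entity.n.01
```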

Li, in her first teaching job at UIUC, had been grappling with one of the core tensions in machine learning: overfitting and generalization. When an algorithm can only work with data that's close to what it's seen before, the model is considered to be overfitting the data; it can't understand anything more general past those examples. On the other hand, if a model doesn't pick up the right patterns in the data, it's overgeneralizing.

Finding the perfect algorithm seemed distant, Li says. She saw that previous datasets didn't capture how variable the world could be; even just identifying pictures of cats is infinitely complex. But by giving the algorithms more examples of how complex the world could be, it made mathematical sense that they could fare better. If you had only seen five pictures of cats, you'd only have five camera angles, lighting conditions, and maybe varieties of cat. But if you've seen 500 pictures of cats, there are many more examples to draw commonalities from.
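
The "five cats versus 500 cats" intuition is easy to reproduce on any dataset: the same model, given more training examples, usually generalizes better to held-out data. Below is a minimal sketch with scikit-learn's small digits dataset standing in for cat photos; it is illustrative only and not from the article.

```python
# Illustrative only (not from the article): the same model trained on more
# examples usually generalizes better. scikit-learn's small digits dataset
# stands in for the "pictures of cats".
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)

for n in (50, 500):                    # a tiny training set vs. a larger one
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(n, "training examples -> test accuracy:",
          round(model.score(X_test, y_test), 3))
```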

Li started to read about how others had attempted to catalogue a fair representation of the world with data. During that search, she found WordNet.

Having read about WordNet's approach, Li met with professor Christiane Fellbaum, a researcher influential in the continued work on WordNet, during a 2006 visit to Princeton. Fellbaum had the idea that WordNet could have an image associated with each of the words, more as a reference than as a computer vision dataset. Coming from that meeting, Li imagined something grander: a large-scale dataset with many examples of each word.

Months later Li joined the faculty of Princeton, her alma mater, and started on the ImageNet project in early 2007. She began to build a team to help with the challenge, first recruiting a fellow professor, Kai Li, who then convinced Ph.D. student Jia Deng to transfer into Li's lab. Deng has helped run the ImageNet project through 2017.

"It was clear to me that this was something that was very different from what other people were doing, were focused on at the time," Deng said. "I had a clear idea that this would change how the game was played in vision research, but I didn't know how it would change."

The objects in the dataset would range from concrete objects, like pandas or churches, to abstract ideas like love.

Li's first idea was to hire undergraduate students for $10 an hour to manually find images and add them to the dataset. But back-of-the-napkin math quickly made Li realize that at the undergrads' rate of collecting images it would take 90 years to complete.

After the undergrad task force was disbanded, Li and the team went back to the drawing board. What if computer-vision algorithms could pick the photos from the internet, and humans would then just curate the images? But after a few months of tinkering with algorithms, the team came to the conclusion that this technique wasn't sustainable either: future algorithms would be constricted to only judging what algorithms were capable of recognizing at the time the dataset was compiled.

Undergrads were time-consuming, algorithms were flawed, and the team didn't have money. Li said the project failed to win any of the federal grants she applied for, receiving comments on proposals that it was shameful Princeton would research this topic, and that the only strength of the proposal was that Li was a woman.

A solution finally surfaced in a chance hallway conversation with a graduate student who asked Li whether she had heard of Amazon Mechanical Turk, a service where hordes of humans sitting at computers around the world would complete small online tasks for pennies.

"He showed me the website, and I can tell you literally that day I knew the ImageNet project was going to happen," she said. "Suddenly we found a tool that could scale, that we could not possibly dream of by hiring Princeton undergrads."

Mechanical Turk brought its own slew of hurdles, with much of the work fielded by two of Li's Ph.D. students, Jia Deng and Olga Russakovsky. For example, how many Turkers needed to look at each image? Maybe two people could determine that a cat was a cat, but an image of a miniature husky might require 10 rounds of validation. What if some Turkers tried to game or cheat the system? Li's team ended up creating a batch of statistical models for Turkers' behaviors to help ensure the dataset only included correct images.
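
The article does not describe the team's actual statistical models, but the underlying idea of adaptive consensus can be sketched simply: keep asking workers to label an image until one answer has enough support, so easy images need two votes while ambiguous ones get many more. The thresholds below are hypothetical.

```python
# A sketch of adaptive consensus labeling, not the ImageNet team's actual
# algorithm. Keep requesting labels for an image until one answer has enough
# support. All thresholds are hypothetical.
from collections import Counter

def needs_more_labels(votes, min_votes=2, min_agreement=0.8, max_votes=10):
    """Return True if another worker should be asked to label this image."""
    if len(votes) < min_votes:
        return True
    if len(votes) >= max_votes:                      # stop spending on hopeless images
        return False
    label, count = Counter(votes).most_common(1)[0]  # current majority answer
    return count / len(votes) < min_agreement

print(needs_more_labels(["cat"]))                     # True: only one vote so far
print(needs_more_labels(["cat", "cat"]))              # False: 2/2 agree
print(needs_more_labels(["husky", "wolf", "husky"]))  # True: still ambiguous
```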

Even after finding Mechanical Turk, the dataset took two and a half years to complete. It consisted of 3.2 million labelled images, separated into 5,247 categories, sorted into 12 subtrees like mammal, vehicle, and furniture.

In 2009, Li and her team published the ImageNet paper with the dataset, to little fanfare. Li recalls that CVPR, a leading conference in computer vision research, only allowed a poster, instead of an oral presentation, and the team handed out ImageNet-branded pens to drum up interest. People were skeptical of the basic idea that more data would help them develop better algorithms.

"There were comments like, 'If you can't even do one object well, why would you do thousands, or tens of thousands of objects?'" Deng said.

If data is the new oil, it was still dinosaur bones in 2009.

Later in 2009, at a computer vision conference in Kyoto, a researcher named Alex Berg approached Li to suggest adding an additional aspect to the contest: algorithms would also have to locate where the pictured object was, not just recognize that it existed. Li countered: "Come work with me."

Li, Berg, and Deng authored five papers together based on the dataset, exploring how algorithms would interpret such vast amounts of data. The first paper would become a benchmark for how an algorithm would react to thousands of classes of images, the predecessor to the ImageNet competition.

"We realized to democratize this idea we needed to reach out further," Li said, speaking on the first paper.

Li then approached a well-known image recognition competition in Europe called PASCAL VOC, which agreed to collaborate and co-brand their competition with ImageNet. The PASCAL challenge was a well-respected competition and dataset, but representative of the previous method of thinking. The competition only had 20 classes, compared to ImageNet's 1,000.

As the competition continued in 2011 and into 2012, it soon became a benchmark for how well image classification algorithms fared against the most complex visual dataset assembled at the time.

But researchers also began to notice something more going on than just a competition: their algorithms worked better when they trained using the ImageNet dataset.

"The nice surprise was that people who trained their models on ImageNet could use them to jumpstart models for other recognition tasks. You'd start with the ImageNet model and then you'd fine-tune it for another task," said Berg. "That was a breakthrough both for neural nets and just for recognition in general."
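
The workflow Berg describes, starting from an ImageNet-trained model and fine-tuning it for a new task, has since become a standard recipe. The article names no framework; below is a rough sketch of how it commonly looks today with PyTorch and torchvision, not code from the ImageNet team, and the 10-class target task is hypothetical.

```python
# A rough sketch of "start from an ImageNet model, then fine-tune" using
# PyTorch/torchvision. Not code from the ImageNet team; the target task is
# hypothetical.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

for param in model.parameters():       # freeze the ImageNet-pretrained backbone
    param.requires_grad = False

num_target_classes = 10                # hypothetical new task with 10 classes
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# From here, train only model.fc (the new head) on the target task's data.
```

Freezing the backbone is the cheapest variant; with more target data one would typically unfreeze some or all layers and continue training with a small learning rate.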

Two years after the first ImageNet competition, in 2012, something even bigger happened. Indeed, if the artificial intelligence boom we see today could be attributed to a single event, it would be the announcement of the 2012 ImageNet challenge results.

Geoffrey Hinton, Ilya Sutskever, and Alex Krizhevsky from the University of Toronto submitted a deep convolutional neural network architecture called AlexNet (still used in research to this day) that beat the field by a whopping 10.8-percentage-point margin, 41% better than the next best entry.

ImageNet couldn't have come at a better time for Hinton and his two students. Hinton had been working on artificial neural networks since the 1980s, and while some like Yann LeCun had been able to work the technology into ATM check readers through the influence of Bell Labs, Hinton's research hadn't found that kind of home. A few years earlier, research from graphics-card manufacturer Nvidia had made these networks process faster, but still not better than other techniques.

Hinton and his team had demonstrated that their networks could perform smaller tasks on smaller datasets, like handwriting detection, but they needed much more data to be useful in the real world.

"It was so clear that if you do a really good job on ImageNet, you could solve image recognition," said Sutskever.

Today, these convolutional neural networks are everywhere: Facebook, where LeCun is director of AI research, uses them to tag your photos; self-driving cars use them to detect objects; basically anything that knows what's in an image or video uses them. They can tell what's in an image by finding patterns between pixels on ascending levels of abstraction, using thousands to millions of tiny computations on each level. New images are put through the process to match their patterns to learned patterns. Hinton had been pushing his colleagues to take them seriously for decades, but now he had proof that they could beat other state-of-the-art techniques.
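
To make the "ascending levels of abstraction" description concrete, here is a toy stack of convolutional layers in PyTorch: each convolution operates on the feature maps produced by the one before it. This is only an illustration of the structure, not AlexNet or any model mentioned in the article.

```python
# A toy convolutional network, only to illustrate stacked layers of features.
# Far smaller than AlexNet; purely illustrative.
import torch
import torch.nn as nn

tiny_cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # level 1: edges and blobs
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # level 2: simple shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 1000),                # scores for 1,000 classes
)

image = torch.randn(1, 3, 224, 224)               # one fake 224x224 RGB image
print(tiny_cnn(image).shape)                      # torch.Size([1, 1000])
```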

"What's more amazing is that people were able to keep improving it with deep learning," Sutskever said, referring to the method that layers neural networks to allow more complex patterns to be processed, now the most popular flavor of artificial intelligence. "Deep learning is just the right stuff."

The 2012 ImageNet results sent computer vision researchers scrambling to replicate the process. Matthew Zeiler, an NYU Ph.D. student who had studied under Hinton, found out about the ImageNet results and, through the University of Toronto connection, got early access to the paper and code. He started working with Rob Fergus, an NYU professor who had also built a career working on neural networks. The two started to develop their submission for the 2013 challenge, and Zeiler eventually left a Google internship weeks early to focus on the submission.

Zeiler and Fergus won that year, and by 2014 all the high-scoring competitors would be deep neural networks, Li said.

"This Imagenet 2012 event was definitely what triggered the big explosion of AI today," Zeiler wrote in an email to Quartz. "There were definitely some very promising results in speech recognition shortly before this (again many of them sparked by Toronto), but they didn't take off publicly as much as that ImageNet win did in 2012 and the following years."

Today, many consider ImageNet solved: the error rate is incredibly low at around 2%. But that's for classification, or identifying which object is in an image. This doesn't mean an algorithm knows the properties of that object, where it comes from, what it's used for, who made it, or how it interacts with its surroundings. In short, it doesn't actually understand what it's seeing. This is mirrored in speech recognition, and even in much of natural language processing. While our AI today is fantastic at knowing what things are, understanding these objects in the context of the world is next. How AI researchers will get there is still unclear.

While the competition is ending, the ImageNet dataset, updated over the years and now more than 13 million images strong, will live on.

Berg says the team tried to retire one aspect of the challenge in 2014, but faced pushback from companies including Google and Facebook, who liked the centralized benchmark. The industry could point to one number and say, "We're this good."

Since 2010 there have been a number of other high-profile datasets introduced by Google, Microsoft, and the Canadian Institute for Advanced Research, as deep learning has proven to require data as vast as what ImageNet provided.

Datasets have become haute. Startup founders and venture capitalists will write Medium posts shouting out the latest datasets, and how their algorithms fared on ImageNet. Internet companies such as Google, Facebook, and Amazon have started creating their own internal datasets, based on the millions of images, voice clips, and text snippets entered and shared on their platforms every day. Even startups are beginning to assemble their own datasets: TwentyBN, an AI company focused on video understanding, used Amazon Mechanical Turk to collect videos of Turkers performing simple hand gestures and actions on video. The company has released two datasets free for academic use, each with more than 100,000 videos.

"There is a lot of mushrooming and blossoming of all kinds of datasets, from videos to speech to games to everything," Li said.

It's sometimes taken for granted that these datasets, which are intensive to collect, assemble, and vet, are free. Being open and free to use is an original tenet of ImageNet that will outlive the challenge and likely even the dataset.

In 2016, Google released the Open Images database, containing 9 million images in 6,000 categories. Google recently updated the dataset to include labels for where specific objects were located in each image, a staple of the ImageNet challenge after 2014. London-based DeepMind, bought by Google and spun into its own Alphabet company, recently released its own video dataset of humans performing a variety of actions.

"One thing ImageNet changed in the field of AI is suddenly people realized the thankless work of making a dataset was at the core of AI research," Li said. "People really recognize the importance; the dataset is front and center in the research as much as algorithms."

Correction (July 26): An earlier version of this article misspelled the name of Olga Russakovsky.

Visit link:

The data that transformed AI research – and possibly the world - Quartz

Posted in Ai | Comments Off on The data that transformed AI research – and possibly the world – Quartz


AI Grant aims to fund the unfundable to advance AI and solve hard … – TechCrunch

Posted: at 4:18 pm

Artificial intelligence-focused investment funds are a dime a dozen these days. Everyone knows there's money to be made from AI, but to capture value, good VCs know they need to back products and not technologies. This has left a bit of a void in the space: research occurs within research institutions and large tech companies, commercialization occurs within verticalized startups, and there isn't much left for the DIY AI enthusiast. AI Grant, created by Nat Friedman and Daniel Gross, aims to bankroll science projects for the heck of it, to give untraditional candidates a shot at solving big problems.

Gross, a partner at Y Combinator, and Friedman, a founder who grew Xamarin to its acquisition by Microsoft, started working on AI Grant back in April. AI Grant issues no-strings-attached grants to people passionate about interesting AI problems. The more formalized version launching today brings a slate of corporate partners and a more structured application review process.

Anyone, regardless of background, can submit an application for a grant. The application is online and consists of questions about background and prior projects in addition to basic information about what the money will be used for and what the initial steps will be for the project. Applicants are asked to connect their GitHub, LinkedIn, Facebook and Twitter accounts.

Gross told me in an interview that the goal is to build profiles of non-traditional machine learning engineers. Eventually, the data collected from the grant program could allow the two to play a bit of machine learning moneyball, valuing machine learning engineers without traditional metrics (like having a PhD from Stanford). You can imagine how all the social data could even help build a model for ideal grant recipients in the future.
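
The article only speculates about such a model, but the shape of the idea is familiar: represent each applicant as a handful of profile features and fit a simple classifier. Everything below, including the feature names, numbers, and labels, is hypothetical and purely illustrative, not anything the AI Grant program has described.

```python
# Purely illustrative applicant-scoring sketch; all data is hypothetical.
from sklearn.linear_model import LogisticRegression

# One row per past applicant: [public_repos, oss_contributions, shipped_projects]
X_past = [[2, 5, 1], [40, 120, 6], [10, 30, 2], [1, 0, 0]]
y_past = [0, 1, 1, 0]                  # 1 = later produced a strong project

model = LogisticRegression().fit(X_past, y_past)
print(model.predict_proba([[15, 60, 3]])[0][1])   # estimated "promise" score
```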

The long-term goal is to create a decentralized AI research lab: think DeepMind, but run through Slack and full of engineers that don't cost $300,000 a pop. One day, the MacArthur genius grant-inspired program could serve other industries outside of AI, offering a playground of sorts for the obsessed to build, uninhibited.

The entire AI Grant project reminds me of a cross between a Thiel Fellowship and a Kaggle competition. The former is a program to give smart college dropouts money and freedom to tinker; the latter, an innovative platform for evaluating data scientists through competition. Neither strives to advance the field in the way the AI Grant program does, but you can see the ideological similarity around democratizing innovation.

Some of the early proposals to receive the AI Grant include:

Charles River Ventures (CRV) is providing the $2,500 grants that will be handed out to the next 20 fellows. In addition, Google has signed on to provide $20,000 in cloud computing credits to each winner, CrowdFlower is offering $18,000 in platform credit with $5,000 in human labeling credits, Scale is giving $1,000 in human labeling credit per winner and Floyd will give 250 Tesla K80 GPU hours to each winner.

During the first selection of grant winners, Floodgate awarded $5,000 checks. The program launching today will award $2,500 checks. Gross told me that this change was intentional: the initial check size was too big. The plan is to add additional flexibility in the future to allow applicants to make a case for how much money they actually need.

You can check out the application here and give it a go. Applications will be taken until August 25th. Final selection of fellows will occur on September 24th.

Read this article:

AI Grant aims to fund the unfundable to advance AI and solve hard ... - TechCrunch

Posted in Ai | Comments Off on AI Grant aims to fund the unfundable to advance AI and solve hard … – TechCrunch
