
Category Archives: Ai

The future of AI is neuromorphic. Meet the scientists building digital ‘brains’ for your phone – Wired.co.uk

Posted: March 6, 2017 at 3:14 pm


AI services like Apple's Siri and others operate by sending your queries to faraway data centers, which send back responses. The reason they rely on cloud-based computing is that today's electronics don't come with enough computing power to run the processing-heavy algorithms needed for machine learning. The typical CPUs most smartphones use could never handle a system like Siri on the device. But Dr. Chris Eliasmith, a theoretical neuroscientist and co-CEO of Canadian AI startup Applied Brain Research, is confident that a new type of chip is about to change that.

"Many have suggested Moore's law is ending and that means we won't get 'more compute' cheaper using the same methods," Eliasmith says. He's betting on the proliferation of neuromorphics, a type of computer chip that is not yet widely known but is already being developed by several major chip makers.

Traditional CPUs process instructions based on clocked time: information is transmitted at regular intervals, as if managed by a metronome. By packing in digital equivalents of neurons, neuromorphics communicate in parallel (and without the rigidity of clocked time) using spikes, bursts of electric current that can be sent whenever needed. Just like our own brains, the chip's neurons communicate by processing incoming flows of electricity, each neuron able to determine from the incoming spikes whether to send current out to the next neuron.
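
To make the contrast concrete, here is a toy sketch (not a model of any particular neuromorphic chip) of the event-driven behaviour described above: a leaky integrate-and-fire neuron accumulates incoming current and emits a spike only when a threshold is crossed, rather than producing output on every clock tick. All constants are illustrative.

```python
import numpy as np

# Toy leaky integrate-and-fire neuron: the membrane potential integrates
# input current, leaks back toward zero, and fires a spike only when it
# crosses a threshold, giving event-driven rather than clock-driven output.
dt = 0.001        # time step (s)
tau = 0.02        # membrane time constant (s)
v_thresh = 1.0    # firing threshold
v = 0.0           # membrane potential
spike_times = []

rng = np.random.default_rng(0)
input_current = rng.uniform(0.0, 2.5, size=1000)  # 1 s of noisy drive

for step, i_in in enumerate(input_current):
    v += (dt / tau) * (i_in - v)   # leaky integration (Euler step)
    if v >= v_thresh:
        spike_times.append(step * dt)
        v = 0.0                    # reset after the spike

print(f"{len(spike_times)} spikes in 1 s of simulated input")
```

Nothing downstream of such a neuron has to do any work between spikes, which is the source of the power savings described next.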

What makes this a big deal is that these chips require far less power to process AI algorithms. For example, one neuromorphic chip made by IBM contains five times as many transistors as a standard Intel processor, yet consumes only 70 milliwatts of power. An Intel processor would use anywhere from 35 to 140 watts, or up to 2000 times more power.

Eliasmith points out that neuromorphics aren't new and that their designs have been around since the 80s. Back then, however, the designs required specific algorithms be baked directly into the chip. That meant you'd need one chip for detecting motion, and a different one for detecting sound. None of the chips acted as a general processor in the way that our own cortex does.


This was partly because there hadn't been any way for programmers to design algorithms that could do much with a general-purpose chip. So even as these brain-like chips were being developed, building algorithms for them remained a challenge.

Eliasmith and his team are keenly focused on building tools that would allow a community of programmers to deploy AI algorithms on these new cortical chips.

Central to these efforts is Nengo, a compiler that developers can use to build their own algorithms for AI applications that will operate on general-purpose neuromorphic hardware. A compiler is a software tool that translates the code programmers write into the complex instructions that get hardware to actually do something. What makes Nengo useful is its use of the familiar Python programming language, known for its intuitive syntax, and its ability to put the algorithms on many different hardware platforms, including neuromorphic chips. Pretty soon, anyone with an understanding of Python could be building sophisticated neural nets made for neuromorphic hardware.
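
For a sense of what that looks like in practice, below is a minimal sketch of a Nengo model in Python, assuming the open-source Nengo package and its reference CPU simulator; other backends (including ones targeting neuromorphic hardware) are selected by swapping the simulator, and API details may vary across versions.

```python
import numpy as np
import nengo

# A minimal Nengo model: a sinusoidal input, an ensemble of spiking
# neurons representing that signal, and a second ensemble whose
# connection computes the square of the represented value.
with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    a = nengo.Ensemble(n_neurons=100, dimensions=1)
    b = nengo.Ensemble(n_neurons=100, dimensions=1)

    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=lambda x: x ** 2)  # computed via decoders

    probe = nengo.Probe(b, synapse=0.01)   # filtered readout of b

with nengo.Simulator(model) as sim:        # reference CPU backend
    sim.run(1.0)

print(sim.data[probe][-5:])                # last few decoded values
```

The model itself says nothing about the hardware it runs on; that separation between model description and backend is the point of a neural compiler.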

"Things like vision systems, speech systems, motion control, and adaptive robotic controllers have already been built with Nengo," Peter Suma, a trained computer scientist and the other CEO of Applied Brain Research, tells me.

Perhaps the most impressive system built using the compiler is Spaun, a project that in 2012 earned international praise for being the most complex brain model ever simulated on a computer. Spaun demonstrated that computers could be made to interact fluidly with the environment and perform human-like cognitive tasks like recognizing images and controlling a robot arm that writes down what it sees. The machine wasn't perfect, but it was a stunning demonstration that computers could one day blur the line between human and machine cognition. Recently, by using neuromorphics, most of Spaun has been run 9,000x faster, using less energy than it would on conventional CPUs, and by the end of 2017, all of Spaun will be running on neuromorphic hardware.

Eliasmith won NSERC's John C. Polanyi Award for that project, Canada's highest recognition for a breakthrough scientific achievement, and once Suma came across the research, the pair joined forces to commercialize these tools.

"While Spaun shows us a way towards one day building fluidly intelligent reasoning systems, in the nearer term neuromorphics will enable many types of context-aware AIs," says Suma. Suma points out that while today's AIs like Siri remain offline until explicitly called into action, we'll soon have artificial agents that are always on and ever-present in our lives.

"Imagine a SIRI that listens to and sees all of your conversations and interactions. You'll be able to ask it for things like 'Who did I have that conversation with about doing the launch for our new product in Tokyo?' or 'What was that idea for my wife's birthday gift that Melissa suggested?'" he says.

When I raised concerns that some company might then have an uninterrupted window into even the most intimate parts of my life, I'm reminded that because the AI would be processed locally on the device, there's no need for that information to touch a server owned by a big company. And for Eliasmith, this always-on component is a necessary step towards true machine cognition. "The most fundamental difference between most available AI systems of today and the biological intelligent systems we are used to is the fact that the latter always operate in real-time. Bodies and brains are built to work with the physics of the world," he says.

Already, efforts across the IT industry are heating up to get AI services into the hands of users. Companies like Apple, Facebook, Amazon, and even Samsung are developing conversational assistants they hope will one day become digital helpers.

With the rise of neuromorphics, and tools like Nengo, we could soon have AIs capable of exhibiting a stunning level of natural intelligence right on our phones.


Does Trump’s ‘Weaponized AI Propaganda Machine’ Hold Water? – Forbes

Posted: March 5, 2017 at 4:16 pm


As today's fake news crisis grows, AI may be our only hope to cut through the propaganda, but in the wrong hands, AI is also a powerful propaganda tool in its ...



Investors Place Their Bet on AI-Generated Music – TOP500 News

Posted: at 4:16 pm

Amper, a startup that offers an AI platform that composes new music, is garnering the attention of venture capitalists. Last week it attracted $4 million in additional funding, adding to a previous round of seed funding that took place last year.

The new infusion of money was led by Two Sigma Ventures, along with some help from Foundry Group, Kiwi Venture Partners, and Advancit Capital. Last October, Brooklyn Bridge Ventures invested a smaller, undetermined amount, probably something between $250,000 and $500,000.

Amper was created by musical composers Drew Silverstein, Sam Estes, and Michael Hobe, whose livelihoods, at least until recently, relied on creating compositions to sell to other musicians or media artists. But the AI platform they developed writes its own musical creations, and allows anyone to guide the process, whether or not they have a musical background. The results are surprisingly good, and at least to the untrained ear, would be hard to distinguish from a composition by an actual trained musician. The idea is not to create great symphonies, but rather fairly short musical scores that can be incorporated into other media content. An example is provided below.

The idea, according to Amper's founders, is not to replace composers, but to make it easier for artists with limited resources, especially those involved in lower-budget efforts like commercials, online videos, and band startups, to get access to music at cut-rate prices. In general, it's not economically feasible to contract a composer for such projects, so this is opening up a largely untapped market. Currently, Amper is providing free access to the technology, but when the business is up and running, presumably there will be some sort of reasonable user fee to contend with.

Anyone interested in tapping into their inner Beethoven can go to the website and give Amper a spin. You basically guide the musical composition using a variety of parameters: mood and style, instrumentation, tempo, and duration. Then you hit the Render button, and presto, you have a composition. Again, you're not going to win any Grammys with this, but for quick-and-dirty musical scores, it's an impressive product.

Amper's not the only music-generating AI. A short list would include FlowComposer (from Flow Machines) and Jukedeck, and there are research efforts underway at Google, IBM, and elsewhere. One gets the feeling that AI music composition is just getting started.


Quora Question: Which Company is Leading the Field in AI Research? – Newsweek

Posted: March 4, 2017 at 3:16 pm

Quora Questions are part of a partnership between Newsweek and Quora, through which we'll be posting relevant and interesting answers from Quora contributors throughout the week. Read more about the partnership here.

Answer from Eric Jang, Research engineer at Google Brain:

Who is leading in AI research among big players like IBM, Google, Facebook, Apple and Microsoft? First, my response contains some bias, because I work at Google Brain, and I really like it there. My opinions are my own, and I do not speak for the rest of my colleagues or Alphabet as a whole.


I rank the leaders in AI research among IBM, Google, Facebook, Apple, Baidu, and Microsoft as follows:

I would say Deepmind is probably #1 right now, in terms of AI research.

Their publications are highly respected within the research community, and span a myriad of topics such as deep reinforcement learning, Bayesian neural nets, robotics, transfer learning, and others. Being London-based, they recruit heavily from Oxford and Cambridge, which are great ML feeder programs in Europe. They hire an intellectually diverse team to focus on general AI research, including traditional software engineers to build infrastructure and tooling, UX designers to help make research tools, and even ecologists (Drew Purves) to research far-field ideas like the relationship between ecology and intelligence.

They are second to none when it comes to PR and capturing the imagination of the public at large, such as with DQN-Atari and the history-making AlphaGo. Whenever a Deepmind paper drops, it shoots up to the top of Reddit's Machine Learning page and often Hacker News, which is a testament to how well-respected they are within the tech community.

Before you roll your eyes at me putting two Alphabet companies at the top of this list, I discount this statement by also ranking Facebook and OpenAI on equal terms at #2. Scroll down if you don't want to hear me gush about Google Brain.

With all due respect to Yann LeCun (he has a pretty good answer), I think he is mistaken about Google Brain's prominence in the research community.

"But much of it is focused on applications and product development rather than long-term AI research."

This is categorically false, to the max.

The TensorFlow team (responsible for the Brain team's primary product) is just one of many Brain subteams, and is to my knowledge the only one that builds an externally-facing product. When Brain first started, the first research projects were indeed engineering-heavy, but today Brain has many employees who focus on long-term AI research in every AI subfield imaginable, similar to FAIR and Deepmind.

FAIR has 16 accepted publications to the ICLR 2017 conference track (announcement by Yann: Yann LeCun - FAIR has co-authors on 16 papers accepted at...), with 3 selected for orals (i.e. very distinguished publications).

Google Brain actually slightly edged out FB this year at ICLR 2017, with 20 accepted papers and four selected for orals. I'm excited that the Google Brain team will have a decent presence at ICLR 2017.

This doesn't count publications from Deepmind or other teams doing research within Google (Search, VR, Photos). Comparing the number of accepted papers is hardly a good metric, but I want to dispel any insinuations by Yann that Brain is not a legitimate place to do deep learning research.

Google Brain is also the industry research org with the most collaborative flexibility. I don't think any other research institution in the world, industrial or otherwise, has ongoing collaborations with Berkeley, Stanford, CMU, OpenAI, Deepmind, Google X, and a myriad of product teams within Google.

I believe that Brain will be regarded as a top-tier institution in the near future. I had offers from both Brain and Deepmind, and chose the former because I felt that Brain gave me more flexibility to design my own research projects, collaborate more closely with internal Google teams, and join some really interesting robotics initiatives that I can't disclose yet.


FAIR's papers are good, and my impression is that a big focus for them is language-domain problems like question answering, dynamic memory, and Turing-test-type stuff. Occasionally there are some statistical-physics-meets-deep-learning papers. Obviously they do computer vision work as well. I wish I could say more, but I don't know enough about FAIR beyond the fact that their reputation is very good.

They almost lost the deep learning framework wars with the widespread adoption of TensorFlow, but we'll see if PyTorch is able to successfully capture back market share.

One weakness of FAIR, in my opinion, is that it's very difficult to have a research role at FAIR without a PhD. A FAIR recruiter told me this last year. Indeed, PhDs tend to be smarter, but I don't think having a PhD is necessary to bring fresh perspectives and make great contributions to science.

OpenAI has an all-star list of employees: Ilya Sutskever (all-around deep learning master), John Schulman (inventor of TRPO, master of policy gradients), Pieter Abbeel (robot sent from the future to crank out a river of robotics research papers), Andrej Karpathy (Char-RNN, CNNs), Durk Kingma (co-inventor of VAEs), Ian Goodfellow (inventor of GANs), to name a few.

Despite being a small group of around 50 people (so I guess not a Big Player by headcount or financial resources), they also have a top-notch engineering team and publish top-notch, really thoughtful research tools like Gym and Universe. They're adding a lot of value to the broader research community by providing software that was once locked up inside big tech companies. This has added a lot of pressure on other groups to start open-sourcing their code and tools as well.
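
As an aside for readers who haven't used it, the sketch below shows the basic agent-environment loop Gym standardized, using the classic API from around this era; the random CartPole policy is just a placeholder, and later Gym/Gymnasium releases changed the reset() and step() signatures.

```python
import gym

# Minimal agent-environment loop with OpenAI Gym (classic ~2017 API).
env = gym.make("CartPole-v1")

for episode in range(3):
    obs = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()          # random stand-in policy
        obs, reward, done, info = env.step(action)  # advance one timestep
        total_reward += reward
    print(f"episode {episode}: total reward {total_reward}")

env.close()
```

The value of the tool is exactly this uniform interface: any environment, from toy control tasks to full games, is driven by the same handful of calls.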

I almost ranked them as #1, on par with Deepmind in terms of top research talent, but they haven't really been around long enough for me to confidently assert this. They also haven't pulled off an achievement comparable to AlphaGo yet, though I can't overstate how important Gym and Universe are to the research community.

As a small nonprofit research group building all their infrastructure from scratch, they don't have nearly as much in the way of GPU resources, robots, or software infrastructure as big tech companies. Having lots of compute makes a big difference in research ability and even in the ideas one is able to come up with.

Startups are hard, and we'll see whether they are able to continue attracting top talent in the coming years.

Baidu SVAIL and the Baidu Institute of Deep Learning are excellent places to do research, and they are working on a lot of promising technologies like home assistants, aids for the blind, and self-driving cars.

Baidu does have some reputation issues, such as recent scandals with violating ImageNet competition rules, low-quality search results leading to a Chinese student dying of cancer, and being stereotyped by Americans as a somewhat-sketchy Chinese copycat tech company complicit in authoritarian censorship.

They are definitely the strongest player in AI in China though.

Before the deep learning revolution, Microsoft Research used to be the most prestigious place to go. They hire faculty with many years of experience, which might explain why they sort of missed out on deep learning (the revolution in deep learning has largely been driven by PhD students).

Unfortunately, almost all deep learning research is done on Linux platforms these days, and their CNTK deep learning framework hasn't gotten as much attention as TensorFlow, Torch, Chainer, etc.

Apple is really struggling to hire deep learning talent, as researchers tend to want to publish and do research, which goes against Apple's culture as a product company. This typically doesn't attract those who want to solve general AI or have their work published and acknowledged by the research community. I think Apple's design roots have a lot of parallels to research, especially when it comes to audacious creativity, but the constraints of shipping an insanely great product can be a hindrance to long-term basic science.

I know a former IBM employee who worked on Watson and describes IBM's cognitive computing efforts as a total disaster, driven by management that has no idea what ML can or cannot do but sells the buzzword anyway. Watson uses deep learning for image understanding, but as I understand it, the rest of the information retrieval system doesn't really leverage modern advances in deep learning. Basically, there is a huge secondary market for startups to capture applied ML opportunities whenever IBM fumbles and drops the ball.

No offense to IBM researchers; you're far better scientists than I ever will be. My gripe is that the corporate culture at IBM is not conducive to leading AI research.

To be honest, all the above companies (maybe with the exception of IBM) are great places to do deep learning research, and given open-source software and how prolific the entire field is nowadays, I don't think any one tech firm leads AI research by a substantial margin.

There are some places, like Salesforce/MetaMind and Amazon, that I hear are quite good but don't know enough about to rank.

My advice for a prospective deep learning researcher is to find a team/project that you're interested in, ignore what others say regarding reputation, and focus on doing your best work so that your organization becomes regarded as a leader in AI research.

Who is leading in AI research among big players like IBM, Google, Facebook, Apple, and Microsoft? originally appeared on Quora, the place to gain and share knowledge, empowering people to learn from others and better understand the world. You can follow Quora on Twitter, Facebook, and Google+.


AI continued its world domination at Mobile World Congress – Engadget

Posted: at 3:16 pm

When it comes to the intersection of smartphones and AI, Motorola had the most surprising news at the show. In case you missed it, Motorola is working with Amazon (and Harman Kardon, most likely) to build a Moto Mod that will make use of Alexa. Even to me, someone who cooled on the Mods concept after an initial wave of interesting accessories slowed to a trickle, this seems like a slam dunk. Even better, Motorola product chief Dan Dery described what the company ultimately wanted to achieve: a way to get assistants like Alexa to integrate more closely with the personal data we keep on our smartphones.

In his mind, for instance, it would be ideal to ask an AI to make a reservation at a restaurant mentioned in an email a day earlier. With Alexa set to be a core component of many Moto phones going forward, here's hoping Dery and the team find a way to break down the walls between AI assistants and the information that could make them truly useful. Huawei made headlines earlier this year when it committed to putting Alexa on the Mate 9, but we'll soon see if the company's integration will attempt to be as deep.

Speaking of Alexa, it's about to get some new competition in Asia. Line Inc., maker of the insanely popular messaging app of the same name, is building an assistant named Clova for smartphones and connected speakers. It will apparently be able to deal with complex questions in many forms. Development will initially focus on a first-party app, but Clova should find its way into many different ones, giving users opportunities to talk to services that share some underlying tech.

LG got in on the AI assistant craze too, thanks to a close working relationship with Google. The LG V20 was the very first Nougat smartphone to be announced ... until Google stole the spotlight with its own Nougat-powered Pixel line. And the G6 was the first non-Pixel phone to come with Google's Assistant, a distinction that lasted for maybe a half-hour before Google said the assistant would roll out to smartphones running Android 6.0 and up. The utility is undeniable, and so far, Google Assistant on the G6 has been almost as seamless as the experience on a Pixel.

As a result, flagships like Sony's newly announced XZ Premium will likely ship with Assistant up and running as well, giving us Android fans an easier way to get things done via speech. It's worth pointing out that other flagship smartphones that weren't announced at Mobile World Congress either do or will rely on some kind of AI assistant to keep users pleased and productive. HTC's U Ultra has a second screen where suggestions and notifications generated by the HTC Companion will pop up, though the Companion isn't available on versions of the Ultra already floating around. And then there's Samsung's Galaxy S8, which is expected to come with an assistant named Bixby when it's officially unveiled in New York later this month.

While it's easy to think of "artificial intelligence" merely as software entities that can interact with us intelligently, machine-learning algorithms also fall under that umbrella. Their work might be less immediately noticeable at times, but companies are banking on the algorithmic ability to understand data that we can't on a human level and improve functionality as a result.

Take Huawei's P10, for instance. Like the flagship Mate 9 before it, the P10 benefits from a set of algorithms meant to improve performance over time by figuring out the order in which you like to do things and allocating resources accordingly. With its updated EMUI 5.1 software, the P10 is supposed to be better at managing resources like memory when the phone boots and during use -- all based on user habits. The end goal is to make phones that actually get faster over time, though it will take a while to see any real changes. (You also might never see performance improvements, since "performance" is a subjective thing anyway.)

Even Netflix showed up at Mobile World Congress to talk about machine learning. The company is well aware that sustained growth and relevance will come as it improves the mobile-video experience. In the coming months, expect to see better-quality video using less network bandwidth, all thanks to algorithms that try to quantify what it means for a video to "look good." Combine those algorithms with a new encoding scheme that compresses individual scenes in a movie or TV episode differently based on what's happening in them, and you have a highly complex fix your eyes and wallet will thank you for.

And, since MWC is just the right kind of absurd, we got an up-close look at a stunning autonomous race car called (what else?) RoboCar. Nestled within the sci-fi-inspired body are components that would've seemed like science fiction a few decades ago: There's a complex cluster of radar, LIDAR, ultrasonic and speed sensors all feeding information to an NVIDIA brain using algorithms to interpret all that information on the fly.

That these developments spanned the realms of smartphones, media, and cars in a single, formerly focused trade show speaks to how big a deal machine learning and artificial intelligence have become. There's no going back now -- all we can do is watch as companies make better use of the data offered to them, and hold those companies accountable when they inevitably screw up.


AI SciFi Short Rise Is Being Turned Into a Movie – Gizmodo

Posted: at 3:16 pm


Rise, the impressive robot uprising short film starring the late Anton Yelchin, is being adapted into a movie... with the original director on board to helm the production.

The five-minute film pairs an updated version of the special effects from A.I. with the storyline of The Second Renaissance from The Animatrix. It's all about a dystopian future where artificially intelligent robots are hunted and killed after the government determined they were becoming too emotional and, therefore, human. Unfortunately, it's not working, as Yelchin's A.I. helps trigger a war for the future of their species.

David Karlak, who directed the original short, has signed on to direct the feature-length adaptation. It's being produced by Johnny Lin (American Made) and Brian Oliver (Hacksaw Ridge, Black Swan), with original writers Patrick Melton and Marcus Dunstan returning to pen the script. No word on who would replace Yelchin, who sadly passed away last year, but I am hoping Rufus Sewell (The Man in the High Castle) reprises his role as the government interrogator. I'll watch him in anything.

You can watch the original short film below.

[The Hollywood Reporter]


Texas Hold’em AI Bot Taps Deep Learning to Demolish Humans – IEEE Spectrum

Posted: at 3:16 pm

A fresh Texas Hold'em-playing AI terror has emerged barely a month after a supercomputer-powered bot claimed victory over four professional poker players. But instead of relying on a supercomputer's hardware, the DeepStack AI has shown how it too can decisively defeat human poker pros while running on a GPU chip equivalent to those found in gaming laptops.

The success of any poker-playing computer algorithm in heads-up, no-limit Texas Hold'em is no small feat. This version of two-player poker with unrestricted bet sizes has 10^160 possible plays at different stages of the game, more than the number of atoms in the entire universe. But the Canadian and Czech researchers who developed the new DeepStack algorithm leveraged deep learning technology to create the computer equivalent of intuition and reduce the possible future plays that needed to be calculated at any point in the game to just 10^7. That enabled DeepStack's fairly humble computer chip to figure out its best move for each play within five seconds and handily beat poker professionals from all over the world.

"To make this practical, we only look ahead a few moves deep," says Michael Bowling, a computer scientist and head of the Computer Poker Research Group at the University of Alberta in Edmonton, Canada. "Instead of playing from there, we use intuition to decide how to play."

This is a huge deal beyond just bragging rights for an AI's ability to beat the best human poker pros. AI that can handle complex poker games such as heads-up, no-limit Texas Hold'em could also tackle similarly complex real-world situations by making the best decisions in the midst of uncertainty. DeepStack's poker-playing success while running on fairly standard computer hardware could make it much more practical for AI to tackle many other imperfect-information situations involving business negotiations, medical diagnoses and treatments, or even guiding military robots on patrol. Full details of the research are published in the 2 March 2017 online issue of the journal Science.

Imperfect-information games have represented daunting challenges for AI until recently because of the seemingly impossible computing resources required to crunch all the possible decisions. To avoid the computing bottleneck, most poker-playing AIs have used abstraction techniques that combine similar plays and outcomes in an attempt to reduce the number of overall calculations needed. They solved for a simplified version of heads-up, no-limit Texas Hold'em instead of actually running through all the possible plays.

Such an approach has enabled AI to play complex games from a practical computing standpoint, but at the cost of having huge weaknesses in their abstracted strategies that human players can exploit. An analysis showed that four of the top AI competitors in the Annual Computer Poker Competition were beatable by more than 3,000 milli-big-blinds per game, in poker parlance. That performance is four times worse than if the AI simply folded and gave up the pot at the start of every game.

DeepStack takes a very different approach that combines both old and new techniques. The older technique is an algorithm developed by University of Alberta researchers that previously helped come up with a solution for heads-up, limit Texas Hold'em (a simpler version of poker with restricted bet sizes). This counterfactual regret minimization algorithm, called CFR+ by its creators, comes up with the best possible play in a given situation by comparing different possible outcomes using game theory.
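
This isn't DeepStack's code, but the core primitive inside CFR-style solvers, regret matching, is simple enough to sketch: on each iteration the player mixes over actions in proportion to accumulated positive regret, and the average strategy improves over time. The toy example below applies it to rock-paper-scissors against a fixed, exploitable opponent mix (all numbers are illustrative); CFR+ extends the same idea to counterfactual regrets computed across an entire game tree.

```python
import numpy as np

# Regret matching: play each action in proportion to its accumulated
# positive regret; the *average* strategy converges toward a best response.
PAYOFF = np.array([[0, -1, 1],     # rock-paper-scissors payoffs
                   [1, 0, -1],     # for the row player
                   [-1, 1, 0]], dtype=float)
ACTIONS = ["rock", "paper", "scissors"]

opponent = np.array([0.4, 0.3, 0.3])   # fixed (exploitable) opponent mix
regret_sum = np.zeros(3)
strategy_sum = np.zeros(3)

def current_strategy(regrets):
    positive = np.maximum(regrets, 0.0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(3, 1.0 / 3.0)

for _ in range(10_000):
    strategy = current_strategy(regret_sum)
    strategy_sum += strategy
    action_values = PAYOFF @ opponent       # value of each pure action
    strategy_value = strategy @ action_values
    regret_sum += action_values - strategy_value  # regret for not playing each action

avg_strategy = strategy_sum / strategy_sum.sum()
print(dict(zip(ACTIONS, avg_strategy.round(3))))
```

Run long enough, the average strategy concentrates on paper, the best response to this particular opponent; against a learning opponent, the same update drives both players toward equilibrium.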

By itself, CFR+ would still run into the same problem of the computing bottleneck in trying to calculate all possible plays. But DeepStack gets around this by only having the CFR+ algorithm solve for a few moves ahead instead of all possible moves until the end of the game. For all the other possible moves, DeepStack turns to its own version of intuition that is equivalent to a gut feeling about the value of the hidden cards held by both poker players. To train DeepStack's intuition, researchers turned to deep learning.

Deep learning enables AI to learn from example by filtering huge amounts of data through multiple layers of artificial neural networks. In this case, the DeepStack team trained their AI on the best solutions of the CFR+ algorithm for random poker situations. That allowed DeepStack's intuition to become a fast approximate estimate of its best solution for the rest of the game without having to actually calculate all the possible moves.
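
In outline, that training step is ordinary supervised regression: situations go in, solver-computed values come out, and a network learns to approximate the mapping. The sketch below is only a stand-in for the idea, with synthetic features and targets in place of real poker situations and CFR+ solutions, and a deliberately tiny two-layer network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 'situations' are feature vectors, 'targets' are the
# values a solver (CFR+ in DeepStack's case) assigned to those situations.
X = rng.normal(size=(5000, 16))
true_w = rng.normal(size=16)
y = np.tanh(X @ true_w) + 0.05 * rng.normal(size=5000)

# Tiny two-layer network trained with plain full-batch gradient descent on MSE.
W1 = rng.normal(scale=0.1, size=(16, 32))
b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=32)
b2 = 0.0
lr = 0.01

for step in range(2000):
    h = np.tanh(X @ W1 + b1)          # hidden layer
    pred = h @ W2 + b2                # predicted value
    err = pred - y
    loss = np.mean(err ** 2)

    # Backpropagation of the mean-squared-error loss.
    g_pred = 2 * err / len(y)
    g_W2 = h.T @ g_pred
    g_b2 = g_pred.sum()
    g_h = np.outer(g_pred, W2) * (1 - h ** 2)
    g_W1 = X.T @ g_h
    g_b1 = g_h.sum(axis=0)

    W1 -= lr * g_W1
    b1 -= lr * g_b1
    W2 -= lr * g_W2
    b2 -= lr * g_b2

print(f"final training MSE: {loss:.4f}")
```

At play time, an estimator trained this way replaces exhaustive search below the lookahead depth, which is what keeps the per-decision computation small enough for a single GPU.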

"DeepStack presents the right marriage between imperfect-information solvers and deep learning," Bowling says.

But the success of the deep learning component surprised Bowling. He thought the challenge would prove too tough even for deep learning. His colleagues Martin Schmid and Matej Moravcik, both first authors on the DeepStack paper, were convinced that the deep learning approach would work. They ended up making a private bet on whether or not the approach would succeed. ("I owe them a beer," Bowling says.)

DeepStack proved its poker-playing prowess in 44,852 games played against 33 poker pros recruited by the International Federation of Poker from 17 countries. Typically researchers would need to have their computer algorithms play a huge number of poker hands to ensure that the results are statistically significant and not simply due to chance. But the DeepStack team used a low-variance technique called AIVAT that filters out much of the chance factor and enabled them to come up with statistically significant results with as few as 3,000 games.

"We have a history in the group of doing variance reduction techniques," Bowling explains. "This new technique was pioneered in our work to help separate skill and luck."

Of all the players, 11 poker pros completed the requested 3,000 games over a period of four weeks from November 7 to December 12, 2016. DeepStack handily beat 10 of the 11 with a statistically significant victory margin, and still technically beat the 11th player. DeepStack's victory as analyzed by AIVAT was 486 milli-big-blinds per game (mbb/g). That's quite a showing, given that 50 mbb/g is considered a sizable margin of victory among poker pros. This victory margin also amounted to over 20 standard deviations from zero in statistical terms.

News of DeepStack's success is just the latest blow to human poker-playing egos. A Carnegie Mellon University AI called Libratus achieved its statistically significant victory against four poker pros during a marathon tournament of 120,000 games total played in January 2017. That heavily publicized event led some online poker fans to fret about the possible death of the game at the hands of unbeatable poker bots. But to achieve victory, Libratus still calculated its main poker-playing strategy ahead of time based on abstracted game solving, a computer- and time-intensive process that required 15 million processor-core hours on a new supercomputer called Bridges.

Worried poker fans may have even greater cause for concern with the success of DeepStack. Unlike Libratus, DeepStack's remarkably effective forward-looking intuition means it does not have to do any extra computing beforehand. Instead, it always looks forward by solving for actual possible plays several moves ahead and then relies on its intuition to approximate the rest of the game.

This continual re-solving approach, which can take place at any given point in a game, is a step beyond the endgame solver that Libratus used only during the last betting rounds of each game. And the fact that DeepStack's approach works on the hardware equivalent of a gaming laptop could mean the world will see the rise of many more capable AI bots tackling a wide variety of challenges beyond poker in the near future.

"It does feel like a breakthrough of the sort that changes the types of problems we can apply this to," Bowling says. "Most of the work of applying this to other problems becomes whether we can get a neural network to apply this to other situations, and I think we have experience with using deep learning in a whole variety of tasks."


13 ways AI will change your life – TNW

Posted: at 1:16 am

From helping you take care of email to creating personalized online shopping experiences, AI promises to transform the way we live and work.

But with all the hype out there, how do we know which benefits we'll actually see? In order to learn more, I asked a few members of YEC the following question:


What is the top benefit you predict emerging from AI, and do you think the overall benefits will live up to the hype?

The greatest benefit of AI, which is already emerging, is the elimination of repetitive tasks. From chatbots that can free up human staffers' time to work on more complex issues, to scheduling AIs like x.ai that eliminate the need to schedule meetings, AI will ultimately help humans spend more time focusing on creative and high-mental-effort activities. – Brittany Hodak, ZinePak

I think the benefits of deeper personalization, in terms of the ability to understand what each customer really wants and is interested in, can be achieved through AI over time. It will live up to the hype because it's already being used to some degree to illustrate how personalization is possible and how AI saves considerable time in getting to a deeper level of understanding of each customer. – Angela Ruth, Due

AI will save companies considerable time by doing tasks and collecting data, as well as providing decisions based on that data much faster than human beings can. It seems quite possible that AI has the capability of doing so much more than we can on many levels. It's an exciting time to watch the changes that AI brings. – Murray Newlands, Sighted

AI will enable us to interact with information as if we're interacting with a knowledgeable individual. We won't have to look at a screen to learn about anything; we can simply converse with AI. Siri is already a reliable personal assistant when it comes to setting reminders, alarm clocks, sending texts, etc. AI will make it possible for us to do virtually anything with voice commands. – Andrew Namminga, Andesign

The biggest change that's coming is the move from humans using software as a tool, to humans working with software as team members. Software will monitor things, alert humans, and execute basic tasks without human intervention. This will free human time for the really creative or interesting tasks and greatly improve business. AI is going to have a much larger impact than the hype. – Brennan White, Cortex

I think the greatest advantage of AI is the automation of tasks that will free up employees to focus on strategic initiatives. On the other hand, I don't think it will be as big as predicted. There are still too many tasks that need a human touch to make them successful. We'll see great benefit from AI in the more mundane areas, but you'll always need the human brain for some tasks. – Nicole Munoz, Start Ranking Now

One of the top benefits will be the emergence of personalized medicine. Rather than a one-size-fits-all approach, doctors will be able to tailor treatment on an individual basis and prescribe the right treatments and procedures based on your medical history. As far as living up to the hype, yes, definitely. Though as with many new technologies, it's more a question of when rather than if. – Kevin Yamazaki, Sidebench

No, tomorrow's AI won't live up to the hype. Freeing ordinary folks from repetitive tasks and giving them personal assistants only allows people to busy themselves with other, more complex tasks. The resulting productivity will mark incremental gains for business owners, but nothing on par with the digital revolution and the industrial one before it. For that, we'll have to wait for the robots. – Manpreet Singh, TalkLocal

With each wave of technology advancement, the quality of life for the world overall has increased. With AI, we will have better personalized healthcare, more efficient energy use, enhanced food production capabilities, improved jobs with less mundane work, and more. People will lead longer and higher-quality lives. – Adelyn Zhou, TOPBOTS

I believe it will be more like the science fiction movies, where we will maintain and work with the machines that do the work. However, these jobs will come with a level of prestige, as most people will probably live off a government-sponsored socialism system. With AI and automation replacing so many jobs in the next 20 years, we will have to change social systems in order to adapt. – Andy Karuza, FenSens

While AI is critical for self-driving cars, the military, commerce, AI-driven SEO, and gaming, it's poised to make the most human impact in medicine and human behavior. Imagine the UN leveraging neural networks and deep learning to discover what helps some communities thrive and others fall behind. Those lessons can then be leveraged by community builders, city planners, grants, and projects. – Gideon Kimbrell, InList Inc

Artificial intelligence-based home automation is the future. If everyone in the United States installed Nest or a similar smart thermostat, they would collectively save hundreds of millions of dollars annually in wasted energy, since Nest is able to learn when people are or are not home. Nest and others automatically adjust the temperature, saving on energy use and costs. – Kristopher Jones, LSEO.com

Artificial intelligence will do wonders to help automate processes that today take time and manual labor but don't contribute much to the bottom line or to moving forward as a company. Automation will allow additional time and resources to be dedicated to what companies need to focus their energy on: customer experience. – Andrew Kucheriavy, Intechnic


Toyota’s billion-dollar AI research center has a new self-driving car – The Verge

Posted: at 1:16 am

The Toyota Research Institute (TRI) showed its first self-driving car this week, a Lexus LS 600hL test vehicle equipped with LIDAR, radar, and camera arrays to enable self-driving without relying too heavily on high-definition maps.

The vehicle is the base for two of TRI's self-driving research paths: Chauffeur and Guardian. Chauffeur is research into Level 4 self-driving, where the car is restricted to certain geographical areas like a city or interstates, as well as Level 5 autonomy, which would work anywhere. Guardian is a driver-assist system that monitors the environment around the vehicle, alerting the driver to potential hazards and stepping in to assist with crash avoidance when necessary.

Toyota thinks Guardian's research will be deployed more quickly than Chauffeur's. Similar tech is available in many cars today in safety features like automatic emergency braking.

The car is part of a billion-dollar investment in TRI that Toyota announced in late 2015; the institute has a mandate to develop AI technologies for autonomous cars and robot helpers for the home. It has its headquarters near Stanford in California and satellite facilities near MIT in Massachusetts and near the University of Michigan campus in Ann Arbor.


AI won’t kill you, but ignoring it might kill your business, experts say … – Chicago Tribune

Posted: at 1:16 am

Relax. Artificial intelligence is making our lives easier, but won't be a threat to human existence, according to a panel of practitioners in the space.

"One of the biggest misconceptions today about autonomous robots is how capable they are," said Brenna Argall, faculty research scientist at the Rehabilitation Institute of Chicago, during a Chicago Innovation Awards eventWednesday.

"We see a lot of videos online showing robots doing amazing things. What isn't shown is the hours of footage where they did the wrong thing," she said. "The reality is that robots spend most of their time not doing what they're supposed to be doing."

The event at Studio Xfinity drew about 200 people, who mingled among tech exhibits before contemplating killer robot overlords.

Stephen Pratt, a former IBM employee who was then responsible for the global implementation of Watson, also was quick to swat down the notion that machines are poised to run the world.

The tech instead gives better ways to improve services, products and business, he said, besting humans in applications dealing with demand predictions, pricing, inventory, retail promotion, logistics and preventive maintenance.

"Amplifying human intelligence, and overcoming human cognitive biases I think that's where it fits," said Pratt, founder and CEO of business consultancy Noodle.ai. "Humans are really bad probabilistic thinkers and statisticians. That's where cognitive bias creeps in and, therefore, inefficiencies and lost profit."

But machines won't replace humans when it comes to big-picture decisions, he said.

"Those algorithms are not going to set the strategy for the company. It'll help you make the decision once I come up with the idea," Pratt said. "But any executive that doesn't have a supercomputer in the mix now on their side and they're stuck in the spreadsheet era your jobs are going to be in jeopardy in a few years."

It'll be up to machines to decipher those spreadsheets anyway, as so much data is being collected it would be overwhelming for humans to understand, said Kris Hammond, co-founder of Chicago AI company Narrative Science.

"We're no longer looking at a world with a spreadsheet with 20 columns and 50 rows. We're now looking at spreadsheets of thousands of columns and millions of rows," said Hammond, founder of the University of Chicago's Artificial Intelligence Laboratory. "The only way we can actually understand what's going on in the world is to have systems that look at that data, understand what they mean and then turn it into something we can understand."

Mike Shelton, technical director for Microsoft's Azure Data Services, said it's also a time saver.

"What I see every day is it's giving time back," he said. "Through an AI interface, I can ask a question in speech or text and get a response through that without having to go search for a web page or hunt for information."

Julie Friedman Steele, CEO of the World Future Society, said her organization is focusing on the advances that could be made using AI in education, where teachers in crowded classrooms can't give much attention to students individually.

"As a human, can you actually learn all the knowledge that you might have a student interested in learning?" said Steele, who's also CEO and founder of The 3D Printer Experience. "I'm not talking about there not being a human in the room and it's all robots. I'm just saying that there's an opportunity in education with artificial intelligence so that if a teacher doesn't know something, it's OK."

Cheryl V. Jackson is a freelance writer. Twitter: @cherylvjackson

