Category Archives: Ai

RISE OF THE MACHINES: AI computers learn to code THEMSELVES in major development – Express.co.uk

Posted: March 7, 2017 at 10:20 pm

Microsoft and Cambridge University have teamed up to create AI software which has the ability to write code for itself.

The sophisticated machine, known as DeepCoder, can solve programming problems by piecing together lines of code taken from existing programs.

The research paper from the two institutions says the development is a huge step towards more powerful AI and will also make it much easier for people to develop programs.

The paper reads: "We have found several problems in real online programming challenges that can be solved with a program in our language."
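
The phrase "a program in our language" refers to a small domain-specific language (DSL) of list-manipulating operations that DeepCoder searches over, with a neural network predicting which operations are likely to appear in a solution. As a rough, hedged illustration of the search half of that idea only (the tiny DSL and names below are invented for this sketch, and the neural guidance is omitted), a brute-force synthesizer that looks for a program consistent with a few input-output examples might look like this:

```python
from itertools import product

# A toy DSL of list-to-list operations (invented for illustration only).
DSL = {
    "sort":    sorted,
    "reverse": lambda xs: list(reversed(xs)),
    "double":  lambda xs: [2 * x for x in xs],
    "drop1":   lambda xs: xs[1:],
}

def run(program, xs):
    """Apply a sequence of DSL operations to an input list."""
    for op in program:
        xs = DSL[op](xs)
    return xs

def synthesize(examples, max_len=3):
    """Return the first program (a tuple of op names) consistent with all
    input-output examples, trying shorter programs first."""
    for length in range(1, max_len + 1):
        for program in product(DSL, repeat=length):
            if all(run(program, inp) == out for inp, out in examples):
                return program
    return None

# Ask for a program that sorts a list and doubles every element.
print(synthesize([([3, 1, 2], [2, 4, 6]), ([5, 4], [8, 10])]))
# -> ('sort', 'double')
```

DeepCoder's contribution is to make this kind of search tractable on harder problems by learning, from many solved examples, which DSL operations to try first.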

"A dream of artificial intelligence is to build systems that can write computer programs.

Coding has been described as one of the most important skills of the future, and a recent survey from job-market analytics firm Burning Glass found that as many as seven million job openings in 2015 required some form of coding skill.

But with AI now having the ability to code itself, it could put many budding coders out of work.

A recent report from the United Nations (UN) revealed that AI is set to displace millions of workers across the globe as scientists race towards building machines with human-level intelligence.

While many firms will welcome the news of free labour that will be more efficient than humans, it will leave many people worried about their economic future.

The UN report warns that people in the developing world are most at risk of losing their jobs to disruptive technologies, and the study states that the process is already in full swing.

See original here:

RISE OF THE MACHINES: AI computers learn to code THEMSELVES in major development - Express.co.uk

Posted in Ai | Comments Off on RISE OF THE MACHINES: AI computers learn to code THEMSELVES in major development – Express.co.uk

IBM, Salesforce Strike Global Partnership on Cloud, AI – Fortune

Posted: at 10:20 pm

Are two clouds better than one?

How about two nerdily-named artificial intelligence platforms?

According to IBM and Salesforce, the answer to both of those questions is yes.

The two Fortune 500 companies on Monday afternoon revealed a sweeping global strategic partnership that aligns one iconic company's multiyear turnaround effort with another's staggering growth ambitions. According to the terms of the deal, IBM and Salesforce will integrate their artificial intelligence platforms (Watson and Einstein, respectively) and some of their software and services (e.g. a Salesforce component to ingest The Weather Company's meteorological data). IBM will also deploy Salesforce Service Cloud internally in a sign of goodwill.

Why not go it alone? Fortune spoke on the phone with IBM CEO Ginni Rometty and Salesforce CEO Marc Benioff to get a better understanding of the motives behind the deal. What follows is a transcript of that conversation, edited and condensed.

Fortune: Hi, guys. So what's this all about?

Benioff: It's great to connect with you again. Artificial intelligence is really accelerating our customers' success and they're finding tremendous value in this new technology. The spring release of Salesforce Einstein has opened our eyes to what's possible. We now have thousands of customers who have deployed this next-generation artificial intelligence capability. I'll tell you, specifically with our Sales Cloud customers, it creates this incredible level of productivity. Sales executives are way more productive than ever before; the ability to do everything from lead scoring to opportunity insights really opened my eyes that this is possible. So the more value in artificial intelligence we can provide our customers, the more successful they'll be, which is why we're doing this relationship with IBM.

We're able to give our customers the incredible capabilities of not only Einstein but Watson. When you look at the industries we cater to (retail, financial services, healthcare), the data and insights that Watson can provide our customers are really incredible. And we're also thrilled that IBM has agreed to use Salesforce products internally as well. This is really taking our relationship to a whole new level.

Rometty: Andrew, thank you for taking the time. This announcement is both strategic and significant. I do think it's really going to take AI further into the enterprise. I think about 2017 as the year when we're going to see AI hit the world at scale. It's the beginning of an era that's going to run a decade or decades in front of us. Marc's got thousands of clients; by the end of this year we'll have a billion people touched by Watson. We both share that vision. An important part of it is the idea that every professional can be aided by this kind of technology. It takes all the noise and turns it into something on which they can take action. It isn't just a sales process; we're going to link other processes across a company. We're talking about being able to augment what everyone does: augment human intelligence. Together, this will give us the most complete understanding of a customer anywhere.

For our joint customers, to me, this is a really big deal. Take an insurance company; Marc's got plenty of them as clients. You link to insights around weather, hook that into a particular region, tell people to move their cars inside because of hail. You might even change a policy. These two things together do really allow clients to be differentiated.

This is the beginning of a journey together.

I thought this was the brainiest deal I've ever heard of, with Watson and Einstein together.

Rometty: It's good comedy.

Like any two large tech companies, you compete in some areas and collaborate in others: frenemies. Why did you engage in this partnership? Any executive asks themselves: build, buy, or partner. Why partner this time?

Benioff: I'll give you my honest answer here, which is that I've always been a huge fan of IBM; Ginni knows that. When I look at pioneering values in business, companies that have done it right and really stuck to their principles over generations, I really look to IBM as a company that has deeply inspired me personally as I built Salesforce over the last 18 years. We're going to be 18 years old on March 8th. When I look at what we've gone through in the last two decades, I really think that it's our values that have guided us and how those values have been inspired by many of the things at IBM.

Number two is, Ginni made a strategic decision to acquire Bluewolf, which is a company that we had worked very hard to nurture and incubate over a very long period of time. It really demonstrated to me that the opportunity to form a strategic relationship with IBM was possible. We both have this incredible vision for artificial intelligence, but we're coming at it from very different areas. [Salesforce is] coming at it from a declarative standpoint, expressed through our platform, for our customer relationship management system. IBM's approach is pioneering, especially when it comes to key verticals like retail or finance or healthcare. These are complements. These are the best of both worlds for artificial intelligence. These are the two best players coming together. We have almost no overlap in our businesses. We have really a desire to make our customers more successful.

Rometty: Beautifully said. And I'll only add a couple of points. We share values not only as companies but also in terms of how we look at our customers. We share over 5,000 joint clients. But more importantly, think about this era of AI. There are different approaches you can take. What Marc's done with Einstein: think of it as CRM as a process. What we've done with Watson: think of it as an industry in depth. We do have very little overlap. The reason we talk about Watson as a platform is that it is meant to be integrated with things like what Marc's doing.

Let me ask you about AI. It's been in development for decades, but the current wave is nascent. How do you each see AI as part of the success of your companies? It's a capability; no one goes to the store to buy AI. Hopefully it solves their problems. But AI can be anything.

Rometty: I view AI as fundamental to IBM. Watson on the IBM cloud: that's a fundamental and enduring platform. We've built platforms for ourselves before: mainframe, middleware, managed services. This is now the era of AI. It will be a silver thread through all of what IBM does.

Is it fair to say that you guys aren't trying to compete on AI? I don't mean between you; I mean within the greater industry.

Rometty: We're absolutely complementary. Clients will make some architectural decisions here. Everyone's gonna pick some platforms to use. They will pick them around AI. By the way, there are stages: the most basic is machine learning, then AI, then cognitive [computing]. What we're doing with Marc goes all the way into cognitive. Just to be clear.

Benioff: I could not agree more. We brought our customers into the cloud, then into the social world, then into the mobile world. Now we're bringing them into the AI world.

This is really beyond my wildest dreams in terms of what's possible today. And by the waythat we're able to replace Microsoft's products [at IBM] is a bonus for us. (laughs)

Read more here:

IBM, Salesforce Strike Global Partnership on Cloud, AI - Fortune

Posted in Ai | Comments Off on IBM, Salesforce Strike Global Partnership on Cloud, AI – Fortune

AI: the promise of big data – ZDNet

Posted: at 10:20 pm


You'll be aware of the recent explosion of interest in AI, and most likely have seen it in action too, in the form of smart assistants on phones, desktops (think Cortana) and elsewhere. Pretty soon, the ability of machines to achieve a greater ...

More:

AI: the promise of big data - ZDNet

Posted in Ai | Comments Off on AI: the promise of big data – ZDNet

Why AI will determine the future of fintech – TNW

Posted: at 10:20 pm

More investors are setting their sights on the financial technology (Fintech) arena. According to consulting firm Accenture, investment in Fintech firms rose by 10 percent worldwide to the tune of $23.2 billion in 2016.

China is leading the charge after securing $10 billion in investments across 55 deals, which account for 90 percent of investments in Asia-Pacific. The US came second, taking in $6.2 billion in funding. Europe also saw an 11 percent increase in deals despite Britain seeing a decrease in funding due to the uncertainty from the Brexit vote.

The excitement stems from the disruption of traditional financial institutions (FIs) such as banks, insurance, and credit companies by technology. The next unicorn might be among the hundreds of tech startups that are giving Fintech a go.

What exactly is going to be the next big thing has yet to be determined, but other developments in computing like artificial intelligence (AI) may play a huge part.

The growing reality is that, while opportunities abound, competition is also heating up.

Take, for example, the number of Fintech startups that aim to digitize routine financial tasks like payments. In the US, the digital wallet and payments segment is fiercely competitive. Pioneers like PayPal see themselves being taken on by other tech giants like Google and Apple, by niche-oriented ventures like Venmo, and even by traditional FIs.

Some ventures are seeing bluer oceans by focusing on local and regional markets where conditions are somewhat favorable.

The growth of China's Fintech was largely made possible by the relative age of its current banking system. It was easier for people to use mobile and web-based financial services such as Alibaba's Ant Financial and Tencent since phones were more pervasive and more convenient to access than traditional financial instruments.

In Europe, the new Payment Services Directive (PSD2), set to take effect in 2018, has busted the game wide open. Banks are obligated to open up their application program interfaces (APIs), enabling Fintech apps and services to tap into users' bank accounts. The line between banks and fintech companies is set to blur, so just about everyone in finance is set to compete with old and new players alike.
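
In concrete terms, a PSD2-style integration means a licensed third-party app can, with the user's consent, read account data over a bank's published REST API. A minimal sketch of the shape such a call takes is below; the base URL, paths, response fields, and token handling are hypothetical placeholders (each bank publishes its own API specification), not any real bank's interface:

```python
import requests

# Hypothetical PSD2-style account-information request. The URL, paths, and
# response fields are placeholders, not a real bank's API.
BANK_API = "https://api.examplebank.test/open-banking/v1"
ACCESS_TOKEN = "user-consented-oauth-token"  # obtained via the bank's OAuth consent flow

def list_transactions(account_id):
    """Fetch recent transactions for an account the user has agreed to share."""
    resp = requests.get(
        f"{BANK_API}/accounts/{account_id}/transactions",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["transactions"]
```

A budgeting or payments app would then build its features on top of records returned this way, which is exactly the blurring of lines between banks and fintech companies that the directive anticipates.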

Convenience has become such a fundamental selling point that a number of Fintech ventures have zeroed in on delivering better user experiences for an assortment of financial tasks such as payments, budgeting, banking, and even loan applications.

There is a mad scramble among companies to leverage cutting-edge technologies for competitive advantage. Even established tech companies like e-commerce giant Amazon had to give due attention to mobile as users shift their computing habits towards phones and tablets. Enterprises are also working on transitioning to cloud computing for infrastructure.

But where do more advanced technologies such as AI come in?

The drive to eliminate human fallibility has also pushed artificial intelligence (AI) to the forefront of research and development. Its applications range from sorting what gets shown on your social media newsfeed to self-driving cars. It's also expected to have a major impact in Fintech because of the potential for game-changing insights that can be derived from the sheer volume of data humanity is generating. Enterprising ventures are banking on it to expose gaps in a market that competition has made increasingly narrow.

AI and finance are no strangers to each other. Traditional banking and finance have relied heavily on algorithms for automation and analysis. However, these were exclusive only to large and established institutions. Fintech is being aimed at empowering smaller organizations and consumers, and AI is expected to make its benefits accessible to a wider audience.

AI has a wide variety of consumer-level applications for smarter and more error-free user experiences. Personal finance applications now use AI to balance people's budgets in ways tailored specifically to a user's behavior. AI also powers robo-advisors that guide casual traders in managing their stock portfolios.

For enterprises, AI is expected to continue serving functions such as business intelligence and predictive analytics. Merchant services such as payments and fraud detection are also relying on AI to seek out patterns in customer behavior in order to weed out bad transactions.
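
As one heavily simplified illustration of that pattern-finding, an unsupervised anomaly detector can flag transactions that look unlike a customer's usual behaviour. The features and figures below are invented for this sketch; production fraud systems combine far more signals and typically supervised models as well:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one past transaction: [amount, hour_of_day, merchant_category] (toy features).
history = np.array([
    [12.50, 9, 5411],
    [43.00, 13, 5812],
    [8.75, 18, 5411],
    [60.00, 20, 5812],
])

# Fit on the customer's past behaviour, then score incoming transactions.
detector = IsolationForest(contamination=0.1, random_state=0).fit(history)

new_txns = np.array([
    [15.00, 12, 5411],   # an ordinary-looking purchase
    [2500.00, 3, 7995],  # a large 3 a.m. charge in an unusual category
])
print(detector.predict(new_txns))  # 1 = looks normal, -1 = flagged for review
```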

People may soon have very little excuse for not having a handle on their money because of these services.

Despite the exciting potential AI brings, there are still caveats. A big challenge for Fintech is to develop AI to be as smart as it can be. There will be no shortage of people who will try to game and outwit such systems.

While AI seeks to eliminate human error, the flip side, losing the human touch, is a common criticism of AI. Smart money decisions are best made through numbers and logic. However, people do have an emotional connection with their money, so it will be a challenge for Fintech apps to create experiences that do not alienate their users. Take the sad stories of insurance claims being denied due to strict algorithms that disregard the nuances of the human condition. AI still has a way to go in factoring what is just and moral into its decision making.

As for finance as a field and industry, there is also the issue of financial analysts, advisors, bankers, and traders being threatened with obsolescence by AI. A running joke about AI alludes to the Terminator movie franchise, where AI seeks to eliminate humanity from existence. Unemployment, however, is rarely a laughing matter.

With the stiff competition in Fintech, ventures have to deliver truly valuable products and services in order to stand out. The venture that provides the best user experience often wins, but finding this X factor has become increasingly challenging.

The developments in AI may provide that something extra, especially if they can promise to take the guesswork and human error out of finance. It's for these reasons that AI might just hold the key to further Fintech innovation.

This post is part of our contributor series. It is written and published independently of TNW.

The rest is here:

Why AI will determine the future of fintech - TNW

Posted in Ai | Comments Off on Why AI will determine the future of fintech – TNW

What Would an AI Doomsday Actually Look Like? – Futurism

Posted: at 10:20 pm

Imagining AI's Doomsday

Artificial intelligence (AI) is going to transform the world, but whether it will be a force for good or evil is still subject to debate. To that end, a team of experts gathered for Arizona State University's (ASU) "Envisioning and Addressing Adverse AI Outcomes" workshop to talk about the worst-case scenarios we could face if AI veers towards becoming a serious threat to humanity.

"There is huge potential for AI to transform so many aspects of our society in so many ways. At the same time, there are rough edges and potential downsides, like any technology," says AI scientist Eric Horvitz.

As an optimistic supporter of everything AI has to offer, Horvitz has a very positive outlook about the future of AI. But he's also pragmatic enough to recognize that for the technology to consistently advance and move forward, it has to earn the trust of the public. For that to happen, all possible concerns surrounding the technology have to be discussed.

That conversation, specifically, was what the workshop hoped to tackle. Forty scientists, cyber-security experts, and policy-makers were divided into two teams to hash out the numerous ways AI could cause trouble for the world. The red team was tasked with imagining all the cataclysmic scenarios AI could incite, and the blue team was asked to devise solutions to defend against such attacks.

These situations had to be realistic rather than purely hypothetical, anchored in what's possible given our current technology and what we expect to come from AI over the next few decades.

Among the scenarios described were automated cyber attacks (wherein a cyber weapon is intelligent enough to hide itself after an attack and prevent all efforts to destroy it), stock markets being manipulated by machines, self-driving technology failing to recognize critical road signs, and AI being used to rig or sway elections.

Not all scenarios were given sufficient solutions either, illustrating just how unprepared we are at present to face the worst possible situations that AI could bring. For example, in the case of intelligent, automated cyber attacks, it would apparently be quite easy for attackers to use unsuspecting internet gamers to cover their tracks, using something like an online game to obscure the attacks themselves.

As entertaining as it may be to think up all of these wild doomsday scenarios, it's actually a deliberate first step towards real conversations and awareness about the threat that AI could pose. John Launchbury, from the US Defense Advanced Research Projects Agency (DARPA), hopes it will lead to concrete agreements on rules of engagement for cyber war, automated weapons, and robot troops.

The purpose of the workshop, after all, isn't to incite fear, but to realistically anticipate the various ways the technology could be misused and, hopefully, get a head start on defending ourselves against it.

Here is the original post:

What Would an AI Doomsday Actually Look Like? - Futurism

Posted in Ai | Comments Off on What Would an AI Doomsday Actually Look Like? – Futurism

Search Earth with AI eyes via a powerful new satellite image tool – CNET

Posted: at 10:20 pm

A GeoVisual search for baseball stadiums in the lower 48.

Want to know where all the wind and solar power supplies in the US are for some brilliant renewable-energy project? Or plot a round-the-world trip hitting every major soccer stadium along the way? It should be possible with a new tool that lets anyone scan the globe through AI "eyes" to instantly find satellite images of matching objects.

Descartes Labs, a New Mexico startup that provides AI-driven analysis of satellite images to governments, academics and industry, on Tuesday released a public demo of its GeoVisual Search, a new type of search engine that combines satellite images of Earth with machine learning on a massive scale.

The idea behind GeoVisual is pretty simple. Pick an object anywhere on Earth that can be seen from space, and the system returns a list of similar-looking objects and their locations on the planet. It's cool to play with, which you can do at the Descartes site here. A short search for wind turbines had me dreaming of a family road trip where every pit stop was sure to include kite-flying for the kids.

Perhaps this sounds just like Google Earth to you, but keep in mind that tool just allows you to find countless geotagged locations around the world. GeoVisual Search actually compares all the pixels making up huge photos of the world to find matching objects as best it can, an ability that hasn't been available to the public before on a global scale.
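
Descartes Labs hasn't published GeoVisual Search's internals in this article, but the general recipe for visual similarity search is well established: cut the imagery into tiles, run each tile through a convolutional network to get a feature vector, and return the tiles whose vectors lie closest to the query tile's. A minimal sketch of that idea follows; the choice of model and the tile handling are assumptions for illustration, not Descartes' actual pipeline:

```python
import torch
from torchvision import models, transforms

# A pretrained CNN used purely as a feature extractor (classification head removed).
backbone = models.resnet18(pretrained=True)
backbone.fc = torch.nn.Identity()
backbone.eval()

prep = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def embed(tile):
    """Map one map tile (a PIL image) to a 512-dimensional feature vector."""
    with torch.no_grad():
        return backbone(prep(tile).unsqueeze(0)).squeeze(0)

def most_similar(query_tile, candidate_tiles, k=10):
    """Indices of the k candidate tiles whose embeddings are closest to the query's."""
    q = embed(query_tile)
    feats = torch.stack([embed(t) for t in candidate_tiles])
    scores = torch.nn.functional.cosine_similarity(feats, q.unsqueeze(0))
    return scores.topk(k).indices.tolist()
```

At Descartes' scale the embeddings would likely be precomputed for billions of tiles and served from an approximate nearest-neighbour index rather than compared one by one as they are here.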

Mark Johnson, Descartes Lab CEO

Fun as it is, the tool also gives the public a taste of Descartes' broader work, which so far has focused largely on agricultural datasets that can do things like analyze crop yields.

"The goal of this launch is to show people what's possible with machine learning. Our aim is to use this data to model complex planetary systems, and this is just the first step," CEO and co-founder Mark Johnson said via email. "We want businesses to think about how new kinds of data will help to improve their work. And I'd like everyone to think about how we can improve our life on this planet if we better understood it."

The tool's not perfect. I tried searching for objects that look similar to a large coal mine and power plant here in northern New Mexico and ended up with a list of mostly similar-looking lakes and bridges. Searching for locations similar to the launch pads at Cape Canaveral returned an odd assortment of landscapes that seemed to have nothing in common besides a passing resemblance to concrete surfaces.

The algorithm can easily mistake a whole lot of coal for a whole lot of water.

"Though this is a demo, GeoVisual Search operates on top of an intelligent machine-learning platform that can be trained and will improve over time," Johnson said. "We've never taught the computer what a wind turbine is, it just determines what's unique about that image (i.e., the fact there is a wind turbine there) and automatically recognizes visually similar scenes."

Right now the demo relies on three different imagery sources that include more than 4 petabytes of data altogether. You can search in the most detail using the National Agriculture Imagery Program (NAIP) data for the lower 48 United States because it has the highest resolution of one meter per pixel, making it possible to spot orchards, solar farms and turbines, among other objects.

Four-meter imagery is available for China that makes it possible to recognize slightly larger things like stadiums. For the rest of the world, Descartes uses 15-meter resolution images from Landsat 8 that are more coarse but still allow for identification of larger-scale objects like pivot irrigation and suburbs.

"As a next step, we certainly want to start to understand specific objects and count them accurately through time," Johnson said. "At that point, we'll have turned satellite imagery into a searchable database, which opens up a whole new interface for dealing with planetary data."

Earth's recent changes, from space (pictures)

Descartes was spun out of Los Alamos National Lab (LANL) and co-founded by Steven Brumby, who spent over a decade working in information sciences for the lab. Near the start of his time at LANL, a massive wildfire nearly destroyed the lab and Brumby's home. More importantly, it sparked Brumby's interest in developing machine-learning tools to map the world's fires.

"At that time when we did the analysis (of satellite images of the fire's aftermath) it was pretty clear the fire had been catastrophic, but there was a lot of fuel left," Brumby told me when I visited Descartes' offices in Los Alamos last year.

When some of that remaining fuel burned in another big Los Alamos wildfire in 2011, Brumby says he was able to help out. During his time at LANL he was often called on for imagery analysis when disaster struck, from 9/11 to Hurricane Katrina and the breakup of the Space Shuttle Columbia. All those years of insight led to another Descartes project to analyze satellite imagery to better understand and perhaps even predict wildfires around the globe.

"You can use satellite imagery to warn you of stuff that's coming down the road and if you listen to it, you can be prepared for it," Brumby said.

Before and after the 2000 Cerro Grande fire, with the burn scar shown in bright red.

Brumby and Johnson spent the better part of an afternoon laying out the short- and long-term vision for Descartes Labs when I visited. In the short term, the company has been working in agriculture to better monitor crops, feed lots and other data sources.

"One of the things we're building with our current system is a continuously updating living map of the world, which is the platform I wish we had when we had to deal with some of these disasters back in the day," Brumby said.

Being able to check in on any part of the world in real time is one thing, but Descartes hopes to go further by applying artificial intelligence to see things in all those images that might not be immediately obvious to our eyes: the patterns that tie together all the activities captured in those countless pixels.

If a picture really is worth a thousand words, tools like the ones Descartes is developing could help write volumes about what our satellites are really seeing.

Read the rest here:

Search Earth with AI eyes via a powerful new satellite image tool - CNET

Posted in Ai | Comments Off on Search Earth with AI eyes via a powerful new satellite image tool – CNET

The Future Of AI With Alexy Khrabrov – Forbes

Posted: March 6, 2017 at 3:14 pm


Alexy Khrabrov doesn't just want to tell people about AI. He wants to show you, immerse you and get you as excited as he is. The founder and CEO of By the Bay and Chief Scientist at the Cicero Institute has made a career out of not only understanding ...

Read more:

The Future Of AI With Alexy Khrabrov - Forbes

Posted in Ai | Comments Off on The Future Of AI With Alexy Khrabrov – Forbes

More Bad News for Gamblers: AI Wins Again – HPCwire (blog)

Posted: at 3:14 pm

AI-based poker playing programs have been upping the ante for lowly humans. Notably, several algorithms from Carnegie Mellon University (e.g. Libratus, Claudico, and Baby Tartanian8) have performed well. Writing in Science last week, researchers from the University of Alberta, Charles University in Prague, and Czech Technical University report that their poker algorithm DeepStack is the first computer program to beat professional players in heads-up no-limit Texas hold'em poker.

Sorting through the firsts is tricky in the world of AI game-playing programs. What sets DeepStack apart from other programs, say the researchers, is its more realistic approach, at least in games such as poker where all factors are never fully known (think bluffing, for example). Heads-up no-limit Texas hold'em (HUNL) is a two-player version of poker in which two cards are initially dealt face down to each player, and additional cards are dealt face-up in three subsequent rounds. No limit is placed on the size of the bets, although there is an overall limit to the total amount wagered in each game.

"Poker has been a longstanding challenge problem in artificial intelligence," says Michael Bowling, professor in the University of Alberta's Faculty of Science and principal investigator on the study. "It is the quintessential game of imperfect information in the sense that the players don't have the same information or share the same perspective while they're playing."

"Using GTX 1080 GPUs and CUDA with the Torch deep learning framework, we train our system to learn the value of situations," says Bowling on an NVIDIA blog. "Each situation itself is a mini poker game. Instead of solving one big poker game, it solves millions of these little poker games, each one helping the system to refine its intuition of how the game of poker works. And this intuition is the fuel behind how DeepStack plays the full game."

"In the last two decades," write the researchers, "computer programs have reached a performance that exceeds expert human players in many games," e.g., backgammon, checkers, chess, Jeopardy!, Atari video games, and go. These successes all involve games with information symmetry, where all players have identical information about the current state of the game. "This property of perfect information is also at the heart of the algorithms that enabled these successes," they write.

"We introduce DeepStack, an algorithm for imperfect information settings," the paper states. "It combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition that is automatically learned from self-play using deep learning."

In total, 44,852 games were played by the thirty-three players, with 11 players completing the requested 3,000 games, according to the paper. Over all games played, DeepStack won 492 mbb/g, a result more than four standard deviations from zero and therefore highly significant. According to the authors, professional poker players consider 50 mbb/g a sizable margin. Using AIVAT to evaluate performance, the authors note that DeepStack was overall a bit lucky, with its estimated performance actually 486 mbb/g.

(For those of us less prone to take a seat at the Texas hold'em poker table: mbb/g stands for milli-big-blinds per game, the average winning rate over a number of hands measured in thousandths of big blinds. The big blind is the initial wager made by the non-dealer before any cards are dealt, and it is twice the size of the small blind, the initial wager made by the dealer before any cards are dealt.)
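
Put another way, the win rate is just the total number of big blinds won, divided by the number of hands played and scaled to thousandths. A quick back-of-the-envelope check of the figures above (a hedged illustration only; the paper's actual evaluation also applies the AIVAT variance-reduction technique):

```python
def mbb_per_game(big_blinds_won, hands_played):
    """Average win rate in milli-big-blinds per game (mbb/g)."""
    return 1000.0 * big_blinds_won / hands_played

# 492 mbb/g over 44,852 hands corresponds to roughly 0.492 big blinds
# per hand, i.e. about 22,000 big blinds of total profit.
print(mbb_per_game(22_067, 44_852))  # ~492 mbb/g
```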

It's an interesting paper. Game theory, of course, has a long history, and as the researchers note, the founder of modern game theory and computing pioneer, von Neumann, envisioned reasoning in games without perfect information. In von Neumann's words: "Real life is not like that. Real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do. And that is what games are about in my theory." One game that fascinated von Neumann was poker, where players are dealt private cards and take turns making bets or bluffing on holding the strongest hand, calling opponents' bets, or folding and giving up on the hand and the bets already added to the pot. Poker is a game of imperfect information, where players' private cards give them asymmetric information about the state of the game.

According to the paper, the DeepStack algorithm is composed of three ingredients: a sound local strategy computation for the current public state, depth-limited look-ahead using a learned value function to avoid reasoning to the end of the game, and a restricted set of look-ahead actions. At a conceptual level, these three ingredients describe heuristic search, which is responsible for many of AI's successes in perfect information games. Until DeepStack, no theoretically sound application of heuristic search was known in imperfect information games.

The researchers describe DeepStack's architecture as a standard feed-forward network with seven fully connected hidden layers, each with 500 nodes, and parametric rectified linear units for the output. The turn network was trained by solving 10 million randomly generated poker turn games. These turn games used randomly generated ranges, public cards, and a random pot size. The flop network was trained similarly with 1 million randomly generated flop games.
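
For readers who want to picture those dimensions, here is a hedged sketch of such a network written in PyTorch rather than the Torch/Lua framework the authors used. The real value network encodes both players' hand ranges and the pot size as input and emits counterfactual values as output; those encodings are simplified to placeholder dimensions here, and the activation placement loosely follows the description above rather than reproducing it exactly:

```python
import torch.nn as nn

class ValueNet(nn.Module):
    """Feed-forward value network: seven fully connected hidden layers of
    500 units with parametric ReLU activations, loosely following the
    DeepStack description. Input/output sizes are illustrative placeholders."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        layers, width = [], in_dim
        for _ in range(7):
            layers += [nn.Linear(width, 500), nn.PReLU()]
            width = 500
        layers.append(nn.Linear(width, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Placeholder dimensions for illustration only.
net = ValueNet(in_dim=2001, out_dim=2000)
```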

Link to paper: http://science.sciencemag.org/content/early/2017/03/01/science.aam6960.full

Link to NVIDIA blog: https://news.developer.nvidia.com/ai-system-beats-pros-at-texas-holdem/

Go here to see the original:

More Bad News for Gamblers: AI Wins Again - HPCwire (blog)

Posted in Ai | Comments Off on More Bad News for Gamblers: AI Wins Again – HPCwire (blog)

Astronomers Deploy AI to Unravel the Mysteries of the Universe – WIRED

Posted: at 3:14 pm

Astronomer Kevin Schawinski has spent much of his career studying how massive black holes shape galaxies. But he isn't into dirty work (dealing with messy data), so he decided to figure out how neural networks could do it for him. Problem is, he and his cosmic colleagues suck at that sophisticated kind of coding.

That changed when another professor at Schawinski's institution, ETH Zurich, sent him an email and CCed Ce Zhang, who actually is a computer scientist. You guys should talk, the email said. And they did: Together, they plotted how they could take leading-edge machine-learning techniques and superimpose them on the universe. And recently, they released their first result: a neural network that sharpens up blurry, noisy images from space. Kind of like those scenes in CSI-type shows where a character shouts "Enhance! Enhance!" at gas station security footage, and all of a sudden the perp's face resolves before your eyes.

Schawinski and Zhang's work is part of a larger automation trend in astronomy: Autodidactic machines can identify, classify, and, apparently, clean up their data better and faster than any humans. And soon, machine learning will be a standard digital tool astronomers can pull out, without even needing to grasp the backend.

In their initial research, Schawinski and Zhang came across a kind of neural net that, in one example, generated original pictures of cats after learning what cat-ness is from a set of feline images. "It immediately became clear," says Schawinski.

This feline-friendly system was called a GAN, or generative adversarial network. It pits two machine brains, each its own neural network, against each other. To train the system, they gave one of the brains a purposefully noisy, blurry image of a galaxy and then an unmarred version of that same galaxy. That network did its best to fix the degraded galaxy, making it match the pristine one. The second half of the network evaluated the differences between that fixed image and the originally OK one. In test mode, the GAN got a new set of scarred pictures and performed computational plastic surgery.
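
A compressed sketch of that two-network setup is below, assuming a conditional GAN trained on pairs of degraded and clean images. The layer sizes, loss weighting, and optimiser settings are illustrative choices, not the values from Schawinski and Zhang's paper:

```python
import torch
import torch.nn as nn

# Generator: tries to restore a degraded galaxy image to its clean form.
G = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

# Discriminator: tries to tell restored images apart from genuine clean ones.
D = nn.Sequential(
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(degraded, clean):
    """One adversarial update on a batch of (degraded, clean) image pairs."""
    real = torch.ones(clean.size(0), 1)
    fake = torch.zeros(clean.size(0), 1)

    # Discriminator: clean images should score as real, restored ones as fake.
    loss_d = bce(D(clean), real) + bce(D(G(degraded).detach()), fake)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the clean target.
    restored = G(degraded)
    loss_g = bce(D(restored), real) + 100 * nn.functional.l1_loss(restored, clean)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```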

Once trained up, the GAN revealed details that telescopes weren't sensitive enough to resolve, like star-forming spots. "I don't want to use a cliché phrase like holy grail," says Schawinski, "but in astronomy, you really want to take an image and make it better than it actually is."

When I asked the two scientists, who Skyped me together on Friday, what's next for their silicon brains, Schawinski asked Zhang, "How much can we reveal?" which suggests to me they plan to take over the world.

They went on to say, though, that they don't exactly know, short-term (or at least they're not telling). Long-term, "these machine learning techniques just become part of the arsenal scientists use," says Schawinski, in a kind of ready-to-eat form. "Scientists shouldn't have to be experts on deep learning and have all the arcane knowledge that only five people in the world can grapple with."

Other astronomers have already used machine learning to do some of their work. A set of scientists at ETH Zurich, for example, used artificial intelligence to combat contamination in radio data. They trained a neural network to recognize and then mask the human-made radio interference that comes from satellites, airports, WiFi routers, microwaves, and malfunctioning electric blankets. Which is good, because the number of electronic devices will only increase, while black holes aren't getting any brighter.

Neural networks need not limit themselves to new astronomical observations, though. Scientists have been dragging digital data from the sky for decades, and they can improve those old observations by plugging them into new pipelines. "With the same data people had before, we can learn more about the universe," says Schawinski.

Machine learning also makes data less tedious to process. Much of astronomers' work once involved the slog of searching for the same kinds of signals over and over (the blips of pulsars, the arms of galaxies, the spectra of star-forming regions) and figuring out how to automate that slogging. But when a machine learns, it figures out how to automate the slogging. The code itself decides that galaxy type 16 exists and has spiral arms and then says, "Found another one!" As Alex Hocking, who developed one such system, put it, "the important thing about our algorithm is that we have not told the machine what to look for in the images, but instead taught it how to see."

A prototype neural network that pulsar astronomers developed in 2012 found 85 percent of the pulsars in a test dataset; a 2016 system flags fast radio burst candidates as human- or space-made, and as coming from a known source or from a mystery object. On the optical side, a computer brainweb called RobERt (Robotic Exoplanet Recognition) processes the chemical fingerprints in planetary systems, doing in seconds what once took scientists days or weeks. Even creepier, when the astronomers asked RobERt to dream up what water would look like, he, uh, did it.

The point, here, is that computers are better and faster at some parts of astronomy than astronomers are. And they will continue to change science, freeing up scientists' time and wetware for more interesting problems than whether a signal is spurious or a galaxy is elliptical. "Artificial intelligence has broken into scientific research in a big way," says Schawinski. "This is the beginning of an explosion. This is what excites me the most about this moment. We are witnessing and, a little bit, shaping the way we're going to do scientific work in the future."

Original post:

Astronomers Deploy AI to Unravel the Mysteries of the Universe - WIRED

Posted in Ai | Comments Off on Astronomers Deploy AI to Unravel the Mysteries of the Universe – WIRED

Let’s get the network together: Improving lives through AI – Cloud Tech

Posted: at 3:14 pm

We have seen a machine master the complex game of Go, previously thought to be one of the most difficult challenges for artificial processing. We have witnessed vehicles operating autonomously, including a caravan of trucks crossing Europe with only a single operator to monitor systems. We have seen a proliferation of robotic counterparts and automated means for accomplishing a variety of tasks. All of this has given rise to a flurry of people claiming that the AI revolution is already upon us.

Understanding the growth in the functional and technological capability of AI is crucial for understanding the real world advances we have seen. Full AI, that is to say complete, autonomous sentience, involves the ability for a machine to mimic a human to the point that it would be indistinguishable from them (the so-called Turing test). This type of true AI remains a long way from reality. Some would say the major constraint to the future development of AI is no longer our ability to develop the necessary algorithms, but, rather, having the computing power to process the volume of data necessary to teach a machine to interpret complicated things like emotional responses. While it may be some time yet before we reach full AI, there will be many more practical applications of basic AI in the near term that hold the potential for significantly enhancing our lives.

With basic AI, the processing system, embedded within the appliance (local) or connected to a network (cloud), learns and interprets responses based on experience. That experience comes in the form of training on data sets that simulate the situations we want the system to learn from. This is the confluence of machine learning (ML) and AI. The capability to teach machines to interpret data is the key underpinning technology that will enable more complex forms of AI that can be autonomous in their responses to input. It is this type of AI that is getting the most attention. In the next ten years, the use of this kind of ML-based AI will likely fall into two categories: commercial applications of autonomous systems, and AI used for development and discovery.
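
Before turning to those two categories, here is a minimal, hedged illustration of what "learning from experience on a training data set" means in practice; the toy data and task are invented for this sketch, not drawn from the article:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy "experience": past situations (features) and the correct response for each.
situations = [[20, 0], [35, 1], [18, 0], [42, 1]]   # e.g. [room_temperature, window_open]
responses = ["heat_on", "heat_off", "heat_on", "heat_off"]

# Training: the model infers a rule that maps situations to responses.
model = DecisionTreeClassifier().fit(situations, responses)

# Interpretation: the trained system responds to a situation it has never seen.
print(model.predict([[30, 0]]))
```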

There is no doubt about the commercial prospects for autonomous robotic systems for applications like online sales conversion, customer satisfaction, and operational efficiency. We see this application already being advanced to the point that it will become commercially viable, which is the first step to it becoming practical and widespread. Simply put, if revenue can be made from it, it will become self-sustaining and thus continue to grow. The Amazon Echo, a personal assistant, has succeeded as a solidly commercial application of autonomous technology in the United States.

In addition to the automation of transportation and logistics, a wide variety of additional technologies that utilise autonomous processing techniques are being built. Currently, the artificial assistant or chatbot concept is one of the most popular. By creating the illusion of a fully sentient remote participant, it makes interaction with technology more approachable. There have been obvious failings of this technology (the unfiltered Microsoft chatbot, Tay, being a prime example), but the application of properly developed and managed artificial systems for interaction is an important step along the route to full AI. This is also a hugely important application of AI, as it will bring technology to those who previously could not engage fully with technology for any number of physical or mental reasons. By making technology simpler and more human to interact with, you remove some of the barriers to its use that cause difficulty for people with various impairments.

The use of AI for development and discovery is just now beginning to gain traction, but over the next decade, this will become an area of significant investment and development. There are so many repetitive tasks involved in any scientific or research project that using robotic intelligence engines to manage and perfect the more complex and repetitive tasks would greatly increase the speed at which new breakthroughs could be uncovered.

View original post here:

Let's get the network together: Improving lives through AI - Cloud Tech

Posted in Ai | Comments Off on Let’s get the network together: Improving lives through AI – Cloud Tech
