
Category Archives: Ai

Andrew Ng is raising a $150M AI Fund – TechCrunch

Posted: August 16, 2017 at 6:18 pm

We knew that Andrew Ng had more than just a series of deep learning courses up his sleeve when he announced the first phase of his deeplearning.ai last week. It's clear now that the next turn of Ng's three-part act is a $150 million venture capital fund, first noted by PEHub, targeting AI investments.

Ng, who founded Google's Brain Team and formerly served as chief scientist at Baidu, has long evangelized the benefits AI could bring to the world. During an earlier conversation, Ng told me that his personal goal is to help bring about an AI-powered society. It would follow that education via his deep learning classes is one step of that plan, and providing capital and other resources is another.

2017 has been a particularly active year for starting AI-focused venture capital funds. In the last few months we have seen Google roll out Gradient Ventures, Basis Set Ventures haul in $136 million, Element.AI raise $102 million, Microsoft Ventures start its own AI fund and Toyota corral $100 million for AI investment.

It's unclear at this point how Ng's AI Fund will differentiate itself from the pack. Many of these funds are putting time and resources into securing data sets, technical mentors and advanced simulation tools to support the unique needs of AI startups. Of course, Ng's name recognition and network should help ensure solid deal flow and enable Ng to poach and train talent for startups in need of scarce deep learning engineers.

I've sent a note to Andrew and we will update this post if and when we get more details.

See more here:

Andrew Ng is raising a $150M AI Fund - TechCrunch

Posted in Ai | Comments Off on Andrew Ng is raising a $150M AI Fund – TechCrunch

US Sec. Mattis pushes military AI, experts warn of hijacked ‘killer robots’ – TechRepublic

Posted: at 6:18 pm

The Pentagon is lagging behind the tech industry when it comes to tapping artificial intelligence (AI) for national security, according to US defense secretary James Mattis. On a recent tour that included visits to Amazon and Google, Mattis spoke about his desire to better harness the technology for military purposes, according to a report from Wired.

"It's got to be better integrated by the Department of Defense, because I see many of the greatest advances out here on the West Coast in private industry," Mattis told Wired.

The tech sector has tapped AI for everything from data management to hiring to photography in recent years. The Defense Innovation Unit Experimental (DIUx), an organization founded in 2015 to work within the DoD, aims to make it easier for small tech companies to work with the DoD and the military. The unit has invested $100 million into 45 contracts, Wired noted, including those with companies developing autonomous drones that could investigate buildings during military raids, and a headset and microphone that can be mounted on a tooth.

Mattis told Wired that he hopes to see DIUx continue to gain expertise from the tech industry. "There's no doubt in my mind DIUx will continue to exist; it will grow in its influence on the Department of Defense," he said.

SEE: Defending against cyberwar: How the cybersecurity elite are working to prevent a digital apocalypse

However, in June, China announced plans to become a world leader in AI by 2030, investing heavily in the technology for its government, military, and companies to stay at the cutting edge and surpass their rivals. The US does not have a similar public, overarching strategy, Wired said. Further, the White House's budget proposal includes cuts to the National Science Foundation, which has long supported AI research.

A July report from Harvard's Belfer Center for Science and International Affairs, conducted on behalf of the director of the US Intelligence Advanced Research Projects Activity (IARPA), determined that "advances in machine learning and Artificial Intelligence (AI) represent a turning point in the use of automation in warfare," but that "many of the most transformative applications of AI have not yet been addressed."

And most AI research advances are occurring in the private sector and academia, with private sector funding dwarfing that of the US government, the report found.

Current AI capabilities could have a significant impact on national security, the report noted: For example, existing machine learning technology could allow for more automation in labor-intensive activities such as satellite imagery analysis and cyber defense.

Future progress in AI has the potential to transform national security technology, "on a par with nuclear weapons, aircraft, computers, and biotech," the Harvard report stated.

"The DoD needs to pursue AI solutions to stay competitive with its Chinese and Russian counterparts," said Roman Yampolskiy, director of the Cyber Security Laboratory at the University of Louisville. "Unfortunately, for the humanity that means development of killer robots, unsupervised drones and other mechanisms of killing people in an automated process. As we know all computer systems have bugs or can be hacked. What happens when our killer robots get hijacked by the enemy is something I am very concerned about."

At the enterprise level, 62% of security experts said they believe that AI will be weaponized and used for cyberattacks within the next 12 months, according to a recent survey from Cylance.

SEE: Special report: How to implement AI and machine learning

Machine learning in particular has seen some very important advances in recent years, as evidenced by work from tech giants such as Google and Amazon, including voice recognition, search correlation, and personalisation, according to Engin Kirda, professor of computer science at Northeastern University. This technology is also increasingly used in computer security applications, in distinguishing normal behavior from attack-related behavior, and detecting breaches, Kirda said.

"Seeing these advances, I think the Department of Defense is realizing the potential of machine learning (and AI in general), and is considering to invest more resources into catching up with some of the advances in consumer software," Kirda said. "That is a very smart thing to do, because it is clear that AI has great application potential for some of the application scenarios that the Department of Defense is interested in (e.g., anti-terror scenarios)."
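
The security application Kirda describes, telling normal behavior apart from attack-related behavior, can be sketched in miniature: learn what "normal" looks like from baseline observations, then flag sharp deviations. Real breach-detection systems use far richer models and features; the requests-per-minute feature and the 3-sigma threshold here are illustrative assumptions, not any specific product's method.

```python
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn the normal range (mean and spread) from historical observations."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, n_sigma=3.0):
    """Flag values more than n_sigma standard deviations from normal."""
    mu, sigma = baseline
    return abs(value - mu) > n_sigma * sigma

# Baseline: typical requests-per-minute observed from one host.
normal_traffic = [52, 48, 50, 47, 53, 49, 51, 50, 46, 54]
baseline = fit_baseline(normal_traffic)

print(is_anomalous(51, baseline))   # ordinary load
print(is_anomalous(400, baseline))  # sudden burst resembling an attack
```

The same shape, fit on benign data, then score live traffic, underlies much of the anomaly-detection work Kirda alludes to, just with many more features and learned (rather than fixed) thresholds.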

From an IT standpoint, the DoD is the largest and most complex enterprise in the world, with over 10,000 networks and 4 million desktop computers, and millions of mobile computing devices, according to Bob Gourley, co-founder of the cyber security consultancy Cognitio and former CTO of the Defense Intelligence Agency. All of this IT exists to do one thing: Help execute the missions of national security.

"All DoD missions will always be human guided, but new AI approaches are already enhancing decision-making in military missions," Gourley said. "Machine learning algorithms are improving the ability of commanders to understand the environment and helping leaders assess best options. This will only improve."

However, leaders still lack the ability to choose the right AI for the right task, Gourley said. For example, thousands of models exist for search and discovery, and it is suboptimal to hard code a single algorithm into a solution. "Why not enable decision-makers to decide which code to use for the problem at hand?" Gourley said. "This will improve decision making and battlefield results."

Image: iStockphoto/ratpack223

Continue reading here:

US Sec. Mattis pushes military AI, experts warn of hijacked 'killer robots' - TechRepublic


Allen-backed AI2 incubator aims to connect AI startups with world-class talent – TechCrunch

Posted: August 15, 2017 at 12:17 pm

You can't swing a cat these days without hitting some incubator or accelerator, or a startup touting its artificial intelligence chops, but for some reason there are few if any incubators focused just on the AI sector. Seattle's Allen Institute for AI is doing just that, with the promise of connecting small classes of startups with the organization's formidable brains (and 250 grand).

AI2, as the Paul Allen-backed nonprofit is more commonly called, has already spun off two companies: XNOR.ai, which has made major advances in enabling AI tasks to run on edge devices, is operating independently and licensing its tech to eager customers. And Kitt.ai, a (profitable!) natural language processing platform, was bought by Baidu just last month.

"We're two for two, and not in a small way," said Jacob Colker, who has led several Seattle and Bay Area startups and incubators, and is currently the Entrepreneur-in-Residence charged with putting AI2's program on the map. Until now the incubation program has kept a low profile.

Startups will get the expected mentorship and guidance on how to, you know, actually run a company, but the draw, Colker emphasized, is the people. A good AI-based startup might get good advice and fancy office space from just about anyone, but only AI2, he pointed out, offers a major concentration of three core competencies: machine learning, natural language processing, and computer vision.

YOLO in action, from the paper presented at CVPR.

XNOR.ai, still partly run out of the AI2 office, is evidence of that. The company's latest computer vision system, YOLO, performs the rather incredible feat of both detecting and classifying hundreds of object types on the same network, locally and in real time. YOLO scored runner-up for Best Paper at this year's CVPR, and that's not the first time its authors have been honored. I'd spend more time on the system but it's not what this article is about.

There are dozens more PhDs and published researchers; AI2 has plucked (or politely borrowed) high-profile academics from all over, but especially the University of Washington, a longstanding presence at the frontiers of tech. AI2 CEO Oren Etzioni is himself a veteran researcher and is clearly proud of the team he's built.

"Obviously AI is hot right now," he told me, "but we're not jumping on the bandwagon here."

The incubator will have just a handful of companies at a time, he and Colker explained, and the potential investment of up to $250K is more than most such organizations are willing to part with. And as a nonprofit, there are fewer worries about equity terms and ROI.

But the applications of supervised learning are innumerable, and machine learning has become a standard developer tool so ambitious and unique applications of AI are encouraged.

"We're not looking for a doohickey," Etzioni said. "We want to make big bets and big companies."

AI2 is hoping to get just 2-5 companies for its first batch. Makes it a lot easier for me to keep eyes on them, that's for sure. Interested startups can apply at the AI2 site.

Here is the original post:

Allen-backed AI2 incubator aims to connect AI startups with world-class talent - TechCrunch


‘It knew what you were going to do next’: AI learns from pro gamers then crushes them – Washington Post

Posted: at 12:17 pm

For decades, the worlds smartest game-playing humans have been racking up losses to increasingly sophisticated forms of artificial intelligence.

The defeats began in the 1990s, when Deep Blue conquered chess master Garry Kasparov. In May, Ke Jie, until then the world's best player of the ancient Chinese board game Go, was defeated by a Google computer program.

Now the AI supergamers have moved into the world of e-sports. Last week, an artificial intelligence bot created by the Elon Musk-backed start-up OpenAI defeated some of the world's most talented players of Dota 2, a fast-paced, highly complex multiplayer online video game that draws fierce competition from all over the globe.


OpenAI unveiled its bot at an annual Dota 2 tournament where players walk away with millions in prize money. It was a pivotal moment in gaming and in AI research, largely because of how the bot developed its skills and how long it took to refine them enough to defeat the world's most talented pros, according to Greg Brockman, co-founder and chief technology officer of OpenAI.

The somewhat frightening reality: it took the bot only two weeks to go from laughable novice to world-class competitor, a period in which Brockman said the bot gathered lifetimes of experience by playing itself.

During that period, players said, the bot went from behaving like a bot to behaving in a way that felt more alive.


Danylo "Dendi" Ishutin, one of the game's top players, was defeated twice by his AI competition, which "felt a little like human, but a little like something else," he said, according to the Verge.

Brockman agreed with that perspective: "You kind of see that this thing is super fast and no human can execute its moves as well, but it was also strategic, and it kind of knows what you're going to do," he said. "When you go off screen, for example, it would predict what you were going to do next. That's not something we expected."

Brockman said games are a great testing ground for AI because they offer a defined set of rules with baked-in complexity that allow developers to measure a bot's changing skill level. He said one of the major revelations of the Dota 2 bot's success was that it was achieved via self-play, a form of training in which the bot continuously plays against a copy of itself, amassing more and more knowledge while improving incrementally.
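
The self-play idea can be illustrated with a toy: a policy improves by playing a slightly changed copy of itself, keeping whichever version wins. The game here is Nim (21 sticks, take 1-3, taking the last stick wins), a stand-in vastly simpler than Dota 2; the hill-climbing scheme and all the names are illustrative assumptions, not OpenAI's actual method.

```python
import random

STICKS = 21

def legal_moves(sticks):
    return range(1, min(3, sticks) + 1)

def random_policy():
    """A policy maps sticks-remaining to how many sticks to take."""
    return {s: random.choice(list(legal_moves(s))) for s in range(1, STICKS + 1)}

def play_game(policy_a, policy_b):
    """Returns 0 if policy_a (moving first) wins, else 1."""
    sticks, turn = STICKS, 0
    policies = (policy_a, policy_b)
    while True:
        sticks -= min(policies[turn][sticks], sticks)
        if sticks == 0:
            return turn  # took the last stick and won
        turn = 1 - turn

def mutate(policy):
    """The challenger: the champion with one decision changed at random."""
    challenger = dict(policy)
    s = random.randint(1, STICKS)
    challenger[s] = random.choice(list(legal_moves(s)))
    return challenger

def self_play_train(iterations=3000):
    champion = random_policy()
    for _ in range(iterations):
        challenger = mutate(champion)
        # Each version plays first once, so neither has a turn-order edge.
        wins = (play_game(challenger, champion) == 0) + (play_game(champion, challenger) == 1)
        if wins == 2:  # keep the challenger only if it is strictly better here
            champion = challenger
    return champion

random.seed(0)
trained = self_play_train()
```

The loop mirrors the mechanism in miniature: exploration (the mutation) finds a slightly better strategy against the current self, and the improvement is kept, over and over. OpenAI's system used reinforcement learning at enormous scale rather than this kind of simple hill climbing.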


For a game as complicated as Dota 2, which incorporates more than 100 playable roles and thousands of moves, self-play proved more organic and comprehensive than having a human preprogram the bot's behavior.

"If you're a novice playing against someone who is awesome -- playing tennis against Serena Williams, for example -- you're going to be crushed, and you won't realize there are slightly better techniques or ways of doing something," Brockman said. "The magic happens when your opponent is exactly balanced with you, so that if you explore and find a slightly better strategy it is then reflected in your performance in the game."

Tesla chief executive Elon Musk hailed the bot's achievement in historic fashion on Twitter before going on to once again highlight the risk posed by AI, which he said poses vastly more risk than North Korea.

Musk unleashed a debate about the danger of AI last month when he tweeted that Facebook chief executive Mark Zuckerberg's understanding of the threat posed by AI is limited.


Continue reading here:

'It knew what you were going to do next': AI learns from pro gamers then crushes them - Washington Post


China’s Plan for World Domination in AI Isn’t So Crazy After All – Bloomberg

Posted: at 12:17 pm

Xu Li's software scans more faces than maybe any other on Earth. He has the Chinese police to thank.

Xu runs SenseTime Group Ltd., which makes artificial intelligence software that recognizes objects and faces, and counts China's biggest smartphone brands as customers. In July, SenseTime raised $410 million, a sum it said was the largest single round for an AI company to date. That feat may soon be topped, probably by another startup in China.

The nation is betting heavily on AI. Money is pouring in from China's investors, big internet companies and its government, driven by a belief that the technology can remake entire sectors of the economy, as well as national security. A similar effort is underway in the U.S., but in this new global arms race China has three advantages: a vast pool of engineers to write the software, a massive base of 751 million internet users to test it on, and, most importantly, staunch government support that includes handing over gobs of citizens' data, something that makes Western officials squirm.

Data is key because that's how AI engineers train and test algorithms to adapt and learn new skills without human programmers intervening. SenseTime built its video analysis software using footage from the police force in Guangzhou, a southern city of 14 million. Most Chinese mega-cities have set up institutes for AI that include some data-sharing arrangements, according to Xu. "In China, the population is huge, so it's much easier to collect the data for whatever use-scenarios you need," he said. "When we talk about data resources, really the largest data source is the government."

This flood of data will only rise. China just enshrined the pursuit of AI into a kind of national technology constitution. A state plan, issued in July, calls for the nation to become the leader in the industry by 2030. Five years from then, the government claims, the AI industry will create 400 billion yuan ($59 billion) in economic activity. China's tech titans, particularly Tencent Holdings Ltd. and Baidu Inc., are getting on board. And the science is showing up in unexpected places: Shanghai's courts are testing an AI system that scours criminal cases to judge the validity of evidence used by all sides, ostensibly to prevent wrongful prosecutions.

"Data access has always been easier in China, but now people in government, organizations and companies have recognized the value of data," said Jiebo Luo, a computer science professor at the University of Rochester who has researched China. "As long as they can find someone they trust, they are willing to share it."

The AI-MATHS machine took the math portion of Chinas annual university entrance exam in Chengdu.

Photographer: AFP via Getty Images

Every major U.S. tech company is investing deeply as well. Machine learning -- a type of AI that lets driverless cars see, chatbots speak and machines parse scores of financial information -- demands that computers learn from raw data instead of hand-cranked programming. Getting access to that data is a permanent slog. China's command-and-control economy, and its thinner privacy concerns, mean the country can dispense video footage, medical records, banking information and other wells of data almost whenever it pleases.

Xu argued this is a global phenomenon. "There's a trend toward making data more public. For example, NHS and Google recently shared some medical image data," he said. But that example does more to illustrate China's edge.

DeepMind, the AI lab of Googles Alphabet Inc., has labored for nearly two years to access medical records from the U.K.s National Health Service for a diagnostics app. The agency began a trial with the company using 1.6 million patient records. Last month, the top U.K. privacy watchdog declared the trial violates British data-protection laws, throwing its future into question.

Go player Lee Se-Dol, right, in a match against Googles AlphaGo, during the DeepMind Challenge Match in March 2016.

Photographer: Google via Getty Images

Contrast that with how officials handled a project in Fuzhou. Government leaders from that southeastern Chinese city of more than seven million people held an event on June 26. Venture capital firm Sequoia Capital helped organize the event, which included representatives from Dell Inc., International Business Machines Corp. and Lenovo Group Ltd. A spokeswoman for Dell characterized the event as the nation's first "Healthcare and Medical Big Data Ecology Summit."

The summit involved a vast handover of data. At the press conference, city officials shared 80 exabytes worth of heart ultrasound videos, according to one company that participated. With the massive data set, some of the companies were tasked with building an AI tool that could identify heart disease, ideally at rates above medical experts. They were asked to turn it around by the fall.

"The Chinese AI market is moving fast because people are willing to take risks and adopt new technology more quickly in a fast-growing economy," said Chris Nicholson, co-founder of Skymind Inc., one of the companies involved in the event. "AI needs big data, and Chinese regulators are now on the side of making data accessible to accelerate AI."

Representatives from IBM and Lenovo declined to comment. Last month, Lenovo Chief Executive Officer Yang Yuanqing said he will invest $1 billion into AI research over the next three to four years.

Along with health, finance can be a lucrative business in China. In part, that's because the country has far less stringent privacy regulations and concerns than the West. For decades the government has kept a secret file, called a dangan, on nearly everyone in China. The records run the gamut from health reports and school marks to personality assessments and club records. This dossier can often decide a citizen's future -- whether they can score a promotion, or be allowed to reside in the city where they work.

U.S. companies that partner in China stress that AI efforts, like those in Fuzhou, are for non-military purposes. Luo, the computer science professor, said most national security research efforts are relegated to select university partners. However, one stated goal of the governments national plan is for a greater integration of civilian, academic and military development of AI.

The government also revealed in 2015 that it was building a nationwide database that would score citizens on their trustworthiness, which in turn would feed into their credit ratings. Last year, China Premier Li Keqiang said 80 percent of the nation's data was in public hands and would be opened to the public, with an unspecific pledge to protect privacy. The raging popularity of live video feeds -- where Chinese internet users spend hours watching daily footage caught by surveillance video -- shows the gulf in privacy concerns between the country and the West. Embraced in China, the security cameras also reel in mountains of valuable data.

Some machine-learning researchers dispel the idea that data can be a panacea. Advanced AI operations, like DeepMind, often rely on "simulated" data, co-founder Demis Hassabis explained during a trip to China in May. DeepMind has used Atari video games to train its systems. Engineers building self-driving car software frequently test it this way, simulating stretches of highway or crashes virtually.

"Sure, there might be data sets you could get access to in China that you couldn't in the U.S.," said Oren Etzioni, director of the Allen Institute for Artificial Intelligence. "But that does not put them in a terrific position vis-a-vis AI. It's still a question of the algorithm, the insights and the research."

Historically, the country has been a lightweight in those regards. It has suffered through a "brain drain," a flight of academics and specialists out of the country. "China currently has a talent shortage when it comes to top-tier AI experts," said Connie Chan, a partner at venture capital firm Andreessen Horowitz. "While there have been more deep learning papers published in China than the U.S. since 2016, those papers have not been as influential as those from the U.S. and U.K."


But China is gaining ground. The country is producing more top engineers, who craft AI algorithms for U.S. companies and, increasingly, Chinese ones. Chinese universities and private firms are actively wooing AI researchers from across the globe. Luo, the University of Rochester professor, said top researchers can get offers of $500,000 or more in annual compensation from U.S. tech companies, while Chinese companies will often double that.

Meanwhile, Chinas homegrown talent is starting to shine. A popular benchmark in AI research is the ImageNet competition, an annual challenge to devise a visual recognition system with the lowest error rate. Like last year, this years top winners were dominated by researchers from China, including a team from the Ministry of Public Securitys Third Research Institute.
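
The ImageNet benchmark mentioned above scores entrants by error rate: the fraction of test images where the correct label is missing from a model's top guesses (typically its top 5). A minimal sketch of that metric, with made-up predictions and labels purely for illustration:

```python
def top_k_error(predictions, true_labels, k=5):
    """Fraction of images whose true label is not among the k highest-scored guesses.

    predictions: one dict per image, mapping label -> model score.
    """
    misses = 0
    for scores, truth in zip(predictions, true_labels):
        top_k = sorted(scores, key=scores.get, reverse=True)[:k]
        if truth not in top_k:
            misses += 1
    return misses / len(true_labels)

# Two toy images: the second model output ranks the true label ("fox") third.
preds = [{"cat": 0.90, "dog": 0.05, "fox": 0.05},
         {"cat": 0.40, "dog": 0.35, "fox": 0.25}]
labels = ["cat", "fox"]

print(top_k_error(preds, labels, k=2))  # one miss out of two images -> 0.5
```

Lowest error rate wins the competition; the real challenge uses 1,000 classes and tens of thousands of test images rather than this toy setup.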

Relentless pollution in metropolises like Beijing and Shanghai has hurt Chinese companies' ability to nab top tech talent. In response, some are opening shop in Silicon Valley. Tencent recently set up an AI research lab in Seattle.

Photographer: David Paul Morris/Bloomberg

Baidu managed to pull a marquee name from that city. The firm recruited Qi Lu, one of Microsoft's top executives, to return to China to lead the search giant's push into AI. He touted the technology's potential for enhancing China's "national strength" and cited a figure that nearly half of the bountiful academic research on the subject globally has ethnically Chinese authors, using the Mandarin term "huaren" -- a term for ethnic Chinese that echoes government rhetoric.

"China has structural advantages, because China can acquire more and better data to power AI development," Lu told the cheering crowd of Chinese developers. "We must have the chance to lead the world!"

Go here to see the original:

China's Plan for World Domination in AI Isn't So Crazy After All - Bloomberg


Did Elon Musk’s AI champ destroy humans at video games? It’s complicated – The Verge

Posted: at 12:17 pm

You might not have noticed, but over the weekend a little coup took place. On Friday night, in front of a crowd of thousands, an AI bot beat a professional human player at Dota 2, one of the world's most popular video games. The human champ, the affable Danil "Dendi" Ishutin, threw in the towel after being killed three times, saying he couldn't beat the unstoppable bot. "It feels a little bit like human," said Dendi. "But at the same time, it's something else."

The bot's patron was none other than tech billionaire Elon Musk, who helped found and fund the institution that designed it, OpenAI. Musk wasn't present, but made his feelings known on Twitter, saying: "OpenAI first ever to defeat world's best players in competitive eSports. Vastly more complex than traditional board games like chess & Go." Even more exciting, said OpenAI, was that the AI had taught itself everything it knew. It learned purely by playing successive versions of itself, amassing lifetimes of in-game experience over the course of just two weeks.

But how big a deal is all this? Was Friday night's showdown really more impressive than Google's AI victories at the board game Go? The short answer is probably not, but it still represents a significant step forward, both for the world of e-sports and the world of artificial intelligence.

First, we need to look at Musk's claim that Dota is vastly more complex than traditional board games like chess and Go. This is completely true. Real-time battle and strategy games like Dota and Starcraft II pose major challenges that computers just can't handle yet. Not only do these games demand long-term strategic thinking, but unlike board games they keep vital information hidden from players. You can see everything that's happening on a chess board, but you can't in a video game. This means you have to predict and preempt what your opponent will do. It takes imagination and intuition.

In Dota, this complexity is increased as human players are asked to work together in teams of five, coordinating strategies that change on the fly based on which characters players choose. To make things even more complex, there are more than 100 different characters in-game, each with their own unique skill set; and characters can be equipped with a number of unique items, each of which can be game-winning if deployed at the right moment. All this means it's basically impossible to comprehensively program winning strategies into a Dota bot.

But the game that OpenAI's bot played was nowhere near as complex as all this. Instead of 5v5, it took on humans at 1v1; and instead of choosing a character, both human and computer were limited to the same hero, a fellow named the Shadow Fiend, who has a pretty straightforward set of attacks. My colleague Vlad Savov, a confirmed Dota addict who also wrote up his thoughts on Friday's match, said the 1v1 match represents only a fraction of the complexity of the full team contest. So: probably not as complex as Go.

The second major caveat is knowing what advantages OpenAI's agent had over its human opponents. One of the major points of discussion in the AI community was whether or not the bot had access to Dota's bot API, which would let it tap directly into streams of information from the game, like the distances between players. OpenAI's Greg Brockman confirmed to The Verge that the AI did indeed use the API, and that certain techniques were hardcoded in the agent, including the items it should use in the game. It was also taught certain strategies (like one called creep block) using a trial-and-error technique known as reinforcement learning. Basically, it did get a little coaching.

Andreas Theodorou, a games AI researcher at the University of Bath and an experienced Dota player, explains why this makes a difference. "One of the main things in Dota is that you need to calculate distances to know how far some [attacks] travel," he says. "The API allows bots to have specific indications of range. So you can say, 'If someone is in 500 meters range, do that,' but the human player has to calculate it themselves, learning through trial and error. It really gives them an advantage if they have access to information that a human player does not." This is particularly true in a 1v1 setting with a hero like Shadow Fiend, where players have to focus on timing their attacks correctly, rather than overall strategy.

Brockman's response is that this sort of skill is trivial for an AI to learn, and was never the focus of OpenAI's research. He says the institute's bot could have done without information from the API, but "you'd just be spending a lot more of your time learning to do vision, which we already know works, so what's the benefit?"
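
The range advantage Theodorou describes is easy to picture in code: a bot reading positions from an API can make an exact range check, while a human must eyeball the same distance. The coordinates, the 500-unit range, and the function names below are invented for illustration; Dota 2's actual bot API differs.

```python
import math

ATTACK_RANGE = 500.0  # illustrative; real ability ranges vary per hero

def distance(a, b):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def bot_should_attack(own_pos, enemy_pos):
    """With exact positions from an API, the range check is never wrong."""
    return distance(own_pos, enemy_pos) <= ATTACK_RANGE

# A human player must estimate these distances visually instead.
print(bot_should_attack((0, 0), (300, 400)))  # exactly 500 away -> True
print(bot_should_attack((0, 0), (301, 400)))  # just out of range -> False
```

The point of the example is the asymmetry: the bot's decision flips precisely at the range boundary, a precision a human player can only approximate through trial and error.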

So, knowing all this, should we dismiss OpenAIs victory? Not at all, says Brockman. He points out that, perhaps more important than the bots victory, was how it taught itself in the first place. While previous AI champions like AlphaGo have learned how to play games by soaking up past matches by human champions, OpenAIs bot taught itself (nearly) everything it knows.

You have this system that has just played against itself, and it has learned robust enough strategies to beat the top pros. Thats not something you should take for granted, says Brockman. And its a big question for any machine learning system: how does complexity get into the model? Where does it come from?

As OpenAIs Dota bot shows, he says, we dont have to teach computers complexity: they can learn it themselves. And although some of the bots behavior was preprogrammed, it did develop some strategies by itself. For example, it learned how to fake out its opponents by pretending to trigger an attack, only to cancel at the last second, leaving the human player to dodge an attack that never comes exactly like a feint in boxing.

Others, though, are still a little skeptical. AI researcher Denny Britz, who wrote a popular blog post that put the victory in context, tells The Verge that its difficult to judge the scale of this achievement without knowing more technical details. (Brockman says these are forthcoming, but couldnt give an exact time frame.) Its not clear what the technical contribution is at this point before the paper comes out, says Britz.

Theodorou points out that although OpenAI's bot beat Dendi onstage, once players got a good look at its tactics, they were able to outwit it. "If you look at the strategies they used, they played outside the box a bit and they won," he says. The players used offbeat strategies, the sort that wouldn't faze a human opponent but which the AI had never seen before. "It didn't look like the bot was flexible enough," says Theodorou. (Brockman counters that once the bot learned these strategies, it wouldn't fall for them twice.)

All the experts agree that this was a major achievement, but that the real challenge is yet to come. That will be a 5v5 match, where OpenAI's agents have to manage not just a duel in the middle of the map, but a sprawling, chaotic battlefield, with multiple heroes, dozens of support units, and unexpected twists. Brockman says that OpenAI is currently targeting next year's grand Dota tournament, in 12 months' time, to pull this off. Between now and then, there's much more training to be done.

See more here:

Did Elon Musk's AI champ destroy humans at video games? It's complicated - The Verge

Posted in Ai | Comments Off on Did Elon Musk’s AI champ destroy humans at video games? It’s complicated – The Verge

How AI could make living in cities much less miserable – MarketWatch

Posted: at 12:17 pm


Read more:

How AI could make living in cities much less miserable - MarketWatch

Posted in Ai | Comments Off on How AI could make living in cities much less miserable – MarketWatch

How AI Is Creating Building Blocks to Reshape Music and Art – New York Times

Posted: August 14, 2017 at 12:16 pm

As Mr. Eck says, these systems are at least approaching the point (still many, many years away) when a machine can instantly build a new Beatles song, or perhaps trillions of new Beatles songs, each sounding a lot like the music the Beatles themselves recorded, but also a little different. But that end game (as much a way of undermining art as creating it) is not what he is after. There are so many other paths to explore beyond mere mimicry. The ultimate idea is not to replace artists but to give them tools that allow them to create in entirely new ways.

In the 1990s, at that juke joint in New Mexico, Mr. Eck combined Johnny Rotten and Johnny Cash. Now, he is building software that does much the same thing. Using neural networks, he and his team are crossbreeding sounds from very different instruments (say, a bassoon and a clavichord), creating instruments capable of producing sounds no one has ever heard.

Much as a neural network can learn to identify a cat by analyzing hundreds of cat photos, it can learn the musical characteristics of a bassoon by analyzing hundreds of notes. It creates a mathematical representation, or vector, that identifies a bassoon. So, Mr. Eck and his team have fed notes from hundreds of instruments into a neural network, building a vector for each one. Now, simply by moving a button across a screen, they can combine these vectors to create new instruments. One may be 47 percent bassoon and 53 percent clavichord. Another might switch the percentages. And so on.
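
The blending described here amounts to interpolating between learned vectors. Below is a minimal sketch of that idea, with made-up three-number "embeddings" standing in for the high-dimensional vectors a real neural network would produce; the real NSynth system then decodes the blended vector back into audio.

```python
def blend_instruments(vec_a, vec_b, weight_a):
    """Linearly interpolate two learned instrument vectors.
    weight_a is the fraction contributed by the first instrument."""
    return [weight_a * a + (1 - weight_a) * b for a, b in zip(vec_a, vec_b)]

bassoon = [0.9, 0.1, 0.4]     # hypothetical embedding values
clavichord = [0.2, 0.8, 0.6]  # hypothetical embedding values

# "47 percent bassoon and 53 percent clavichord"
hybrid = blend_instruments(bassoon, clavichord, 0.47)
```

Moving the on-screen button simply sweeps `weight_a` between 0 and 1, producing a continuum of hybrid instruments.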

For centuries, orchestral conductors have layered sounds from various instruments atop one another. But this is different. Rather than layering sounds, Mr. Eck and his team are combining them to form something that didn't exist before, creating new ways that artists can work. "We're making the next film camera," Mr. Eck said. "We're making the next electric guitar."

Called NSynth, this particular project is only just getting off the ground. But across the worlds of both art and technology, many are already developing an appetite for building new art through neural networks and other A.I. techniques. "This work has exploded over the last few years," said Adam Ferris, a photographer and artist in Los Angeles. "This is a totally new aesthetic."

In 2015, a separate team of researchers inside Google created DeepDream, a tool that uses neural networks to generate haunting, hallucinogenic imagescapes from existing photography, and this has spawned new art inside Google and out. If the tool analyzes a photo of a dog and finds a bit of fur that looks vaguely like an eyeball, it will enhance that bit of fur and then repeat the process. The result is a dog covered in swirling eyeballs.
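
That amplify-and-repeat loop can be sketched in a few lines. In the toy version below, a stand-in gradient function replaces backpropagation through a real vision network; the point is only the feedback loop, in which whatever the "network" already responds to gets progressively exaggerated.

```python
def deep_dream_step(image, activation_grad, step_size=0.1):
    """One DeepDream-style update: nudge each pixel in the direction
    that increases a chosen layer's activation, then repeat."""
    return [p + step_size * g for p, g in zip(image, activation_grad(image))]

# Toy "network" whose activation is the sum of squared pixel values;
# its gradient is 2 * pixel, so already-bright regions get amplified,
# much as DeepDream turns a faint eyeball-like patch of fur into eyeballs.
grad = lambda img: [2 * p for p in img]

img = [0.1, 0.5, 0.05]  # a hypothetical 3-pixel "image"
for _ in range(10):
    img = deep_dream_step(img, grad)
```

After a few iterations the pixel that started brightest dominates the image, which is exactly the runaway-enhancement behavior the article describes.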

At the same time, a number of artists like the well-known multimedia performance artist Trevor Paglen or the lesser-known Adam Ferris are exploring neural networks in other ways. In January, Mr. Paglen gave a performance in an old maritime warehouse in San Francisco that explored the ethics of computer vision through neural networks that can track the way we look and move. While members of the avant-garde Kronos Quartet played onstage, for example, neural networks analyzed their expressions in real time, guessing at their emotions.

The tools are new, but the attitude is not. Allison Parrish, a New York University professor who builds software that generates poetry, points out that artists have been using computers to generate art since the 1950s. "Much as Jackson Pollock figured out a new way to paint by just opening the paint can and splashing it on the canvas beneath him," she said, these new computational techniques create a broader palette for artists.

A year ago, David Ha was a trader with Goldman Sachs in Tokyo. During his lunch breaks he started toying with neural networks and posting the results to a blog under a pseudonym. Among other things, he built a neural network that learned to write its own Kanji, the logographic Chinese characters that are not so much written as drawn.

Soon, Mr. Eck and other Googlers spotted the blog, and now Mr. Ha is a researcher with Google Magenta. Through a project called SketchRNN, he is building neural networks that can draw. By analyzing thousands of digital sketches made by ordinary people, these neural networks can learn to make images of things like pigs, trucks, boats or yoga poses. They don't copy what people have drawn. They learn to draw on their own, to mathematically identify what a pig drawing looks like.

Then, you ask them to, say, draw a pig with a cat's head, or to visually subtract a foot from a horse, or sketch a truck that looks like a dog, or build a boat from a few random squiggly lines. Next to NSynth or DeepDream, these may seem less like tools that artists will use to build new works. But if you play with them, you realize that they are themselves art, living works built by Mr. Ha. A.I. isn't just creating new kinds of art; it's creating new kinds of artists.

Read more:

How AI Is Creating Building Blocks to Reshape Music and Art - New York Times

Posted in Ai | Comments Off on How AI Is Creating Building Blocks to Reshape Music and Art – New York Times

Teaching AI Systems to Behave Themselves – New York Times

Posted: at 12:16 pm

In some cases, researchers are working to ensure that systems don't make mistakes on their own, as the Coast Runners boat did. They're also working to ensure that hackers and other bad actors can't exploit hidden holes in these systems. Researchers like Google's Ian Goodfellow, for example, are exploring ways that hackers could fool A.I. systems into seeing things that aren't there.

Modern computer vision is based on what are called deep neural networks, which are pattern-recognition systems that can learn tasks by analyzing vast amounts of data. By analyzing thousands of dog photos, a neural network can learn to recognize a dog. This is how Facebook identifies faces in snapshots, and it's how Google instantly searches for images inside its Photos app.

But Mr. Goodfellow and others have shown that hackers can alter images so that a neural network will believe they include things that aren't really there. Just by changing a few pixels in the photo of an elephant, for example, they could fool the neural network into thinking it depicts a car.
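
One well-known way to compute such pixel changes is the fast-gradient-sign method that Mr. Goodfellow co-authored: nudge every pixel by a tiny fixed amount in whichever direction increases the classifier's error. Below is a sketch of just the perturbation step; the gradient signs here are hypothetical stand-ins for a real backprop computation through the target network.

```python
def fgsm_perturb(pixels, gradient_sign, epsilon=0.01):
    """Shift each pixel by +/- epsilon along the sign of the loss
    gradient, clamping to the valid [0, 1] intensity range. The
    change is imperceptible to humans but can flip a classifier's
    prediction."""
    return [max(0.0, min(1.0, p + epsilon * s))
            for p, s in zip(pixels, gradient_sign)]

original = [0.50, 0.20, 0.80]  # hypothetical pixel intensities
signs = [1, -1, 1]             # hypothetical loss-gradient signs
adversarial = fgsm_perturb(original, signs)
```

No pixel moves by more than `epsilon`, which is why the altered elephant photo still looks like an elephant to a person while the network sees a car.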

That becomes problematic when neural networks are used in security cameras. Simply by making a few marks on your face, the researchers said, you could fool a camera into believing you're someone else.

"If you train an object-recognition system on a million images labeled by humans, you can still create new images where a human and the machine disagree 100 percent of the time," Mr. Goodfellow said. "We need to understand that phenomenon."

Another big worry is that A.I. systems will learn to prevent humans from turning them off. If the machine is designed to chase a reward, the thinking goes, it may find that it can chase that reward only if it stays on. This oft-described threat is much further off, but researchers are already working to address it.

Mr. Hadfield-Menell and others at U.C. Berkeley recently published a paper that takes a mathematical approach to the problem. A machine will seek to preserve its off switch, they showed, if it is specifically designed to be uncertain about its reward function. This gives it an incentive to accept or even seek out human oversight.
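
The intuition behind that result can be shown with a toy expected-value calculation. The numbers and framing below are illustrative, not taken from the paper: a robot that is uncertain whether its planned action is good never loses, and usually gains, by letting a better-informed human retain the off switch.

```python
def robot_allows_off_switch(p_action_good, value_if_good=1.0, value_if_bad=-1.0):
    """Toy off-switch argument. Acting unilaterally risks the bad
    outcome; deferring lets the human (assumed to know the true value)
    switch the robot off in the bad case, capping the downside at 0.
    Returns True when deferring is at least as good as acting alone."""
    act_alone = (p_action_good * value_if_good
                 + (1 - p_action_good) * value_if_bad)
    defer = p_action_good * value_if_good  # human prevents the bad case
    return defer >= act_alone
```

Note that when `p_action_good` is 1.0 the two options tie, which mirrors the paper's point: it is precisely the robot's uncertainty about its reward that gives it a strict incentive to accept oversight.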

Much of this work is still theoretical. But given the rapid progress of A.I. techniques and their growing importance across so many industries, researchers believe that starting early is the best policy.

"There's a lot of uncertainty around exactly how rapid progress in A.I. is going to be," said Shane Legg, who oversees the A.I. safety work at DeepMind. "The responsible approach is to try to understand different ways in which these technologies can be misused, different ways they can fail and different ways of dealing with these issues."

An earlier version of a picture caption with this article identified the three people in the picture in the wrong order. They are Dario Amodei, standing, and from left, Paul Christiano and Geoffrey Irving.

A version of this article appears in print on August 14, 2017, on Page B1 of the New York edition with the headline: When Robots Have Minds Of Their Own.

Here is the original post:

Teaching AI Systems to Behave Themselves - New York Times

Posted in Ai | Comments Off on Teaching AI Systems to Behave Themselves – New York Times

MIT’s AI streaming software aims to stop those video stutters – TechCrunch

Posted: at 12:16 pm

MIT's Computer Science and Artificial Intelligence Lab (CSAIL) wants to ensure your streaming video experience stays smooth. A research team led by MIT professor Mohammad Alizadeh has developed an artificial intelligence (dubbed Pensieve) that can select the best algorithms for ensuring video streams both without interruption, and at the best possible playback quality.

The method improves upon existing tech, including the adaptive bitrate (ABR) method used by YouTube that throttles back quality to keep videos playing, albeit with pixelation and other artifacts. The AI can select different algorithms depending on what kind of network conditions a device is experiencing, cutting down on the downsides associated with any one method.
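
For contrast, a conventional rule-based ABR heuristic of the kind Pensieve aims to improve on might look like the sketch below. The bitrate ladder and thresholds are made up for illustration; real players tune these rules carefully, and Pensieve's point is to learn such decisions instead of hand-coding them.

```python
def pick_bitrate(throughput_kbps, buffer_s, ladder=(500, 1500, 3000, 6000)):
    """Choose the highest rung of the bitrate ladder that the measured
    network throughput can sustain, but drop to the lowest rung when
    the playback buffer is nearly empty, to avoid a stall."""
    if buffer_s < 2.0:            # emergency: avoid rebuffering at all costs
        return ladder[0]
    safe = throughput_kbps * 0.8  # leave 20% headroom for throughput variance
    choice = ladder[0]
    for rung in ladder:
        if rung <= safe:
            choice = rung
    return choice
```

A fixed rule like this handles its designed-for conditions well but degrades on networks it wasn't tuned for, which is exactly the gap a learned selector can close.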

During experimentation, the CSAIL research team behind this method found that video streamed with 10 to 30 percent less rebuffering and 10 to 25 percent improved quality. Those gains would add up to a significantly better experience for most video viewers, especially over a long viewing session.

The difference between CSAIL's Pensieve approach and traditional methods lies mainly in its use of a neural network rather than a fixed, hand-coded algorithm. The neural net learns how to optimize through a reward system that incentivizes smoother video playback, instead of following predefined rules about which algorithmic techniques to use when buffering video.
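
Reward functions in this line of work typically score each video chunk on bitrate, stalls, and smoothness of quality switches. Here is a sketch of that general shape; the coefficients are illustrative assumptions, not Pensieve's actual values.

```python
def playback_reward(bitrate_mbps, rebuffer_s, prev_bitrate_mbps,
                    rebuf_penalty=4.3, smooth_penalty=1.0):
    """Quality-of-experience reward for one video chunk: reward higher
    bitrate, penalize seconds spent rebuffering, and penalize abrupt
    quality switches between consecutive chunks."""
    return (bitrate_mbps
            - rebuf_penalty * rebuffer_s
            - smooth_penalty * abs(bitrate_mbps - prev_bitrate_mbps))
```

Training then simply pushes the network toward decisions that accumulate more of this reward, which is how "smoother playback" becomes something the system optimizes rather than something a programmer encodes rule by rule.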

Researchers say the system is also potentially tweakable on the user end, depending on what the user wants to prioritize in playback: you could, for instance, set Pensieve to optimize for playback quality, for playback speed, or for conservation of data.

The team is presenting Pensieve and open-sourcing its code at SIGCOMM next week in LA, and they expect that, when trained on a larger data set, it could deliver even greater improvements in performance and quality. They're also planning to test it on VR video, since the high bitrates required for a quality experience there are well suited to the kinds of improvements Pensieve can offer.

Visit link:

MIT's AI streaming software aims to stop those video stutters - TechCrunch

Posted in Ai | Comments Off on MIT’s AI streaming software aims to stop those video stutters – TechCrunch
