Category Archives: Ai

Here’s How To Stay Ahead Of Machines And AI – Forbes

Posted: February 23, 2017 at 1:15 pm


The Great American Jobs Apocalypse continues to tear our social fabric, and no one really knows what to do about it. If you're not unemployed or ...

Posted in Ai | Comments Off on Here’s How To Stay Ahead Of Machines And AI – Forbes – Forbes

AI learns to write its own code by stealing from other programs – New Scientist

Posted: at 1:15 pm

By Matt Reynolds

"Out of the way, human, I've got this covered." A machine learning system has gained the ability to write its own code.

Created by researchers at Microsoft and the University of Cambridge, the system, called DeepCoder, solved basic challenges of the kind set by programming competitions. This kind of approach could make it much easier for people to build simple programs without knowing how to write code.

"All of a sudden people could be so much more productive," says Armando Solar-Lezama at the Massachusetts Institute of Technology, who was not involved in the work. "They could build systems that it [would be] impossible to build before."

Ultimately, the approach could allow non-coders to simply describe an idea for a program and let the system build it, says Marc Brockschmidt, one of DeepCoder's creators at Microsoft Research in Cambridge, UK.

DeepCoder uses a technique called program synthesis: creating new programs by piecing together lines of code taken from existing software, just as a programmer might. Given a list of inputs and outputs for each code fragment, DeepCoder learned which pieces of code were needed to achieve the desired result overall.

One advantage of letting an AI loose in this way is that it can search more thoroughly and widely than a human coder, so it could piece together source code in a way humans may not have thought of. What's more, DeepCoder uses machine learning to scour databases of source code and sort the fragments according to its view of their probable usefulness.

All this makes the system much faster than its predecessors. DeepCoder created working programs in fractions of a second, whereas older systems take minutes to trial many different combinations of lines of code before piecing together something that can do the job. And because DeepCoder learns which combinations of source code work and which ones don't as it goes along, it improves every time it tries a new problem.
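
The search-plus-ranking idea can be sketched in a few lines. The DSL, the fragment names, and the hard-coded prior below are all invented for illustration; in DeepCoder the ranking comes from a neural network trained to predict which fragments are likely useful given the input-output examples.

```python
# Hypothetical sketch of DeepCoder-style program synthesis (not the
# actual Microsoft/Cambridge system): enumerate short compositions of
# DSL fragments, trying the fragments the model ranks highest first.
from itertools import product

# Tiny DSL of list-transforming fragments (invented for this example).
DSL = {
    "double":   lambda xs: [x * 2 for x in xs],
    "sort":     sorted,
    "reverse":  lambda xs: list(reversed(xs)),
    "drop_neg": lambda xs: [x for x in xs if x >= 0],
}

# Stand-in for the learned model: how likely each fragment is to be
# useful. A real system predicts these from the input/output examples.
PRIOR = {"double": 0.6, "sort": 0.5, "drop_neg": 0.4, "reverse": 0.1}

def synthesize(examples, max_len=3):
    """Return the first composition of DSL fragments consistent with
    all (input, output) examples, searching high-prior fragments first."""
    ranked = sorted(DSL, key=lambda name: -PRIOR[name])
    for length in range(1, max_len + 1):
        for names in product(ranked, repeat=length):
            def run(xs, names=names):
                for name in names:
                    xs = DSL[name](xs)
                return xs
            if all(run(list(i)) == o for i, o in examples):
                return names
    return None

# Examples implicitly asking: drop the negatives, double, ascending order.
examples = [([3, -1, 2], [4, 6]), ([0, -5, 1], [0, 2])]
print(synthesize(examples))
```

Because high-prior fragments are tried first, a good ranking prunes the search the way DeepCoder's learned model does; with a bad ranking the enumeration still succeeds, just more slowly.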

The technology could have many applications. In 2015, researchers at MIT created a program that automatically fixed software bugs by replacing faulty lines of code with working lines from other programs. Brockschmidt says that future versions could make it very easy to build routine programs that scrape information from websites, or automatically categorise Facebook photos, for example, without human coders having to lift a finger.

"The potential for automation that this kind of technology offers could really signify an enormous [reduction] in the amount of effort it takes to develop code," says Solar-Lezama.

But he doesn't think these systems will put programmers out of a job. With program synthesis automating some of the most tedious parts of programming, he says, coders will be able to devote their time to more sophisticated work.

At the moment, DeepCoder is only capable of solving programming challenges that involve around five lines of code. But in the right coding language, a few lines are all that's needed for fairly complicated programs.

"Generating a really big piece of code in one shot is hard, and potentially unrealistic," says Solar-Lezama. "But really big pieces of code are built by putting together lots of little pieces of code."

This article appeared in print under the headline "Computers are learning to code for themselves".

Exyn unveils AI to help drones fly autonomously, even indoors or off the grid – TechCrunch

Posted: February 22, 2017 at 4:14 am

A startup called Exyn Technologies Inc. today revealed AI software that enables drones to fly autonomously, even in dark, obstacle-filled environments or beyond the reach of GPS. A spin-out of the University of Pennsylvania's GRASP Labs, Exyn uses sensor fusion to give drones situational awareness much like a human's.

In a demo video shared by the company with TechCrunch, a drone using Exyn's AI can be seen waking up and taking in its surroundings. It then navigates from a launch point in a populated office to the nearest identified exit without human intervention. The route is not pre-programmed, and pilots did not manipulate controls to influence the path the drone takes. They simply tell it to find and go to the nearest door.

According to Exyn founder Vijay Kumar, a veteran roboticist and dean of Penn's School of Engineering, artificial intelligence that lets drones understand their environment is an order of magnitude more complex than for self-driving cars or ground-based robots.

That's because the world that drones inhabit is inherently 3D. They have to do more than obey traffic laws and avoid pedestrians and trees. They must maneuver over and around obstacles in un-mapped skies where internet connectivity is not consistently available. Additionally, Kumar said, "With drones you actually have to lift and fly with your payload and sensors." Cars roll along on wheels and can carry large batteries, but drones must preserve all the power they can for flight.

The AI that Exyn is adapting from Kumar's original research will work with any type of unmanned aerial vehicle, from popular DJI models to more niche research and industrial UAVs. Exyn Chief Engineer Jason Derenick described how the technology basically works: "We fuse multiple sensors from different parts of the spectrum to let a drone build a 3D map in real time. We only give the drone a relative goal and start location. But it takes off, updates its map and then goes through a process of planning and re-planning until it achieves that goal."
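
The plan/re-plan loop Derenick describes can be caricatured on a 2D grid. Everything below is an invented stand-in: the real system plans in 3D over a map fused from multiple sensors, whereas this sketch uses a 5x5 grid, a BFS planner, and a "sensor" that only detects the next cell.

```python
# Toy sketch of a sense/plan/re-plan loop: the vehicle plans with its
# current (incomplete) map, moves one step, and re-plans whenever its
# sensors reveal a blocked cell. Grid and planner are invented here.
from collections import deque

def shortest_path(known_blocked, start, goal, size=5):
    """BFS over cells not yet known to be blocked; returns a cell list."""
    q, prev = deque([start]), {start: None}
    while q:
        cell = q.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in prev and nxt not in known_blocked \
                    and all(0 <= c < size for c in nxt):
                prev[nxt] = cell
                q.append(nxt)
    return None

def fly(true_obstacles, start, goal):
    known, pos, route = set(), start, [start]
    while pos != goal:
        path = shortest_path(known, pos, goal)
        if path is None:
            raise RuntimeError("goal unreachable")
        nxt = path[1]
        if nxt in true_obstacles:   # sensors reveal a blocked cell...
            known.add(nxt)          # ...update the map; loop re-plans
            continue
        pos = nxt
        route.append(pos)
    return route

# A wall the drone only discovers on approach (gap at the top).
obstacles = {(2, 0), (2, 1), (2, 2), (2, 3)}
print(fly(obstacles, start=(0, 0), goal=(4, 0)))
```

The printed route detours around the wall even though the wall was absent from the initial plan, which is the essence of planning and re-planning against a map built in flight.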

Keeping the technology self-contained on the drone means Exyn-powered UAVs don't rely on outside infrastructure or human pilots to complete a mission. Going forward, the company can integrate data from cloud-based sources.

Exyn, which is backed by IP Group, faces competition from other startups like Iris Automation or Area 17 in Silicon Valley, as well as companies building drones with proprietary autonomous-flight software, like Skydio in Menlo Park, or Israel-based Airobotics.

The startup's CEO and chairman, Nader Elm, is hoping Exyn's AI will yield new uses for drones, and put drones in places where it's not safe or easy for humans to work.

For example, the CEO said, the company's technology could allow drones to count inventory in warehouses filled with towering pallets and robots moving across the ground, or to work in dark mine shafts and unfinished buildings that require frequent inspections for safety and to measure worker productivity.

Looking forward, Exyn's CEO said, "We'll continue advancing the technology to first of all make it more robust and hardened for commercial use while adding features and functionality. Ultimately we want to move from one drone to multiple, collaborating drones that can work on a common mission. We have focused on obstacle avoidance, but we're also thinking about how drones can interact with various things in their environment."

What our original drama The Intelligence Explosion tells us about AI – The Guardian

Posted: at 4:14 am

The Intelligence Explosion, an original drama published by the Guardian, is obviously a work of fiction. But the fears behind it are very real, and have led some of the biggest brains in artificial intelligence (AI) to reconsider how they work.

The film dramatises a near-future conversation between the developers of an artificial general intelligence named Günther and an ethical philosopher. Günther himself (itself?) sits in, making fairly cringeworthy jokes and generally missing the point. Until, suddenly, he doesn't.

It shows an event which has come to be known in the technology world as the singularity: the moment when an artificial intelligence that has the ability to improve itself starts doing so at exponential speeds. The crucial moment is the period when AI becomes better at developing AI than people are. Up until that point, AI capability can only improve as quickly as AI research progresses, but once AI is involved in its own creation, a feedback loop begins. AI makes better AI, which is even better at making even better AI.
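
The feedback loop is easy to see numerically. In this toy model the rates are arbitrary, chosen only to show the shape of the curve: capability grows linearly while human researchers set the pace, then compounds once the AI's own capability does.

```python
# Toy illustration (not a model of real AI progress): before the
# crossover, capability grows at a fixed human research rate; after
# the AI can improve itself, each step's gain scales with current
# capability, producing the runaway feedback loop described above.
def capability_over_time(steps, human_rate=1.0, crossover=10.0):
    c = 1.0
    history = [c]
    for _ in range(steps):
        if c < crossover:
            c += human_rate   # progress limited by human researchers
        else:
            c *= 1.5          # AI improving AI: gain scales with c
        history.append(c)
    return history

h = capability_over_time(20)
print(h[:5])    # linear phase: equal increments
print(h[-3:])   # exponential phase: growing increments
```

Before the crossover each step adds the same amount; after it, each step multiplies, which is why the period when AI becomes better at developing AI than people are is the crucial moment.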

It may not end with a robot bursting into a cloud of stars and deciding to ascend to a higher plane of existence, but it's not far off. A super-intelligent AI could be so much more intelligent than a human being that we can't even comprehend its actual abilities; it would be as futile as explaining to an ant how wireless data transfer works.

So one big question for AI researchers is whether this event will be good or bad for humanity. And that's where the ethical philosophy comes into it.

Dr Nick Bostrom, a philosopher at the University of Oxford, presented one of the most popular explanations of the problem in his book Superintelligence. Suppose you create an artificial intelligence designed to do one thing (in his example, running a factory for making paperclips). In a bid for efficiency, however, you decide to programme the artificial intelligence with another set of instructions as well, commanding it to improve its own processes to become better at making paperclips.

For a while, everything goes well: the AI chugs along making paperclips, occasionally suggesting that a piece of machinery be moved, or designing a new alloy for the smelter to produce. Sometimes it even improves its own programming, with the rationale that the smarter it is, the better it can think of new ways to make paperclips.

But one day, the exponential increase happens: the paperclip factory starts getting very smart, very quickly. One day it's a basic AI, the next it's as intelligent as a person. The day after that, it's as smart as all of humanity combined, and the day after that, it's smarter than anything we can imagine.

Unfortunately, despite all of this, its main directive is unchanged: it just wants to make paperclips. As many as possible, as efficiently as possible. It would start strip-mining the Earth for the raw materials, except it's already realised that doing so would probably spark resistance from the pesky humans who live on the planet. So, pre-emptively, it kills them all, leaving nothing standing between it and a lot of paperclips.

That's the worst possible outcome. But obviously having an extremely smart AI on the side of humanity would be a pretty good thing. So one way to square the circle is by teaching ethics to artificial intelligences, before it's too late.

In that scenario, the paperclip machine would be told to "make more paperclips, but only if it's ethical to do so". That way, it probably won't murder humanity, which most people consider a positive outcome.

The downside is that to code that into an AI, you sort of need to solve the entirety of ethics and write it in computer-readable format. Which is, to say the least, tricky.

Ethical philosophers can't even agree on what the best ethical system is for people. Is it ethical to kill one person to save five? Or to lie when a madman with an axe asks where your neighbour is? Some of the best minds in the world of moral philosophy disagree over those questions, which doesn't bode well for the prospect of coding morality into an AI.

Problems like this are why the biggest AI companies in the world are paying keen attention to questions of ethics. DeepMind, the Google subsidiary which produced the first ever AI able to beat a human pro at the ancient boardgame Go, has a shadowy ethics and safety board, for instance. The company hasn't said who's on it, or even whether it's met, but early investors say that its creation was a key part of why Google's bid to acquire DeepMind was successful. Other companies, including IBM, Amazon and Apple, have also joined forces, forming the Partnership on AI, to lead from the top.

For now, though, the singularity still exists only in the world of science fiction. All we can say for certain is that when it does come, it probably won't have Günther's friendly attitude front and centre.

How AI fights the war against fake news – Fox News

Posted: at 4:14 am

A three-headed alien is wandering around Central Park right now. If you believe that, you might be susceptible to a fake news story. Artificial Intelligence technology, however, could be a vital weapon in the war on fake news, according to cybersecurity companies.

Popular during the last election but still prevalent on Facebook and other social media channels, fake news stories make wild claims, tend to exist only on a handful of minor news sites, and can be difficult to verify.

Yet, artificial intelligence could help us all weed out the good from the bad.

Experts tell Fox News that machine learning, natural language processing, semantic identification, and other techniques could at least provide a clue about authenticity.

Catherine Lu, a product manager at fraud detection company DataVisor, says AI could detect the semantic meaning behind a web article. Here's one example. With the three-headed alien, a natural language processing (or NLP) engine could look at the headline, the subject of the story, the geo-location, and the main body text. An AI could determine if other sites are reporting the same facts. And the AI could weigh the facts against established media sources.

"The New York Times is probably a more reputable source than an unknown, poorly designed website," Lu told Fox News. "A machine learning model can be trained to predict the reputation of a web site, taking into account features such as the Alexa web rank and the domain name (for example, a .com domain is less suspicious than a .web domain)."
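
A minimal sketch of the kind of model Lu describes. The features, weights, and sites below are all invented stand-ins: a real system would learn its weights from labeled examples rather than hard-code them.

```python
# Hypothetical site-reputation scorer: a logistic model over simple
# features (traffic rank, TLD, HTTPS). Weights are invented; a real
# classifier would be trained on labeled reputable/disreputable sites.
import math

def features(site):
    return [
        1.0 / site["traffic_rank"],                      # popular sites rank low
        1.0 if site["tld"] in (".com", ".org") else 0.0, # common TLDs
        1.0 if site["https"] else 0.0,                   # serves over TLS
    ]

WEIGHTS = [5.0, 1.5, 1.0]   # invented, standing in for learned weights
BIAS = -2.0

def reputation(site):
    """Logistic score in (0, 1): higher means more reputable."""
    z = BIAS + sum(w * f for w, f in zip(WEIGHTS, features(site)))
    return 1.0 / (1.0 + math.exp(-z))

established = {"traffic_rank": 30, "tld": ".com", "https": True}
obscure = {"traffic_rank": 800000, "tld": ".info", "https": False}
print(reputation(established), reputation(obscure))
```

The point is only the shape of the approach: reduce a site to a feature vector, score it, and let downstream logic weigh a story's claims by the scores of the sites carrying it.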

Ertunga Arsal, the CEO of German cybersecurity company ESNC, tells Fox News that an AI has an advantage in detecting fake news because of the extremely large data set -- billions of websites all over the world. Also, the purveyors of fake news are fairly predictable.

One example he mentioned is that many of the fake news sites register for a Google AdSense account (using terms like "election"), then start posting the fake news, since one of the primary goals is to get people to click so the site can collect the ad revenue.

An AI could use keyword analytics to discover and flag sensational words often used in fake news headlines, he said, noting that there will only be an increase in the number of fake news stories, similar to the rise of spam, and the time is now to do something about it.
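
The keyword analytics Arsal mentions can be sketched as a simple lexicon match. The word list and threshold here are invented for the example; a real system would learn its lexicon and weights from labeled headlines.

```python
# Minimal sketch of sensational-keyword flagging. SENSATIONAL is an
# invented word list standing in for one mined from labeled data.
SENSATIONAL = {"shocking", "secret", "exposed", "miracle", "you won't believe"}

def flag_headline(headline, threshold=1):
    """Flag a headline whose sensational-term count meets the threshold."""
    text = headline.lower()
    hits = sum(1 for term in SENSATIONAL if term in text)
    return hits >= threshold

print(flag_headline("Shocking secret the government EXPOSED"))
print(flag_headline("City council approves new budget"))
```

A production system would treat such flags as one weak signal among many, precisely because, as Shomo notes below, producers can adapt to any fixed rule.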

Dr. Pradeep Atrey from the University at Albany has already conducted research on semantic processing to detect the authenticity of news sites. He tells Fox News a similar approach could be used to detect fake news. For example, an algorithm could rate sites based on a reward and punishment system. Less popular sites would be rated as less trustworthy.

"There are methods that can be used to at least minimize, if not fully eradicate, fake news instances," he says. "It depends on how and to what extent we use such methods in practice."

Unfortunately, according to Dr. Atrey, many people don't take the extra step to verify the authenticity of news sites to determine trustworthiness. An AI could identify a site as fake and pop up a warning to proceed with caution, similar to how malware detection works.

Not everyone is on board with using an AI to detect fake news, however.

Paul Shomo, a Senior Technical Manager at security firm Guidance Software, tells Fox News that fake news producers could figure out how to get around the AI algorithms. He says it's a little scary to think an AI might mislabel a real news story as fake (known as a false positive).

Book author Darren Campo from the NYU Stern School of Business says fake news is primarily about an emotional response. He says people won't care if an AI has identified news as fake. What they often care about is whether the news matches up with their own worldview.

"Fake news protects itself by embedding a fact in terms that can be defended," he tells Fox News. "While artificial intelligence can identify a fact as incorrect, the AI cannot comprehend the context in which people enjoy believing a lie."

That's at least good news for the three-headed alien.

An AI Hedge Fund Created a New Currency to Make Wall Street … – WIRED

Posted: at 4:14 am

Wall Street is a competition, a Darwinian battle for the almighty dollar. Gordon Gekko said that greed is good, that it "captures the essence of the evolutionary spirit." A hedge fund hunts for an edge and then maniacally guards it, locking down its trading data and barring its traders from joining the company next door. The big bucks lie in finding market inefficiencies no one else can, succeeding at the expense of others. But Richard Craib wants to change that. He wants to transform Wall Street from a cutthroat competition into a harmonious collaboration.

This morning, the 29-year-old South African technologist and his unorthodox hedge fund, Numerai, started issuing a new digital currency. Kind of. Craib's idea is so weird, so unlike anything else that has preceded it, that naming it becomes an exercise in approximation. Inspired by the same tech that underpins bitcoin, his creation joins a growing wave of what people in the world of crypto-finance call digital tokens, internet-based assets that enable the crowdsourcing of everything from venture capital to computing power. Craib hopes his particular token can turn Wall Street into a place where everyone's on the same team. It's a strange, complicated, and potentially powerful creation that builds on an already audacious arrangement, a new configuration of technology and money that calls into question the market's most cherished premise. Greed is still good, but it's better when people are working together.

Based in San Francisco, Numerai is a hedge fund in which an artificially intelligent system chooses all the trades. But it's not a system Craib built alone. Instead, several thousand anonymous data scientists compete to create the best trading algorithms, and win bitcoin for their efforts. The whole concept may sound like a bad Silicon Valley joke. But Numerai has been making trades in this way for more than a year, and Craib says it's making money. It's also attracted marquee backers like Howard Morgan, a founder of Renaissance Technologies, the wildly successful hedge fund that pioneered an earlier iteration of tech-powered trading.

The system is elegant in its way: Numerai encrypts its trading data before sharing it with the data scientists to prevent them from mimicking the fund's trades themselves. At the same time, the company carefully organizes this encrypted data in a way that allows the data scientists to build models that are potentially able to make better trades. The crowdsourced approach seems to be working, to a point. But in Craib's eyes, the system still suffers from a major drawback: if the best scientist wins, that scientist has little incentive to get other talented colleagues involved. The wisdom of the crowd runs up against Wall Street's core ethos of self-interest: make the most money for yourself.

That's where Craib's new token comes in. Craib and company believe Numerai can become even more successful if it can align the incentives of everyone involved. They hope its new kind of currency, Numeraire, will turn its online competition into a collaboration, and turn Wall Street on its head in the process.

In its first incarnation, Numerai was flawed in a notable way. The company doled out bitcoin based on models that performed successfully on the encrypted test data before the fund ever tested them on the live market. That setup encouraged the scientists to game the system, to look out for themselves rather than the fund as a whole. "It judged based on what happened in the past, not on what will happen in the future," says Fred Ehrsam, co-founder of marquee bitcoin company Coinbase and a Wall Street veteran.

But Craib feels the system was flawed in another way: the same way all of Wall Street is flawed. The data scientists were still in competition. They were fighting each other rather than fighting for the same goal. It was in their best interest to keep the winnings to themselves. If they spread the word, the added competition could cut into their winnings. Though the scientists were helping to build one master AI, they were still at odds. The fund and its creators were at cross-purposes.

Today, to fix that problem, Numerai has distributed Numeraire, 1,000,000 tokens in all, to 12,000 participating scientists. The higher the scientists sit on the leaderboard, the more Numeraire they receive. But it's not really a currency they can use to pay for stuff. It's a way of betting that their machine learning models will do well on the live market. If their trades succeed, they get their Numeraire back as well as a payment in bitcoin, a kind of dividend. If their trades go bust, the company destroys their Numeraire, and they don't get paid.

The new system encourages the data scientists to build models that work on live trades, not just test data. The value of Numeraire also grows in proportion to the overall success of the hedge fund, because Numerai will pay out more bitcoin to data scientists betting Numeraire as the fund grows. "If Numerai were to pay out $1 million per month to people who staked Numeraire, then the value of Numeraire will be very high, because staking Numeraire will be the only way to earn that $1 million," Craib says.
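
The staking mechanics read like a small settlement function. This is only a sketch of the incentive rules described above, not Numerai's actual Ethereum contract: the names, the pro-rata payout rule, and the single payout pool are all assumptions made for illustration.

```python
# Simplified sketch of stake-and-burn settlement: scientists stake
# Numeraire on their models; winners get the stake back plus a
# pro-rata bitcoin payout, losers have their stake destroyed.
def settle(stakes, model_succeeded, payout_pool_btc):
    """stakes: {scientist: numeraire}; returns (balances, btc payouts).
    Assumes at least one model succeeded."""
    winners = {s: amt for s, amt in stakes.items() if model_succeeded[s]}
    total = sum(winners.values())
    balances, payouts = {}, {}
    for s, amt in stakes.items():
        if model_succeeded[s]:
            balances[s] = amt                            # stake returned
            payouts[s] = payout_pool_btc * amt / total   # pro-rata dividend
        else:
            balances[s] = 0.0                            # stake burned
            payouts[s] = 0.0
    return balances, payouts

stakes = {"alice": 100.0, "bob": 300.0, "carol": 50.0}
succeeded = {"alice": True, "bob": True, "carol": False}
balances, payouts = settle(stakes, succeeded, payout_pool_btc=4.0)
print(balances)
print(payouts)
```

Because payouts come from a shared pool that grows with the fund, every staker gains when other stakers build better models, which is the alignment the token is designed to create.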

It's a tricky but ingenious logic: everyone betting Numeraire has an incentive to get everyone else to build the best models possible, because the more the fund grows, the bigger the dividends for all. Everyone involved has the incentive to recruit yet more talent, a structure that rewards collaboration.

What's more, though Numeraire has no stated value in itself, it will surely trade on secondary markets. The most likely buyers will be successful data scientists seeking to increase their caches so they can place bigger bets in search of more bitcoin rewards. But even those who don't bet will see the value of their Numeraire grow if the fund succeeds and secondary demand increases. As it trades, Numeraire becomes something kind of like a stock and kind of like its own currency.

For Craib, a trained mathematician with an enormous wave of curly hair topping his 6-foot-4-inch frame, the hope is that Numeraire will encourage Wall Street to operate more like an open source software project. In software, when everyone shares with everyone else, all benefit from the collaboration: The software gets better. Google open sourced its artificial intelligence engine, for instance, because improvements made by others outside the company will make the system more valuable for Google, too.

"Why is tech positive-sum and finance zero-sum?" Craib asks. "The tech companies benefit from network effects, where people behave differently because they are trying to build a network, rather than trying to compete."

Craib and company built their new token atop Ethereum, a vast online ledger (a blockchain) where anyone can build a bitcoin-like token driven by a self-operating software program, or "smart contract." If it catches on the way bitcoin has, everyone involved has the incentive to (loudly) promote this new project and (manically) push it forward in new ways.

But getting things right isn't easy. "The risk is that the crypto-economic model is wrong," says Ehrsam. "Tokens let you set up incentive structures and program them directly. But just like monetary policy at, say, the Federal Reserve, it's not always easy to get those incentive structures right."

In other words, Craib's game theory might not work. People and economies may not behave as he assumes they will. Also, blockchains aren't hack-proof. A bug brought down the DAO, a huge effort to crowdsource venture capital on a blockchain. Hackers found a hole in the system and made off with $50 million.

Craib may also be overthinking the situation, looking for complex technological solutions to a problem that doesn't require anything as elaborate as Numeraire. "Their model seems overly complicated. It's not clear why they need it," says Michael Wellman, a University of Michigan professor who specializes in game theory and new financial services. "It's not like digital currency has magical properties." Numerai could try a much more time-honored approach to recruiting the most talented data scientists, Wellman says: pay them.

After today, Craib and the rest of Wall Street will start to see whether something like Numeraire can truly imbue the most ruthless of markets with a cooperative spirit. Those thousands of data scientists didn't know Numeraire was coming, but if the network effects play out like Craib hopes they will, many of those scientists have just gotten very, very rich. Still, that isn't his main purpose. Craib's goals are bigger than just building a hedge fund with crowdsourced AI. He wants to change the very nature of Wall Street, and maybe capitalism. Competition has made a lot of people wealthy. Maybe collaboration could enrich many more.

If AI Can Fix Peer Review in Science, AI Can Do Anything | WIRED – WIRED

Posted: at 4:14 am

Here's how science works: you have a question about some infinitesimal sliver of the universe. You form a hypothesis, test it, and eventually gather enough data to support or disprove what you thought was going on. That's the fun part. The next bit is less glamorous: you write a manuscript, submit it to an academic journal, and endure the gauntlet of peer review, where a small group of anonymous experts in your field scrutinize the quality of your work.

Peer review has its flaws. Human beings (even scientists) are biased, lazy, and self-interested. Sometimes they suck at math (even scientists). So, perhaps inevitably, some people want to remove humans from the process and replace them with artificial intelligence. Computers are, after all, unbiased, sedulous, and lack a sense of identity. They are also, by definition, good at math. And scientists aren't just waiting around for some binary brain to manifest a set of protocols for identifying experimental excellence. Journal publishers are already building this stuff, piecemeal.

Recently, a competition called ScienceIE challenged teams to create programs that could extract the basic facts out of sentences in scientific papers, and compare those to the basic facts from sentences in other papers. "The broad goal of my project is to help scientists and practitioners gain more knowledge about a research area more quickly," says Isabelle Augenstein, a post-doctoral AI researcher at University College London, who devised the challenge.

That's a tiny part of artificial intelligence's biggest challenge: processing natural human language. Competitors designed programs to tackle three subtasks: reading each paper and identifying its key concepts, organizing key words by type, and identifying relationships between different key phrases. And it's not just an academic exercise: Augenstein is on a two-year contract with Elsevier, one of the world's largest publishers of scientific research, to develop computational tools for their massive library of manuscripts.
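
The first subtask, keyphrase identification, can be caricatured with a frequency heuristic. The stopword list and the toy abstract below are invented for the example; the competitors' actual systems used trained sequence-labeling models, not word counts.

```python
# Naive sketch of keyphrase identification: rank content words by
# frequency after dropping stopwords. STOPWORDS is invented here.
from collections import Counter
import re

STOPWORDS = {"the", "a", "of", "and", "in", "to", "is", "we", "for", "on"}

def keyphrases(text, top_k=3):
    """Return the top_k most frequent non-stopword terms in text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [w for w, _ in counts.most_common(top_k)]

abstract = ("We study convolutional networks for image segmentation. "
            "Convolutional layers extract features, and segmentation "
            "quality depends on the features the networks learn.")
print(keyphrases(abstract))
```

The gap between this heuristic and what the task actually demands (typing each concept, then relating concepts across papers) is a fair proxy for why the winning team still scored only 43 percent.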

She has her work cut out for her. Elsevier publishes over 7,500 different journals. Each has an editor, who has to find the right reviewer for each manuscript. (In 2015, 700,000 peer reviewers reviewed over 1.8 million manuscripts across Elsevier's journals; 400,000 were eventually published.) "The number of humans capable of reviewing a proposal is generally limited to the specialists in that field," says Mike Warren, AI veteran and CTO/co-founder of Descartes Labs, a digital mapping company that uses AI to parse satellite images. "So, you've got this small set of people with PhDs, and you keep dividing them into disciplines and sub-disciplines, and when you're done there might only be 100 people on the planet qualified to review a certain manuscript." Augenstein's work is part of Elsevier's effort to automatically suggest the right reviewers for each manuscript.

Elsevier has developed a suite of automated tools, called Evise, to aid in peer review. The program checks for plagiarism (although that's not really AI, just a search-and-match function), clears potential reviewers for things like conflicts of interest, and handles workflow between authors, editors, and reviewers. Several other major publishers have automated software to aid peer review. Springer-Nature, for instance, is currently trialing an independently developed software package called StatReviewer that ensures each submitted paper has complete and accurate statistical data.

But none seem as open about their capabilities or aspirations as Elsevier. "We are investigating more ambitious tasks," says Augenstein. "Say you have a question about a paper: a machine learning model reads the paper and answers your question."

Not everyone is charmed by the prospect of Dr. Roboto, PhD. Last month, Janne Hukkinen, professor of environmental policy at the University of Helsinki, Finland, and editor of the Elsevier journal Ecological Economics, wrote a cautionary op-ed for WIRED, premised on a future where AI peer review became fully autonomous:

"I don't see why learning algorithms couldn't manage the entire review from submission to decision by drawing on publishers' databases of reviewer profiles, analyzing past streams of comments by reviewers and editors, and recognizing the patterns of change in a manuscript from submission to final editorial decision. What's more, disconnecting humans from peer review would ease the tension between the academics who want open access and the commercial publishers who are resisting it."

By Hukkinen's logic, an AI that could do peer review could also write manuscripts. Eventually, people become a legacy system within the scientific method: redundant, inefficient, obsolete. His final argument: new knowledge which humans no longer experience as something they themselves have produced would shake the foundations of human culture.

But Hukkinen's dark vision of machines capable of outthinking human scientists is, at the very least, decades away. "AI, despite its big successes in games like chess, Go, and poker, still can't understand most normal English sentences, let alone scientific text," says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence. Consider this: The winning team from Augenstein's ScienceIE competition scored 43 percent across the three subtasks.

And even non-computer brains have a hard time comprehending the passive-voiced mumbo jumbo common in scientific manuscripts; it is not uncommon for inscriptions within the literature to be structured such that the phenomenon being discussed is often described, after layers of prepositional preamble, and in vernacular that is vague, esoteric, and exorbitant, as being acted upon by causative factors. Linguists call anything written by humans, for humans, "natural language." Computer scientists call natural language a hot mess.

"One large category of problems in natural language for AI is ambiguity," says Ernest Davis, a computer scientist at NYU who studies common sense processing. Let's take a classic example of ambiguity, illustrated in this sentence by Stanford University emeritus computer scientist Terry Winograd:

The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.

To you and me, the verbs give away who "they" refers to: the city council fears; the demonstrators advocate. But a computer brain would have a hell of a time figuring out which noun the pronoun picks out. And that type of ambiguity is just one thread in the tangled knot of natural language, from simple things like understanding homographs to unraveling the logic of narratives.
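The failure mode is easy to demonstrate. A naive "most recent noun" heuristic, used here as a stand-in for shallow coreference methods, returns the same antecedent for both versions of Winograd's sentence, while a human reading flips with the verb.

```python
# Naive coreference: resolve a pronoun to whichever candidate noun
# appears last in the text before the pronoun. This ignores the verb
# entirely, which is exactly why Winograd schemas defeat it.
def nearest_antecedent(sentence, pronoun, candidates):
    """Return the candidate noun occurring latest before the pronoun."""
    prefix = sentence.split(pronoun)[0]
    positions = {c: prefix.rfind(c) for c in candidates if c in prefix}
    return max(positions, key=positions.get)

candidates = ["councilmen", "demonstrators"]
s1 = "The city councilmen refused the demonstrators a permit because they feared violence."
s2 = "The city councilmen refused the demonstrators a permit because they advocated violence."
a1 = nearest_antecedent(s1, "they", candidates)
a2 = nearest_antecedent(s2, "they", candidates)
# Both resolve to "demonstrators": right for s2, wrong for s1.
```

Getting s1 right requires world knowledge, that councils granting permits are the ones who fear violence, which is not in the sentence at all.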

That's not even touching on the specific issues in scientific papers, like connecting a written argument to some pattern in the data. This is even the case in pure mathematics papers. "Going from English to the formal logic of mathematics is not something we can automate," says Davis. "And that would be one of the easiest things to work on, because it's highly restrictive and we understand the targets." Disciplines that aren't rooted in mathematics, like psychology, will be even more difficult. "In psychology papers, we're nowhere near being able to check the reasonableness of arguments," says Davis. "We don't know how to express the experiment in a way that a computer could use it."

And of course, a fully autonomous AI peer reviewer doesn't just have to outread humans, it has to outthink them. "When you think about AI problems, peer review is probably among the very hardest you can come up with, since the most important part of peer review is determining that research is novel, it's something that has not been done before by someone else," says Warren. A computer program might be able to survey the literature and figure out which questions remain, but would it be able to pick out research of Einsteinian proportions: some new theory that completely upends previous assumptions about how the world works?

Then again, what if everyone, AI advocates and critics alike, is looking at the problem backwards? "Maybe we just need to change the way we do scientific publishing," says Tom Dietterich, AI researcher at Oregon State University. "So, rather than writing our research as a story in English, we link our claims and evidence into a formalized structure, like a database, containing all the things that are known about a problem people are working on." Computerize the process of peer review, in other words, rather than its solution. But at that point it's not computers you're reprogramming: you're reprogramming human behavior.
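Dietterich's database-of-claims idea could be sketched as linked records. This is a hypothetical schema invented for illustration, not any existing publishing standard.

```python
# A toy formalized structure for scientific claims and their evidence,
# in place of English prose. A machine could check novelty by querying
# for an equivalent statement instead of parsing a paper.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    description: str   # e.g. study design and statistics
    dataset: str       # identifier of the underlying data

@dataclass
class Claim:
    statement: str
    evidence: list = field(default_factory=list)
    contradicts: list = field(default_factory=list)  # ids of prior claims

claim = Claim(
    statement="Drug X reduces symptom Y in adults",
    evidence=[Evidence("randomized trial, n=200, p=0.01", "trial-2017-03")],
)
n_evidence = len(claim.evidence)
```

The hard part, as the article notes, is not the data structure but persuading authors to publish into it.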


If AI Can Fix Peer Review in Science, AI Can Do Anything | WIRED - WIRED


Want Your Company To Stay Relevant? Start Learning How To Harness AI – Forbes

Posted: at 4:14 am

Want Your Company To Stay Relevant? Start Learning How To Harness AI
Forbes
Artificial intelligence (AI) has the power to change how our workforce operates, and if you want your business to stay competitive, you need to get ahead of the AI revolution. Don't let yourself feel daunted by it as a buzzword. At my current company ...



Want Your Company To Stay Relevant? Start Learning How To Harness AI - Forbes


Meitu’s new phone uses AI to snap better selfies – Engadget

Posted: at 4:14 am

According to Meitu, Magical AI Beautification will enhance group photos as well as selfies, detecting and adjusting each face individually. It will whiten your teeth, get rid of those bags under your eyes, smooth your skin, add radiance to your face and apply some stylish filters. The feature works on real-time videos too.

Over a billion people have downloaded and installed the Meitu app. It was a viral hit on social media last year, turning everyone's selfies into kawaii anime characters. But, the fun ended quickly when people discovered the app asks for access to a lot of personal data, including your calendar, contacts, SMS messages and location. Meitu claims it collected the data because it needed a workaround for Apple and Google's tracking services, which are blocked in China.

The T8 is not the first "selfie smartphone" to come out of China. Both Lenovo and Oppo released low-to-mid range devices last year with powerful front-facing cameras, but Meitu says the T8 is the first smartphone to offer DSLR-type performance and photo quality through its dual pixel technology.

The T8's other specs include a full metallic body, a 21-megapixel rear-facing camera, a 2.3GHz processor, a 5.2-inch AMOLED display, 4GB of RAM and 128GB of onboard storage. And yes, it has a headphone jack. It's currently available on Meitu's website (accessible in China only) and costs 3299 RMB ($479 USD). It'll be available to buy at online retailers Tmall, suning.com and jd.com on February 22nd. There's no word yet on whether the T8 will make it to the US.


Meitu's new phone uses AI to snap better selfies - Engadget


Why AI could be Silicon Valley’s latest ‘micro bubble’ – Yahoo Finance – Yahoo Finance

Posted: February 20, 2017 at 7:18 pm

2016 was supposed to be the year the tech bubble finally burst.

Much like the dot-com bubble of the early 2000s (an industry implosion marked by high-profile flops such as online grocery delivery startup Webvan and pet supplies retailer Pets.com), skeptics pointed to less VC funding in 2016, stratospheric valuations including Uber's $69 billion, the sales of once-pricey companies such as One Kings Lane, and sky-high rental and real estate prices.

And contrary to tech insiders who largely remain bullish on the industry, some even saw smaller signs of a bubble in the hours-long bumper-to-bumper traffic on the US-101, a highway that meanders its way down the peninsula to tech-laden cities such as Menlo Park, San Jose and Mountain View.

But after more than six years in Silicon Valley collectively, I'm convinced there isn't one big bubble these days, but rather a series of smaller bubbles within tech that balloon and swell until they burst, taking with them the droves of copycat derivatives and poorly managed companies all trying to capitalize on the latest, frothiest trend.

Ask just about any venture capitalist at this moment, and they'll tell you they're seeing a glut of artificial intelligence and machine learning startups flow their way angling for cash, employing increasingly complex algorithms across a wide range of industries.

While some of these new companies may fulfill actual needs, there may simply be more AI startups than the world needs.

The pets.com sock puppet dog stars in a commercial for the company, Los Angeles, California, January 11, 2000. Photo by Bob Riha/Liaison/Getty Images

Of course, some AI startups are more promising than others. Andreessen Horowitz general partner Vijay Pande told Yahoo Finance he is particularly bullish on companies such as Freenome, which the firm invested in last June. The Palo Alto-based startup uses machine learning to help detect different types of cancers from a blood test rather than from a tissue sample, a process that detects cancer long before more traditional methods can. Another Pande investment, health tech startup Cardiogram, is promising because it makes sense of and analyzes large amounts of user data to provide actionable insights that could ultimately save lives.

Some A.I. ventures are trying to shake up other long-standing industries, like the San Carlos, Calif.-based Farmers Business Network, a social network for, well, farmers, that relies on machine learning to improve data results around seed performance and pricing. And there are many, many more.

While it's too early to tell which of those startups will evolve into viable businesses and which won't, it's relatively easy, looking back over the last decade, to see past micro-bubbles for what they actually were.

Alex Mittal, CEO and co-founder of the FundersClub, an online VC firm which invests in promising tech startups, agrees Silicon Valley has found itself swept up in macro-trends over the years that come and go in predictable cycles.

"Every time there's a focus on a technology that's new, it gets overhyped, and the hype reaches an extreme," Mittal told Yahoo Finance. "The pendulum always seems to swing too far, and there's some sort of correction. Sometimes, it literally was just hype. There's no substance, and then it goes away. But sometimes, there's something really there."

The 2008 financial crisis, interestingly, spawned the first micro-bubble: the sharing economy, a business model based on the idea that assets or services are shared between people through the internet or mobile. Airbnb, founded in 2008, singlehandedly legitimized the idea of couch-surfing as a hotel alternative by easily letting people rent out a room, an apartment or a home; Uber in 2009 upended the crusty, old taxi industry by creating a network of private drivers reachable with just a few easy taps on the smartphone.


But while Airbnb and Uber have become bona fide global businesses, many more sharing economy upstarts failed to catch on. Remember the Uber copycat Sidecar? Shut down in 2015, because Uber and Lyft had more money and an easier-to-understand user experience.

How about laundry delivery ventures Prim and Washio? Shuttered in 2014 and 2016, respectively, due to low profit margins and high infrastructure costs. Even businesses like Homejoy, an online marketplace for cleaning services, hit the skids despite many a venture capitalist crowing about what a promising business it was, apparently due to a lack of repeat customers and a slew of lawsuits.

Washio was the victim of another micro bubble.

On the heels of the sharing economy bubble came a slew of e-commerce startups like online design store Fab.com. It burned through a significant chunk of the $325 million it raised in aggressive attempts to expand globally, acquiring similar sites in Germany and England, before a spectacular crash-and-burn that few in Silicon Valley, including its investors, will forget anytime soon.

Meanwhile, once-promising home furnishings site One Kings Lane, which failed to differentiate itself enough from the glut of flash-sale sites, sold for just $12 million last August to Bed Bath & Beyond (a serious markdown from its $900 million valuation of yesteryear), and online furniture retailer Dot & Bo used up its $20 million in funding before shuttering last September.

The most recent micro-bubble to burst? On-demand food delivery startups. No fewer than a dozen food delivery startups have shuttered over the last 18 months, with names like Bento, Spoonrocket, Din, Kitchit, Kitchen Surfing, and the creatively named Take Eat Easy. Others, like Munchery, Zesty and Sprig, trudge on, but with considerably downsized workforces. Because, while people certainly enjoy good dining, there were simply too many startups for San Francisco locals to keep track of, and not enough interested mouths to feed. Indeed, Din founders Emily Olson and Rob LaFave pointed to an overly crowded market as a key reason for closing the startup in a postmortem interview with SF Eater in October.

Many of the venture capitalists and founders I've spoken to in recent months are hopeful that this latest boom in A.I. and machine learning startups isn't part of another micro-bubble in the way many sharing economy and e-commerce startups came and went in the past, largely because these technologies can ostensibly benefit and improve any industry, from health care to agriculture to consumer-focused virtual assistants. (Hello, Alexa.)

"A.I. is probably more accessible than it has ever been before," contended Peter Cahill, an authority on A.I. who has spent the last 15 years studying speech technology and neural networks from Dublin, Ireland. "It's easier for companies to see the clear benefits from it because technology has largely caught up."

Maybe they're right this time, or maybe the Farmers Business Networks of the world will eventually join the startup graveyard, alongside Fab.com, Homejoy and so many others. But as Mittal points out, this almost blinding sense of optimism, that any startup with a good idea can succeed, is what makes Silicon Valley unique and, dare I say it, innovative. Because for every 100 startups, 10 of them may become successful, and perhaps one has the potential to become the next transformative company like Facebook (FB).

"It's a large part of the reason I'm still here," Mittal confesses with a sheepish grin.

No doubt many in Silicon Valley would agree.

JP Mangalindan is a senior correspondent covering the intersection of business and technology.



Why AI could be Silicon Valley's latest 'micro bubble' - Yahoo Finance - Yahoo Finance

