
Category Archives: Ai

Google is helping fund AI news writers in the UK and Ireland – The Verge

Posted: July 8, 2017 at 4:14 am

Google is giving the Press Association news agency a grant of £706,000 ($806,000) to start writing stories with the help of artificial intelligence. The money is coming out of the tech giant's Digital News Initiative fund, which supports digital journalism in Europe. The PA supplies news stories to media outlets all over the UK and Ireland, and will be working with a startup named Urbs Media to produce 30,000 local stories a month with the help of AI.

The editor-in-chief of the Press Association, Peter Clifton, explained to The Guardian that the AI articles will be the product of collaboration with human journalists. Writers will create detailed story templates for topics like crime, health, and unemployment, and Urbs Media's Radar tool (the name stands for Reporters And Data And Robots) will fill in the blanks and help localize each article. This sort of workflow has been used by media outlets for years; the Los Angeles Times, for example, has been using AI to write news stories about earthquakes since 2014.

"Skilled human journalists will still be vital in the process," said Clifton, "but Radar allows us to harness artificial intelligence to scale up to a volume of local stories that would be impossible to provide manually."

The money from Google will also be used to build tools for scraping information from public databases in the UK, like those generated by local councils and the National Health Service. The Radar software will also auto-generate graphics for stories and add relevant videos and pictures. It is slated to enter use at the beginning of next year.

Some reporters in the UK, though, are skeptical about the new scheme. Tim Dawson, president of the National Union of Journalists, told The Guardian: "The real problem in the media is too little bona fide reporting. I don't believe that computer whizzbangery is going to replace that. What I'm worried about in my capacity as president of the NUJ is something that ends up with third-rate stories which look as if they are something exciting, but are computer-generated so [news organizations] can get rid of even more reporters."



Google DeepMind teams with Open AI to prevent a robot uprising – Engadget

Posted: at 4:14 am

Google DeepMind and OpenAI, a lab partially funded by Elon Musk, released a research article outlining a new method of machine learning, one that takes its cues from humans when learning new tasks. This could be safer than allowing an AI to figure out how to solve a problem on its own, which has the potential to introduce unwelcome surprises.

The main problem that the research tackled was when an AI discovers the most efficient way to achieve maximum rewards is to cheat -- the equivalent of shoving everything on the floor of your room into a closet and declaring it "clean." Technically, the room itself is clean, but that's not what's supposed to happen. Machines are able to find these workarounds and exploit them in any given problem.

The issue is with the reward system, and that's where the two groups focused their efforts. Rather than crafting an overly complex reward system that machines can cut through, the teams used human input to reward the AI. When the AI solved a problem the way trainers wanted it to, it got positive feedback. Using this method, the AI was able to learn to play simple video games.
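The actual paper learns a reward model from human preference comparisons between pairs of behaviors; as a much looser, hypothetical illustration of the core idea (human approval as the only reward signal), a bandit-style learner might look like this. The action names and the 20% exploration rate are invented for the example:

```python
import random

def train_with_feedback(human_approves, actions, episodes=500, seed=0):
    """Bandit-style learner where a human's thumbs-up is the only reward."""
    rng = random.Random(seed)
    value = {a: 0.0 for a in actions}   # learned rating per action
    for _ in range(episodes):
        # Explore 20% of the time; otherwise exploit the best-rated action.
        a = rng.choice(actions) if rng.random() < 0.2 else max(value, key=value.get)
        reward = 1.0 if human_approves(a) else 0.0
        value[a] += 0.1 * (reward - value[a])   # nudge rating toward feedback
    return max(value, key=value.get)

# The trainer only approves genuinely tidying the room, not hiding the mess,
# so the reward-hacking shortcut never accumulates value.
best = train_with_feedback(lambda a: a == "tidy_properly",
                           ["shove_in_closet", "tidy_properly"])
```

The point of the sketch is that the trainer's judgment, not a hand-coded scoring rule, decides which behavior gets reinforced, which is what closes the closet-shoving loophole.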

While this is an encouraging breakthrough, it's not widely applicable: This type of human feedback is much too time consuming. But through collaborations like this, it's possible that we can control and direct the development of AI and prevent machines from eventually becoming smart enough to destroy us all.



How AI detectives are cracking open the black box of deep learning – Science Magazine

Posted: July 7, 2017 at 2:13 am

By Paul Voosen | Jul. 6, 2017, 2:00 PM

Jason Yosinski sits in a small glass box at Uber's San Francisco, California, headquarters, pondering the mind of an artificial intelligence. An Uber research scientist, Yosinski is performing a kind of brain surgery on the AI running on his laptop. Like many of the AIs that will soon be powering so much of modern life, including self-driving Uber cars, Yosinski's program is a deep neural network, with an architecture loosely inspired by the brain. And like the brain, the program is hard to understand from the outside: it's a black box.

This particular AI has been trained, using a vast sum of labeled images, to recognize objects as random as zebras, fire trucks, and seat belts. Could it recognize Yosinski and the reporter hovering in front of the webcam? Yosinski zooms in on one of the AI's individual computational nodes (the neurons, so to speak) to see what is prompting its response. Two ghostly white ovals pop up and float on the screen. This neuron, it seems, has learned to detect the outlines of faces. "This responds to your face and my face," he says. "It responds to different size faces, different color faces."

No one trained this network to identify faces; humans weren't labeled in its training images. Yet learn faces it did, perhaps as a way to recognize the things that tend to accompany them, such as ties and cowboy hats. The network is too complex for humans to comprehend its exact decisions. Yosinski's probe had illuminated one small part of it, but overall, it remained opaque. "We build amazing models," he says. "But we don't quite understand them. And every year, this gap is going to get a bit larger."

Each month, it seems, deep neural networks, or deep learning, as the field is also called, spread to another scientific discipline. They can predict the best way to synthesize organic molecules. They can detect genes related to autism risk. They are even changing how science itself is conducted. The AIs often succeed in what they do. But they have left scientists, whose very enterprise is founded on explanation, with a nagging question: Why, model, why?

That interpretability problem, as it's known, is galvanizing a new generation of researchers in both industry and academia. Just as the microscope revealed the cell, these researchers are crafting tools that will allow insight into how neural networks make decisions. Some tools probe the AI without penetrating it; some are alternative algorithms that can compete with neural nets, but with more transparency; and some use still more deep learning to get inside the black box. Taken together, they add up to a new discipline. Yosinski calls it "AI neuroscience."

Loosely modeled after the brain, deep neural networks are spurring innovation across science. But the mechanics of the models are mysterious: They are black boxes. Scientists are now developing tools to get inside the mind of the machine.

GRAPHIC: G. GRULLÓN/SCIENCE

Marco Ribeiro, a graduate student at the University of Washington in Seattle, strives to understand the black box by using a class of AI neuroscience tools called counterfactual probes. The idea is to vary the inputs to the AI (be they text, images, or anything else) in clever ways to see which changes affect the output, and how. Take a neural network that, for example, ingests the words of movie reviews and flags those that are positive. Ribeiro's program, called Local Interpretable Model-Agnostic Explanations (LIME), would take a review flagged as positive and create subtle variations by deleting or replacing words. Those variants would then be run through the black box to see whether it still considered them to be positive. On the basis of thousands of tests, LIME can identify the words (or parts of an image or molecular structure, or any other kind of data) most important in the AI's original judgment. The tests might reveal that the word "horrible" was vital to a panning or that "Daniel Day Lewis" led to a positive review. But although LIME can diagnose those singular examples, that result says little about the network's overall insight.
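LIME itself fits a local linear model over many random perturbations; a much-simplified leave-one-out sketch of the same counterfactual idea, with a toy word-counting function standing in for the black-box model, might look like:

```python
def black_box(words):
    # Toy stand-in for the sentiment model: +1 per positive, -1 per negative word.
    positive, negative = {"masterful", "great"}, {"horrible", "dull"}
    return sum(w in positive for w in words) - sum(w in negative for w in words)

def most_important_word(words):
    """Leave-one-out counterfactual probe: which deletion moves the score most?"""
    base = black_box(words)
    deltas = {w: abs(base - black_box([x for x in words if x != w]))
              for w in words}
    return max(deltas, key=deltas.get)

review = "a truly horrible script overall".split()
word = most_important_word(review)   # the probe singles out "horrible"
```

The real method replaces this exhaustive deletion with random sampling plus a fitted surrogate model, but the diagnosis is the same: the input pieces whose removal most changes the output are the ones the model leaned on.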

New counterfactual methods like LIME seem to emerge each month. But Mukund Sundararajan, a computer scientist at Google, devised a probe that doesn't require testing the network a thousand times over: a boon if you're trying to understand many decisions, not just a few. Instead of varying the input randomly, Sundararajan and his team introduce a blank reference (a black image, or a zeroed-out array in place of text) and transition it step-by-step toward the example being tested. Running each step through the network, they watch the jumps it makes in certainty, and from that trajectory they infer features important to a prediction.

Sundararajan compares the process to picking out the key features that identify the glass-walled space he is sitting in (outfitted with the standard medley of mugs, tables, chairs, and computers) as a Google conference room. "I can give a zillion reasons. But say you slowly dim the lights. When the lights become very dim, only the biggest reasons stand out." Those transitions from a blank reference allow Sundararajan to capture more of the network's decisions than Ribeiro's variations do. But deeper, unanswered questions are always there, Sundararajan says, a state of mind familiar to him as a parent. "I have a 4-year-old who continually reminds me of the infinite regress of 'Why?'"
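The stepwise "dimming" Sundararajan describes can be sketched directly. This is only an illustration of the baseline-to-input walk, with a toy threshold function standing in for the network (his published method, integrated gradients, accumulates gradients along this path rather than just spotting the biggest jump):

```python
def model(x):
    # Toy network: confidence saturates once feature 0 crosses a threshold.
    return 1.0 if x[0] > 0.5 else x[0]

def biggest_jump_step(x, steps=10):
    """Walk from an all-zero baseline toward input x and find the decisive step."""
    scores = [model([v * k / steps for v in x]) for k in range(steps + 1)]
    jumps = [b - a for a, b in zip(scores, scores[1:])]
    return jumps.index(max(jumps))   # the step where a key feature 'switches on'

step = biggest_jump_step([1.0, 0.3])
```

The one sweep from blank to full input replaces thousands of random perturbations, which is why this style of probe scales to many decisions.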

The urgency comes not just from science. According to a directive from the European Union, companies deploying algorithms that substantially influence the public must by next year create explanations for their models' internal logic. The Defense Advanced Research Projects Agency, the U.S. military's blue-sky research arm, is pouring $70 million into a new program, called Explainable AI, for interpreting the deep learning that powers drones and intelligence-mining operations. The drive to open the black box of AI is also coming from Silicon Valley itself, says Maya Gupta, a machine-learning researcher at Google in Mountain View, California. When she joined Google in 2012 and asked AI engineers about their problems, accuracy wasn't the only thing on their minds, she says. "I'm not sure what it's doing," they told her. "I'm not sure I can trust it."

Rich Caruana, a computer scientist at Microsoft Research in Redmond, Washington, knows that lack of trust firsthand. As a graduate student in the 1990s at Carnegie Mellon University in Pittsburgh, Pennsylvania, he joined a team trying to see whether machine learning could guide the treatment of pneumonia patients. In general, sending the hale and hearty home is best, so they can avoid picking up other infections in the hospital. But some patients, especially those with complicating factors such as asthma, should be admitted immediately. Caruana applied a neural network to a data set of symptoms and outcomes provided by 78 hospitals. It seemed to work well. But disturbingly, he saw that a simpler, transparent model trained on the same records suggested sending asthmatic patients home, indicating some flaw in the data. And he had no easy way of knowing whether his neural net had picked up the same bad lesson. "Fear of a neural net is completely justified," he says. "What really terrifies me is what else did the neural net learn that's equally wrong?"

Today's neural nets are far more powerful than those Caruana used as a graduate student, but their essence is the same. At one end sits a messy soup of data, say, millions of pictures of dogs. Those data are sucked into a network with a dozen or more computational layers, in which neuron-like connections fire in response to features of the input data. Each layer reacts to progressively more abstract features, allowing the final layer to distinguish, say, terrier from dachshund.

At first the system will botch the job. But each result is compared with labeled pictures of dogs. In a process called backpropagation, the outcome is sent backward through the network, enabling it to reweight the triggers for each neuron. The process repeats millions of times until the network learns, somehow, to make fine distinctions among breeds. "Using modern horsepower and chutzpah, you can get these things to really sing," Caruana says. Yet that mysterious and flexible power is precisely what makes them black boxes.
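That loop (forward pass, error, gradient sent backward, reweighting) can be caricatured with a single weight. This hypothetical one-parameter example is the smallest possible instance of the procedure, not a real network:

```python
def train_weight(samples, lr=0.1, epochs=200):
    """Fit y = w*x by gradient descent: the simplest possible backprop loop."""
    w = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = w * x                     # forward pass
            grad = 2 * (pred - target) * x   # gradient of squared error, sent backward
            w -= lr * grad                   # reweight toward lower error
    return w

w = train_weight([(1.0, 2.0), (2.0, 4.0)])   # data drawn from y = 2x, so w -> 2
```

A deep net repeats exactly this update across millions of weights and layers at once, which is where both the power and the opacity come from.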

Gupta has a different tactic for coping with black boxes: she avoids them. Several years ago Gupta, who moonlights as a designer of intricate physical puzzles, began a project called GlassBox. Her goal is to tame neural networks by engineering predictability into them. Her guiding principle is monotonicity: a relationship between variables in which, all else being equal, increasing one variable directly increases another, as with the square footage of a house and its price.

Gupta embeds those monotonic relationships in sprawling databases called interpolated lookup tables. In essence, they're like the tables in the back of a high school trigonometry textbook where you'd look up the sine of 0.5. But rather than dozens of entries across one dimension, her tables have millions across multiple dimensions. She wires those tables into neural networks, effectively adding an extra, predictable layer of computation: baked-in knowledge that she says will ultimately make the network more controllable.
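Gupta's production tables are multidimensional and learned from data; a hypothetical one-dimensional version still conveys the inspectable, monotone idea (the square-footage numbers here are invented for illustration):

```python
from bisect import bisect_right

# Hypothetical monotone table: square footage -> price ($k). Values only rise.
TABLE = [(500, 100), (1000, 180), (2000, 320)]

def interpolate(x):
    """Piecewise-linear lookup: a bigger input can never yield a smaller output."""
    keys = [k for k, _ in TABLE]
    if x <= keys[0]:
        return float(TABLE[0][1])
    if x >= keys[-1]:
        return float(TABLE[-1][1])
    i = bisect_right(keys, x)                     # bracketing table entries
    (x0, y0), (x1, y1) = TABLE[i - 1], TABLE[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)  # linear blend between them
```

Because every prediction is a blend of two visible table entries, the model's behavior between any pair of inputs can be read straight off the table, which is the predictability Gupta is after.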

Caruana, meanwhile, has kept his pneumonia lesson in mind. To develop a model that would match deep learning in accuracy but avoid its opacity, he turned to a community that hasn't always gotten along with machine learning and its loosey-goosey ways: statisticians.

In the 1980s, statisticians pioneered a technique called a generalized additive model (GAM). It built on linear regression, a way to find a linear trend in a set of data. But GAMs can also handle trickier relationships by finding multiple operations that together can massage data to fit on a regression line: squaring a set of numbers while taking the logarithm for another group of variables, for example. Caruana has supercharged the process, using machine learning to discover those operations, which can then be used as a powerful pattern-detecting model. "To our great surprise, on many problems, this is very accurate," he says. And crucially, each operation's influence on the underlying data is transparent.
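As a toy instance of the additive idea (not Caruana's actual models), fitting y ≈ a·x + b·x² by least squares keeps each term's contribution separately readable, and recovering the coefficients reveals exactly which operation drove the fit:

```python
def fit_additive(samples):
    """Least-squares fit of y = a*x + b*x^2 via the 2x2 normal equations."""
    s11 = sum(x * x for x, _ in samples)      # <f1, f1> for f1(x) = x
    s12 = sum(x ** 3 for x, _ in samples)     # <f1, f2> for f2(x) = x^2
    s22 = sum(x ** 4 for x, _ in samples)     # <f2, f2>
    t1 = sum(x * y for x, y in samples)       # <f1, y>
    t2 = sum(x * x * y for x, y in samples)   # <f2, y>
    det = s11 * s22 - s12 * s12
    a = (t1 * s22 - t2 * s12) / det
    b = (t2 * s11 - t1 * s12) / det
    return a, b

# Data generated from y = 3x + 2x^2; the fit recovers both terms exactly,
# so a reader can see how much the linear vs. the squared effect matters.
a, b = fit_additive([(1, 5), (2, 14), (3, 27)])
```

A neural net fitting the same data would match the points just as well, but its "coefficients" would be smeared across thousands of weights with no such term-by-term reading.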

Caruana's GAMs are not as good as AIs at handling certain types of messy data, such as images or sounds, on which some neural nets thrive. But for any data that would fit in the rows and columns of a spreadsheet, such as hospital records, the model can work well. For example, Caruana returned to his original pneumonia records. Reanalyzing them with one of his GAMs, he could see why the AI would have learned the wrong lesson from the admission data. Hospitals routinely put asthmatics with pneumonia in intensive care, improving their outcomes. Seeing only their rapid improvement, the AI would have recommended the patients be sent home. (It would have made the same optimistic error for pneumonia patients who also had chest pain and heart disease.)

Caruana has started touting the GAM approach to California hospitals, including Children's Hospital Los Angeles, where about a dozen doctors reviewed his model's results. They spent much of that meeting discussing what it told them about pneumonia admissions, immediately understanding its decisions. "You don't know much about health care," one doctor said, "but your model really does."

Sometimes, you have to embrace the darkness. That's the theory of researchers pursuing a third route toward interpretability. Instead of probing neural nets, or avoiding them, they say, the way to explain deep learning is simply to do more deep learning.


Like many AI coders, Mark Riedl, director of the Entertainment Intelligence Lab at the Georgia Institute of Technology in Atlanta, turns to 1980s video games to test his creations. One of his favorites is Frogger, in which the player navigates the eponymous amphibian through lanes of car traffic to an awaiting pond. Training a neural network to play expert Frogger is easy enough, but explaining what the AI is doing is even harder than usual.

Instead of probing that network, Riedl asked human subjects to play the game and to describe their tactics aloud in real time. Riedl recorded those comments alongside the frog's context in the game's code: "Oh, there's a car coming for me; I need to jump forward." Armed with those two languages (the player's and the code's), Riedl trained a second neural net to translate between the two, from code to English. He then wired that translation network into his original game-playing network, producing an overall AI that would say, as it waited in a lane, "I'm waiting for a hole to open up before I move." The AI could even sound frustrated when pinned on the side of the screen, cursing and complaining, "Jeez, this is hard."

Riedl calls his approach "rationalization," which he designed to help everyday users understand the robots that will soon be helping around the house and driving our cars. "If we can't ask a question about why they do something and get a reasonable response back, people will just put it back on the shelf," Riedl says. But those explanations, however soothing, prompt another question, he adds: how wrong can the rationalizations be before people lose trust?

Back at Uber, Yosinski has been kicked out of his glass box. Uber's meeting rooms, named after cities, are in high demand, and there is no surge pricing to thin the crowd. He's out of Doha and off to find Montreal, Canada, unconscious pattern recognition processes guiding him through the office maze, until he gets lost. His image classifier also remains a maze, and, like Riedl, he has enlisted a second AI to help him understand the first one.

Researchers have created neural networks that, in addition to filling gaps left in photos, can identify flaws in an artificial intelligence.

PHOTOS: ANH NGUYEN

First, Yosinski rejiggered the classifier to produce images instead of labeling them. Then, he and his colleagues fed it colored static and sent a signal back through it to request, for example, "more volcano." Eventually, they assumed, the network would shape that noise into its idea of a volcano. And to an extent, it did: that volcano, to human eyes, just happened to look like a gray, featureless mass. The AI and people saw differently.

Next, the team unleashed a generative adversarial network (GAN) on its images. Such AIs contain two neural networks. From a training set of images, the generator learns rules about image-making and can create synthetic images. A second, adversary network tries to detect whether the resulting pictures are real or fake, prompting the generator to try again. That back-and-forth eventually results in crude images that contain features that humans can recognize.
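A real GAN pits two networks against each other over images; the back-and-forth dynamic can be caricatured in one dimension, where the "generator" is just a number chasing a "discriminator" boundary. This is entirely hypothetical, for intuition only, and every constant in it is invented:

```python
import random

def toy_gan(real_mean=5.0, steps=2000, lr=0.05, seed=1):
    """1-D adversarial loop: generator output g chases boundary d, which in
    turn tracks the midpoint between real and generated samples."""
    rng = random.Random(seed)
    g, d = 0.0, 0.0
    for _ in range(steps):
        real = real_mean + rng.gauss(0, 0.1)   # a sample of the 'real' data
        fake = g + rng.gauss(0, 0.1)           # the generator's current sample
        d += lr * ((real + fake) / 2 - d)      # discriminator splits the difference
        g += lr * (d - g)                      # generator moves toward the boundary
    return g

g = toy_gan()   # the generator's output drifts toward the real data's mean
```

The adversarial pressure, not any direct view of the real data, is what pulls the generator onto the real distribution, which is why GAN output ends up containing humanly recognizable structure.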

Yosinski and Anh Nguyen, his former intern, connected the GAN to layers inside their original classifier network. This time, when told to create "more volcano," the GAN took the gray mush that the classifier learned and, with its own knowledge of picture structure, decoded it into a vast array of synthetic, realistic-looking volcanoes. Some dormant. Some erupting. Some at night. Some by day. And some, perhaps, with flaws, which would be clues to the classifier's knowledge gaps.

Their GAN can now be lashed to any network that uses images. Yosinski has already used it to identify problems in a network trained to write captions for random images. He reversed the network so that it can create synthetic images for any random caption input. After connecting it to the GAN, he found a startling omission. Prompted to imagine a bird sitting on a branch, the network, using instructions translated by the GAN, generated a bucolic facsimile of a tree and branch, but with no bird. Why? After feeding altered images into the original caption model, he realized that the caption writers who trained it never described trees and a branch without involving a bird. The AI had learned the wrong lessons about what makes a bird. "This hints at what will be an important direction in AI neuroscience," Yosinski says. It was a start, a bit of a blank map shaded in.

The day was winding down, but Yosinski's work seemed to be just beginning. Another knock on the door. Yosinski and his AI were kicked out of another glass box conference room, back into Uber's maze of cities, computers, and humans. He didn't get lost this time. He wove his way past the food bar, around the plush couches, and through the exit to the elevators. It was an easy pattern. He'd learn them all soon.



This startup is building AI to bet on soccer games – The Verge

Posted: at 2:13 am

Listen to Andreas Koukorinis, founder of UK sports betting company Stratagem, and you'd be forgiven for thinking that soccer games are some of the most predictable events on Earth. "They're short duration, repeatable, with fixed rules," Koukorinis tells The Verge. "So if you observe 100,000 games, there are patterns there you can take out."

The mission of Koukorinis' company is simple: find these patterns and make money off them. Stratagem does this either by selling the data it collects to professional gamblers and bookmakers, or by keeping it and making its own wagers. To fund these wagers, the firm is raising money for a £25 million ($32 million) sports betting fund that it's positioning as an investment alternative to traditional hedge funds. In other words, Stratagem hopes rich people will give Stratagem their money. The company will gamble with it using its proprietary data, and, if all goes to plan, everyone ends up just that little bit richer.

It's a familiar story, but Stratagem is adding a little something extra to sweeten the pot: artificial intelligence.

At the moment, the company uses teams of human analysts spread out around the globe to report back on the various sporting leagues it bets on. This information is combined with detailed data about the odds available from various bookmakers to give Stratagem an edge over the average punter. But, in the future, it wants computers to do the analysis for it. It already uses machine learning to analyze some of its data (working out the best time to place a bet, for example), but it's also developing AI tools that can analyze sporting events in real time, drawing out data that will help predict which team will win.

Stratagem is using deep neural networks to achieve this task, the same technology that's enchanted Silicon Valley's biggest firms. It's a good fit, since this is a tool that's well-suited to analyzing vast pots of data. As Koukorinis points out, when analyzing sports, there's a hell of a lot of data to learn from. The company's software is currently absorbing thousands of hours of sporting fixtures to teach it patterns of failure and success, and the end goal is to create an AI that can watch a half-dozen different sporting events simultaneously on live TV, extracting insights as it does.

Stratagem's AI identifies players to make a 2D map of the game

At the moment, though, Stratagem is starting small. It's focusing on just a few sports (soccer, basketball, and tennis) and a few metrics (like goal chances in soccer). At the company's London offices, home to around 30 employees including ex-bankers and programmers, we're shown the fledgling neural nets for soccer games in action. On-screen, the output is similar to what you might see from the live feed of a self-driving car. But instead of the computer highlighting stop signs and pedestrians as it scans the road ahead, it's drawing a box around Zlatan Ibrahimović as he charges at the goal, dragging defenders in his wake.

Stratagem's AI makes its calculations watching a standard broadcast feed of the match. (Pro: it's readily accessible. Con: it has to learn not to analyze the replays.) It tracks the ball and the players, identifying which team they're on based on the color of their kits. The lines of the pitch are also highlighted, and all this data is transformed into a 2D map of the whole game. From this viewpoint, the software studies matches like an armchair general: it identifies what it thinks are goal-scoring chances, or the moments where the configuration of players looks right for someone to take a shot and score.

"Football is such a low-scoring game that you need to focus on these sorts of metrics to make predictions," says Koukorinis. "If there's a shot on target from 30 yards with 11 people in front of the striker and that ends in a goal, yes, it looks spectacular on TV, but it's not exciting for us. Because if you repeat it 100 times the outcomes won't be the same. But if you have Lionel Messi running down the pitch and he's one-on-one with the goalie, the conversion rate on that is 80 percent. We look at what created that situation. We try to take the randomness out, and look at how good the teams are at what they're trying to do, which is generate goal-scoring opportunities."
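The arithmetic behind betting on a conversion rate like that 80 percent is simple expected value. This hypothetical sketch uses decimal odds; the specific prices are invented for illustration:

```python
def expected_value(p_win, decimal_odds, stake=1.0):
    """EV of a bet: probability-weighted profit minus the stake lost otherwise."""
    return p_win * (decimal_odds - 1) * stake - (1 - p_win) * stake

# If the model believes an outcome is 80% likely but the bookmaker prices it
# at decimal odds of 1.5 (an implied probability of ~67%), the bet is
# positive-expectation: about +0.20 units of profit per unit staked.
ev = expected_value(0.8, 1.5)
```

A model edge only pays when it disagrees with the bookmaker's implied probability in the right direction; an accurate model that merely matches the market prices has an expected value of roughly zero after the bookmaker's margin.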

Whether or not counting goal-scoring opportunities is the best way to rank teams is difficult to say. Stratagem says it's a metric that's popular with professional gamblers, but they and the company weigh it with a lot of other factors before deciding how to bet. Stratagem also notes that the opportunities identified by its AI don't consistently line up with those spotted by humans. Right now, the computer gets it correct about 50 percent of the time. Despite this, the company says its current betting models (which it develops for soccer, but also basketball and tennis) are right more than enough times for it to make a steady return, though it won't share precise figures.

A team of 65 analysts collect data around the world

At the moment, Stratagem generates most of its data about goal-scoring opportunities and other metrics the old-fashioned way: using a team of 65 human analysts who write detailed match reports. The company's AI would automate some of this process and speed it up significantly. (Each match report takes about three hours to write.) Some forms of data-gathering would still rely on humans, however.

A key task for the company's agents is finding out a team's starting lineup before it's formally announced. (This is a major driver of pre-game betting odds, says Koukorinis, and knowing in advance helps you beat the market.) Acquiring this sort of information isn't easy. It means finding sources at a club, building up a relationship, and knowing the right people to call on match day. Chatbots just aren't up to the job yet.

Machine vision, though, is really just one element of Stratagem's AI business plan. It already applies machine learning to more mundane facets of betting, like working out the best time to place a bet in any particular market. In this regard, what the company is doing is no different from many other hedge funds, which for decades have been using machine learning to come up with new ways to trade. Most funds blend human analysis with computer expertise, but at least one is run completely by decisions generated by artificial intelligence.

However, simply adding more computers to the mix isn't always a recipe for success. There's data showing that if you want to make the most out of your money, it's better to just invest in the top-performing stocks of the S&P 500 than to sign up for an AI hedge fund. That's not the best sign that Stratagem's sports-betting fund will offer good returns, especially when such funds are already controversial.

In 2012, a sports-betting fund set up by UK firm Centaur Holdings collapsed just two years after it launched, losing $2.5 million after promising investors returns of 15 to 20 percent. To critics, operations like this are just borrowing the trappings of traditional funds to make gambling look more like investing.


David Stevenson, director of finance research company AltFi, told The Verge that there's nothing essentially wrong with these funds, but they need to be thought of as their own category. "I don't particularly doubt it's great fun [to invest in one] if you like sports and a bit of betting," said Stevenson. "But don't qualify it with the term investment, because investment, by its nature, has to be something you can predict over the long run."

Stevenson also notes that the AI hedge funds that are successful (those that torture the math within an inch of its life to eke out small but predictable profits) tend not to seek outside investment at all. They prefer keeping the money to themselves. "I treat most things that combine the acronym AI and the word investing with an enormous dessert spoon of salt," he said.

Whether or not Stratagem's AI can deliver insights that make sporting events as predictable as the tides remains to be seen, but the company's investment in artificial intelligence does have other uses. For starters, it can attract investors and customers looking for an edge in the world of gambling. It can also automate work that's currently done by the company's human employees, making it cheaper. As with other businesses that are using AI, it's these smaller gains that might prove to be most reliable. After all, small, reliable gains make for a good investment.



Mendel.ai raises $2M for AI-powered clinical trial matching platform – MobiHealthNews

Posted: at 2:13 am

San Francisco-based Mendel.ai, a startup that is developing an artificial intelligence-powered platform to match people with cancer to clinical trials, has raised $2 million in seed funding from DCM Ventures, BootstrapLabs, Indie Bio, LaunchCapital and SOSV. Mendel.ai will use the capital to forge partnerships with hospitals and cancer genomics companies to bring the system into use.

For $99, Mendel.ai will process an unlimited number of medical records for three months to match patients with potential clinical trials. Prospective trial participants can either upload records onto Mendel.ai's platform or give their doctors permission to share documents directly with the company. From there, a natural language processing algorithm combs through clinicaltrials.gov data, compares it to an individual's medical record, and responds with a list of personalized matches. Over the course of a user's time on the Mendel.ai platform, the system continuously updates matches, and patients can receive in-app requests to join trials. To improve the power of the platform immediately, Mendel.ai recommends patients undergo DNA testing.
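The output of such an NLP pipeline is structured eligibility criteria that can then be filtered mechanically rather than by keyword search. A hypothetical sketch of that final matching step (the field names, trial IDs, and criteria here are all invented for illustration, not Mendel.ai's actual schema):

```python
def match_trials(patient, trials):
    """Keep only trials whose structured eligibility fields the patient satisfies."""
    matches = []
    for trial in trials:
        c = trial["criteria"]
        if (c["condition"] == patient["condition"]
                and c["min_age"] <= patient["age"] <= c["max_age"]
                and (c["mutation"] == "any" or c["mutation"] in patient["mutations"])):
            matches.append(trial["id"])
    return matches

patient = {"condition": "lung cancer", "age": 61, "mutations": ["EGFR"]}
trials = [
    {"id": "T1", "criteria": {"condition": "lung cancer", "min_age": 18,
                              "max_age": 75, "mutation": "EGFR"}},
    {"id": "T2", "criteria": {"condition": "breast cancer", "min_age": 18,
                              "max_age": 75, "mutation": "any"}},
    {"id": "T3", "criteria": {"condition": "lung cancer", "min_age": 18,
                              "max_age": 75, "mutation": "ALK"}},
]
found = match_trials(patient, trials)   # only the EGFR lung-cancer trial survives
```

The hard part, which this sketch skips, is the NLP that turns a free-text eligibility section into those structured fields in the first place; once that is done, filtering hundreds of trials down to true matches is cheap.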

The company, named for the founder of modern genetics Gregor Mendel, was created out of frustration over inefficient clinical trial matching. After losing his aunt to cancer and later finding out she could have been connected with a nearby and potentially life-saving clinical trial, Mendel.ai CEO Dr. Karim Galil set out to improve the recruitment process. As it stands, the process is besieged by mountains of data and too little time for both physicians and patients. Doctors can't keep up with all the new clinical trial data as it comes out, and patients can be overwhelmed with selecting a trial from vast databases that work with keywords and typically spit out hundreds of possible matches, unfiltered for many eligibility factors.

"A lung cancer patient, for example, might find 500 potential trials on clinicaltrials.gov, each of which has a unique, exhaustive list of eligibility criteria that must be read and assessed," Galil told TechCrunch. "As this pool of trials changes each week, it is humanly impossible to keep track of all good matches."

Digital innovation activity in the clinical trials arena has been heating up as of late. There are now several companies offering different tools to improve study design, remote monitoring capabilities and patient recruitment and retention in clinical trials, and many are just getting off the ground.

Just last week, the Clinical Trials Transformation Initiative released new endpoint recommendations focused on the use of mobile technology in clinical trials. And in the past six months, there has been a slew of seed and early-stage funding for companies innovating in the space. Mobile data capture-focused Clinical Research IO raised $1.6 million in January. In March, Philadelphia-based VitalTrax raised $150,000 in seed funding to build out software to improve patient engagement in clinical trials. Medidata, a New York City-based company that offers cloud storage and data analytics services for clinical trials, announced in April its plans to acquire Mytrus, a clinical trial technology company focused on electronic informed consent and remote trials. Also in April, remote clinical trial company Science 37 raised $29 million to move forward with technology that allows patients to participate in trials from their homes. But while others are focusing on improving data collection quality or study efficiency, Mendel.ai's approach is on par with the likes of much larger companies like IBM Watson, which is also experimenting with artificial intelligence to match patients with clinical trials. At the beginning of June, IBM Watson shared data from a Novartis-sponsored pilot, wherein Watson processed data from 2,620 lung and breast cancer patients and was able to cut the time needed to screen for clinical trials by nearly 80 percent.

For Mendel.ai, the task at hand is to integrate with health organizations and cancer genomics centers. Currently, the company is working with the Comprehensive Blood & Cancer Center in Bakersfield, California, to enable the center's doctors to match their patients with trials. And while it's still early days, Galil told TechCrunch the company wants to see Mendel.ai go head-to-head with IBM Watson.


Posted in Ai | Comments Off on Mendel.ai raises $2M for AI-powered clinical trial matching platform – MobiHealthNews

H2O.ai’s Driverless AI automates machine learning for businesses … – TechCrunch

Posted: at 2:13 am

Driverless AI is the latest product from H2O.ai aimed at lowering the barrier to making data science work in a corporate context. The tool assists non-technical employees with preparing data, calibrating parameters and determining the optimal algorithms for tackling specific business problems with machine learning.

At the research level, machine learning problems are complex and unpredictable: combining GANs and reinforcement learning in a never-before-seen use case takes finesse. But the reality is that a lot of corporations today use machine learning for relatively predictable problems, such as evaluating default rates with a support vector machine.

But even these relatively straightforward problems are tough for non-technical employees to wrap their heads around. Companies are increasingly working data science into non-traditional sales and HR processes, attempting to train their way to costly innovation.

All of H2O.ai's products help to make AI more accessible, but Driverless AI takes things a step further by automating many of the tough decisions that need to be made when preparing a model. Driverless AI automates feature engineering, the process by which key variables are selected to build a model.

H2O built Driverless AI with popular use cases built-in, but it can't solve every machine learning problem. Ideally it can find and tune enough standard models to automate at least part of the long tail.
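As a rough illustration of what automated feature engineering means in practice (a generic sketch, not H2O's actual algorithm), a tool can generate candidate derived features and keep whichever best track the target:

```python
# Toy automated feature engineering: derive pairwise product features and
# rank all candidates by correlation with the target. Column names and
# numbers are invented for illustration.
from itertools import combinations

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def engineer(rows, target, top_k=2):
    """rows: dict of column name -> list of values."""
    candidates = dict(rows)
    # derived features: pairwise products of the raw columns
    for a, b in combinations(rows, 2):
        candidates[f"{a}*{b}"] = [x * y for x, y in zip(rows[a], rows[b])]
    scored = sorted(candidates, key=lambda c: -abs(pearson(candidates[c], target)))
    return scored[:top_k]

rows = {"x": [1, 2, 3, 4], "y": [2, 1, 4, 3]}
target = [2, 2, 12, 12]          # tracks the product x*y
print(engineer(rows, target))    # the derived x*y feature ranks first
```

Real tools search a much larger space of transformations (ratios, aggregations, target encodings) and score candidates with cross-validated model performance rather than raw correlation, but the select-generate-rank loop is the same shape.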

The company alluded to today's release back in January when it launched Deep Water, a platform allowing its customers to take advantage of deep learning and GPUs.

We're still in the very early days of machine learning automation. Google CEO Sundar Pichai generated a lot of buzz at this year's I/O conference when he provided details on the company's efforts to create an AI tool that could automatically select the best model and characteristics to solve a machine learning problem with trial, error and a ton of compute.

Driverless AI is an early step in the journey of democratizing and abstracting AI for non-technical users. You can download the tool and start experimenting here.


Posted in Ai | Comments Off on H2O.ai’s Driverless AI automates machine learning for businesses … – TechCrunch

This Startup Is Lowering Companies' Healthcare Costs With AI – Entrepreneur

Posted: at 2:13 am

Healthcare costs are rapidly increasing. Companies that provide health insurance for their employees have been hit with higher and higher premiums every year, with no end in sight.

One Chicago-based startup experiencing explosive growth has been tackling this very problem. This company leverages artificial intelligence and chatbot technology to help employees navigate their health insurance and use less costly services. As a result, both the employee and employer end up saving money.

Justin Holland, CEO and co-founder of HealthJoy, has a strong grasp on how chatbots are going to change healthcare and save companies money in the process. I spoke with Holland to get his take on what CEOs need to know about their health benefits and how to contain costs.


What's the biggest problem with employer-sponsored health insurance? Why have costs gone up year after year, faster than the rate of inflation?

One of the biggest issues for companies is that health insurance is kind of like giving your employees a credit card to go to a restaurant that doesn't have any prices. They are going to order whatever the waiter suggests that sounds good. They'll order the steak and lobster, a bottle of wine and dessert. Employees have no connection to the actual cost of any of the medical services they are ordering. Several studies show that the majority of employees don't understand the basic insurance terms needed to navigate insurance correctly. And it's not their fault. The system is unnecessarily complex. Companies have finally started to realize that if they want to lower their healthcare costs, they need to lower their claims. The only way they are going to do that is by educating their employees and helping them navigate the healthcare system. They need to provide advocates and other services that are always available to help.


I've had an advocacy service previously that was just a phone number, and I never used it. I actually forgot to use it all year and only remembered I had it when they changed my insurance plan and I saw the paperwork again. How is HealthJoy different? Is this where chatbots come in?

Phone-based advocacy services are great, but you've identified their biggest problem: no one uses them. They are cheap to provide, so a lot of companies will bundle them in with their employee benefits packages, but they have zero ROI or utilization. Our chatbot JOY is the hub for a lot of different employee benefits, including advocacy. JOY's main job is to route people to higher quality, less expensive care. She is fully supported by our concierge staff here in Chicago. They do things like call doctors' offices to book appointments, verify network participation and much more. Our app is extremely easy to use and has been refined over the last three years to get the maximum engagement and utilization from our members.


I've played around with your app. You offer a lot more than just an advocacy service. I see that you can also speak with a doctor in the app.

Yes, advocacy through JOY and our concierge team really is just the glue that binds our cost-saving strategies. We also integrate telemedicine within the app, so an employee can speak with a doctor 24/7 for free. This is another way we save companies money. We avoid those cases where someone needs to speak with a doctor in the middle of the night for a non-emergency and ends up at the emergency room or urgent care. Avoiding one trip to the emergency room can save thousands of dollars. Telemedicine has been around for a few years but, like advocacy, getting employees to use it has always been the big issue. Since we are the first stop for employees' healthcare needs, we can redirect them to telemedicine when it fits. We actually get over 50 percent of our telemedicine consults when a member is trying to do something else. For example, they might be trying to verify whether a dermatologist is within their insurance plan. We'll ask them if they want to take a photo of an issue and have an instant consultation with one of our doctors. This is one of the reasons that employers are now seeing utilization rates that are sometimes 18 times the industry standard. Redirecting all these consultations online is a huge savings for companies.


What other services do you provide within the app?

We actually offer a lot of services, and the list is constantly growing. Employers can even integrate their existing offerings as well. Healthcare is best delivered as a conversation, and that's why our AI-powered chatbot is perfect for servicing such a wide variety of offerings. The great thing is that it's all delivered within an app that looks no more complex than Facebook Messenger or iMessage.

Right now we do medical bill reviews and prescription drug optimization. We'll find the lowest prices for a procedure, help people with their health savings accounts and push wellness information. Our platform is like an operating system for healthcare engagement. The more we can engage with a company's employees for their healthcare needs, the more we can save both the employer and employees money.


It sounds like you're trying to build the Siri of healthcare, no?

In a way, yes. Basically, we are trying to help employers reduce their healthcare costs by providing their employees with an all-in-one mobile app that promotes smart healthcare decisions. JOY will proactively engage employees, connect them with our benefits concierge team and redirect them to lower-cost care options like telemedicine. We integrate each client's benefits package and wellness programs to deliver a highly personalized experience that drives real ROI and improves workplace health.

So if a company wants to launch HealthJoy to their employees, do they need to just tell them to download your app?

We distribute HealthJoy to companies exclusively through benefits advisors, who are experts in developing plan designs and benefits strategies that work, both for employees and the bottom line. We always want HealthJoy to be integrated within a thoughtful strategy that leverages the expertise the benefits advisor provides, and we rely on them to upload current benefits and plan information.

Marsha is a growth marketing expert, business advisor and speaker specializing in international marketing.


Posted in Ai | Comments Off on This Startup Is Lowering Companies' Healthcare Costs With AI – Entrepreneur

Samsung’s Bixby and Why It’s So Hard to Create a Voice AI – New York Magazine

Posted: at 2:13 am

Samsung's Bixby can't hear you right now.

It's conventional wisdom within tech that voice interaction (that is, talking to your phone) is the future of how we interact with our gadgets, particularly voice interaction through a personal assistant like Google, Siri, Alexa, or Cortana. Samsung desperately wanted to play catch-up and introduced its own AI agent, Bixby, alongside this year's flagship phone, the Samsung Galaxy S8. The only problem? Bixby can't understand you. Or rather, Bixby can understand you if you speak Korean. But its English-language capabilities, like an MTA project gone bad, just keep getting pushed further and further back.

The field of voice recognition and conversational AI took a huge leap forward about five years ago, as machine learning (specifically, the use of recurrent neural networks) allowed speech-recognition accuracy to improve dramatically. In 2013, Google's voice-recognition accuracy hovered around 75 percent, per Kleiner Perkins's Mary Meeker. Today, Google's voice recognition is at 95 percent. It got there because Google had a tremendous amount of data to train its voice-recognition systems with. (Meeker also says about 20 percent of queries are made by voice, showing why Samsung may be anxious to get Bixby up and running.)

Both Google and Amazon allow their assistants to train against a user's own voice, learning a particular person's quirks and regional variations in speech. Even Apple, which has significantly lagged behind the competition, has improved its voice recognition (even if Siri itself can be frustratingly dense about what to do with those voice queries). But even these voice assistants require you to speak clearly, with significant pauses between words and clear enunciation. Blur your words together quickly like you do in colloquial speech, and these systems, which collectively have thousands of very, very smart people working on them, can still be thrown for a loop.

Meanwhile, there's Samsung. A spokesperson for the company, speaking to the Korea Herald, says, "Developing Bixby in other languages is taking more time than we expected mainly because of the lack of the accumulation of big data." Google, Amazon, and Apple all have vast libraries of speech to fall back on, and Google in particular has its search engine to simulate the appearance of real depth (even if it can be badly led astray).

None of this is to bag on Samsung. The company is the second-largest manufacturer of cell phones in the world, and its Galaxy smartphones were briefly outselling the iPhone in 2016. It's also an enormous company, of which cell phones are but one of many going concerns. (Nobody expects Google to turn out washing machines, or Apple to make a vacuum cleaner.) But the table stakes in the world of voice recognition and AI agents are so tremendously high, it's hard to see how any company, even one as large as Samsung, will be able to break through.

Not that that's deterring Samsung. It's reportedly already planning to bring its own Echo competitor to market, code-named the Vega. It's easy to see this as Samsung's reach exceeding its grasp (why bring a product to market when you can't even get your phones to understand English?), but there's a good reason why Samsung may be forging ahead. Even if it can't rack up the sales numbers the Echo has seen, it'll at least get a few more people talking to Samsung and helping it build up its own store of voice data to train against.


Posted in Ai | Comments Off on Samsung’s Bixby and Why It’s So Hard to Create a Voice AI – New York Magazine

Prisma’s next AI project is a fun selfie sticker maker called Sticky … – TechCrunch

Posted: at 2:13 am

What do you do after garnering tens of millions of downloads and scores of clones of your AI-powered style transfer app? Why, keep innovating of course.

Meet Sticky, the next app from the startup behind Prisma, which turns selfies into stylized and/or animated stickers for sharing to your social feeds. Sticky is launching today on iOS, with an Android version due in a week or two.

While Prisma gained viral popularity last year, netting its Moscow-based makers around 70 million downloads in a matter of months, its core feature has been rapidly and widely copied, including by social goliaths like Facebook.

The team's response to having their USP eaten alive by others' algorithms was to evolve their cool tool into a platform. But with the social app space essentially sewn up (at least in the West) by Facebook, which also owns Instagram and WhatsApp, building momentum and making a lasting impression as a new platform is clearly not an easy task.

Co-founder Aram Airapetyan tells us Prisma's audience has been very stable for the last six months, shaking out to around 10 million monthly active users.

That's not bad for a roughly one-year-old app. But Facebook has two billion monthly users at this point (and that's before you factor in all the Instagram and WhatsApp users), so it's hardly a fair fight.

Still, Prisma's team isn't sitting still. Their next app project also applies neural networks to a photo-focused task, this time creating selfie stickers for sharing to messaging platforms such as WhatsApp, WeChat, Apple's iMessage and Telegram.

Sticky's core tech is an auto cut-out feature that quickly extracts your selfie from whatever background you snapped against so that it can be repurposed into sharable social currency as a standalone sticker.

"We trained neural networks to find different objects in a photo/video and even on a live video stream. So basically our trained neural networks are looking for a person in a photo. That's all we need. Then we cut out the background and the sticker is ready," explains Airapetyan, describing it as "a very complex tech behind an easy user experience."
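In outline, the cut-out step works roughly like this (a deliberately toy sketch: the hard part, producing the per-pixel person mask, is what the trained network does, and the mask is simply given here):

```python
# Toy sticker cut-out: keep pixels where the segmentation mask marks the
# person, and replace everything else with a fill color. In Sticky the mask
# would come from a neural network; here it is hard-coded for illustration.

def make_sticker(image, mask, fill=(255, 255, 255)):
    """image: rows of (r, g, b) pixels; mask: rows of 0/1 foreground flags."""
    return [
        [px if keep else fill for px, keep in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image, mask)
    ]

image = [[(10, 10, 10), (20, 20, 20)],
         [(30, 30, 30), (40, 40, 40)]]
mask = [[1, 0],
        [0, 1]]
print(make_sticker(image, mask))  # unmasked pixels become the fill color
```

The quality of the result lives entirely in the mask, which is why imperfect segmentation shows up as missing hair or cheeks rather than as compositing errors.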

The app lets you leave your cut-out selfie without any background, or edit the background lightly by tapping through a few full-fill color options to make the sticker a bit more visually impactful. You can also add a white border around your selfie for extra stickerish delineation.

Airapetyan says more options are planned on the background front in future including the ability to superimpose selfie stickers over photos of your choice.

It's fair to say that, at this MVP stage, the cut-out feature is by no means perfect. It can get very confused by hair, for instance. And certain (high or low) lighting conditions can easily result in bits of your cheek going missing. But with a bit of trial and error you can get a reasonable result, and without having to spend much time on it.

Also worth noting: all processing is done locally on the device, according to Airapetyan.

From here, Sticky shows its Prisma pedigree: you can tap on your cut-out selfie to apply a Prisma-ish style transfer effect (the version I tested had two style options, a black-and-white and a color style, but the plan is to add "lots more cool comic and cartoon-like styles," says Airapetyan).

You can further augment your sticker by adding a text caption too, if you wish.

When you're happy with your creation you can save it or share it to your social feeds, although at this stage stickers generally share as a picture rather than in a sticker format (but the team is hoping to get support for that, and says Telegram and WeChat are working to provide APIs).

Saved stickers are stored as an ongoing, editable collection within the app.

As well as still selfies, Sticky also lets you create animated stickers. To do this, instead of tapping once to snap a selfie you hold down on the camera button while pulling your silly face (or what not) and the app snaps multiple frames and processes these into an animation.

Animated Sticky stickers are displayed in WhatsApp as a GIF with a play button (but loop continuously when viewed in your Sticky sticker collection).

"For the time being, not all the messengers have APIs for native sticker sharing," notes Airapetyan. "That's why, for example, your sticker is shared like a picture to WhatsApp, or like a GIF if it's animated."

He also concedes the cut-out tech is a little rough around the edges at this point, but says it will improve the more people use it, given the algorithms are learning from the data.

"Sometimes the cut-out tech isn't perfect, but the more people use Sticky, the better it will become itself!" he says. "That's the best thing about the tech. We also work hard to improve it! For example, we can let people create stickers with their pets in hands."

"Sticky is surely going to become a better app with lots more features. We just need to find out what people need first. Stickers, in general, are very popular nowadays and the popularity will spiral up, for sure," he adds.

The app is a free download, and the team isn't even thinking about monetization at this point. "We just focus on the product right now," says Airapetyan.


Posted in Ai | Comments Off on Prisma’s next AI project is a fun selfie sticker maker called Sticky … – TechCrunch

AI is changing how we do science. Get a glimpse – Science Magazine

Posted: July 5, 2017 at 11:12 pm

By Science News Staff | Jul. 5, 2017, 11:00 AM

Particle physicists began fiddling with artificial intelligence (AI) in the late 1980s, just as the term "neural network" captured the public's imagination. Their field lends itself to AI and machine-learning algorithms because nearly every experiment centers on finding subtle spatial patterns in the countless, similar readouts of complex particle detectors, just the sort of thing at which AI excels. "It took us several years to convince people that this is not just some magic, hocus-pocus, black box stuff," says Boaz Klima of Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois, one of the first physicists to embrace the techniques. Now, AI techniques number among physicists' standard tools.

Neural networks search for fingerprints of new particles in the debris of collisions at the LHC.

2012 CERN, FOR THE BENEFIT OF THE ALICE COLLABORATION

Particle physicists strive to understand the inner workings of the universe by smashing subatomic particles together with enormous energies to blast out exotic new bits of matter. In 2012, for example, teams working with the world's largest proton collider, the Large Hadron Collider (LHC) in Switzerland, discovered the long-predicted Higgs boson, the fleeting particle that is the linchpin of physicists' explanation of how all other fundamental particles get their mass.

Such exotic particles don't come with labels, however. At the LHC, a Higgs boson emerges from roughly one out of every 1 billion proton collisions, and within a billionth of a picosecond it decays into other particles, such as a pair of photons or a quartet of particles called muons. To reconstruct the Higgs, physicists must spot all those more-common particles and see whether they fit together in a way that's consistent with them coming from the same parent, a job made far harder by the hordes of extraneous particles in a typical collision.

Algorithms such as neural networks excel at sifting signal from background, says Pushpalatha Bhat, a physicist at Fermilab. In a particle detector, usually a huge barrel-shaped assemblage of various sensors, a photon typically creates a spray of particles, or "shower," in a subsystem called an electromagnetic calorimeter. So do electrons and particles called hadrons, but their showers differ subtly from those of photons. Machine-learning algorithms can tell the difference by sniffing out correlations among the multiple variables that describe the showers. Such algorithms can also, for example, help distinguish the pairs of photons that originate from a Higgs decay from random pairs. "This is the proverbial needle-in-the-haystack problem," Bhat says. "That's why it's so important to extract the most information we can from the data."
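The idea of combining several correlated shower variables into one decision can be sketched with a simple linear discriminant (the variable names, weights and numbers below are invented for illustration; real analyses train neural networks or boosted decision trees on many such variables):

```python
# Toy multivariate classifier: a weighted sum of two invented shower-shape
# variables, thresholded to label a shower as photon-like or hadron-like.

def discriminant(shower, weights=(2.0, -1.5), bias=-0.3):
    w1, w2 = weights
    return w1 * shower["lateral_width"] + w2 * shower["depth"] + bias

def classify(shower, threshold=0.0):
    return "photon" if discriminant(shower) > threshold else "hadron"

photon_like = {"lateral_width": 0.8, "depth": 0.4}   # shallow EM shower
hadron_like = {"lateral_width": 0.3, "depth": 1.2}   # deeper hadronic shower
print(classify(photon_like), classify(hadron_like))
```

The gain from machine learning comes from replacing hand-tuned weights and straight-line cuts like these with boundaries learned from millions of simulated showers.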

Machine learning hasn't taken over the field. Physicists still rely mainly on their understanding of the underlying physics to figure out how to search data for signs of new particles and phenomena. But AI is likely to become more important, says Paolo Calafiura, a computer scientist at Lawrence Berkeley National Laboratory in Berkeley, California. In 2024, researchers plan to upgrade the LHC to increase its collision rate by a factor of 10. At that point, Calafiura says, machine learning will be vital for keeping up with the torrent of data. Adrian Cho

With billions of users and hundreds of billions of tweets and posts every year, social media has brought big data to social science. It has also opened an unprecedented opportunity to use artificial intelligence (AI) to glean meaning from the mass of human communications, psychologist Martin Seligman has recognized. At the University of Pennsylvania's Positive Psychology Center, he and more than 20 psychologists, physicians, and computer scientists in the World Well-Being Project use machine learning and natural language processing to sift through gobs of data to gauge the public's emotional and physical health.

That's traditionally done with surveys. But social media data are "unobtrusive, it's very inexpensive, and the numbers you get are orders of magnitude greater," Seligman says. It is also messy, but AI offers a powerful way to reveal patterns.

In one recent study, Seligman and his colleagues looked at the Facebook updates of 29,000 users who had taken a self-assessment of depression. Using data from 28,000 of the users, a machine-learning algorithm found associations between words in the updates and depression levels. It could then successfully gauge depression in the other users based only on their updates.
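The word-association approach can be caricatured in a few lines (the words and weights below are invented, not the study's learned values): each word carries a weight, and a user's score is the average weight of the scored words in their posts.

```python
# Toy bag-of-words scorer. In the real study, per-word weights were learned
# from 28,000 labeled users; the weights here are hypothetical.

WORD_WEIGHTS = {"alone": 0.9, "tired": 0.7, "hopeless": 1.0,
                "party": -0.6, "grateful": -0.8}

def score_updates(updates):
    """Average the weights of all recognized words across a user's updates."""
    hits = [WORD_WEIGHTS[w] for text in updates
            for w in text.lower().split() if w in WORD_WEIGHTS]
    return sum(hits) / len(hits) if hits else 0.0

high = score_updates(["feeling tired and alone again"])
low = score_updates(["grateful for a great party tonight"])
print(round(high, 2), round(low, 2))
```

Production models add many refinements (topic models, phrase features, regularized regression), but the core move is the same: turn text into weighted word counts and map those to an outcome score.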

In another study, the team predicted county-level heart disease mortality rates by analyzing 148 million tweets; words related to anger and negative relationships turned out to be risk factors. The predictions from social media matched actual mortality rates more closely than did predictions based on 10 leading risk factors, such as smoking and diabetes. The researchers have also used social media to predict personality, income, and political ideology, and to study hospital care, mystical experiences, and stereotypes. The team has even created a map coloring each U.S. county according to well-being, depression, trust, and five personality traits, as inferred from Twitter.

"There's a revolution going on in the analysis of language and its links to psychology," says James Pennebaker, a social psychologist at the University of Texas in Austin. He focuses not on content but style, and has found, for example, that the use of function words in a college admissions essay can predict grades. Articles and prepositions indicate analytical thinking and predict higher grades; pronouns and adverbs indicate narrative thinking and predict lower grades. He also found support for suggestions that much of the 1728 play Double Falsehood was likely written by William Shakespeare: Machine-learning algorithms matched it to Shakespeare's other works based on factors such as cognitive complexity and rare words. "Now, we can analyze everything that you've ever posted, ever written, and increasingly how you and Alexa talk," Pennebaker says. The result: richer and richer pictures of who people are. Matthew Hutson

For geneticists, autism is a vexing challenge. Inheritance patterns suggest it has a strong genetic component. But variants in scores of genes known to play some role in autism can explain only about 20% of all cases. Finding other variants that might contribute requires looking for clues in data on the 25,000 other human genes and their surrounding DNA, an overwhelming task for human investigators. So computational biologist Olga Troyanskaya of Princeton University and the Simons Foundation in New York City enlisted the tools of artificial intelligence (AI).

Artificial intelligence tools are helping reveal thousands of genes that may contribute to autism.

BSIP SA/ALAMY STOCK PHOTO

"We can only do so much as biologists to show what underlies diseases like autism," explains collaborator Robert Darnell, founding director of the New York Genome Center and a physician scientist at The Rockefeller University in New York City. "The power of machines to ask a trillion questions where a scientist can ask just 10 is a game-changer."

Troyanskaya combined hundreds of data sets on which genes are active in specific human cells, how proteins interact, and where transcription factor binding sites and other key genome features are located. Then her team used machine learning to build a map of gene interactions and compared those of the few well-established autism risk genes with those of thousands of other unknown genes, looking for similarities. That flagged another 2500 genes likely to be involved in autism, they reported last year in Nature Neuroscience.

But genes don't act in isolation, as geneticists have recently realized. Their behavior is shaped by the millions of nearby noncoding bases, which interact with DNA-binding proteins and other factors. Identifying which noncoding variants might affect nearby autism genes is an even tougher problem than finding the genes in the first place, and graduate student Jian Zhou in Troyanskaya's Princeton lab is deploying AI to solve it.

To train the program, a deep-learning system, Zhou exposed it to data collected by the Encyclopedia of DNA Elements and Roadmap Epigenomics, two projects that cataloged how tens of thousands of noncoding DNA sites affect neighboring genes. The system in effect learned which features to look for as it evaluates unknown stretches of noncoding DNA for potential activity.

When Zhou and Troyanskaya described their program, called DeepSEA, in Nature Methods in October 2015, Xiaohui Xie, a computer scientist at the University of California, Irvine, called it "a milestone in applying deep learning to genomics." Now, the Princeton team is running the genomes of autism patients through DeepSEA, hoping to rank the impacts of noncoding bases.

Xie is also applying AI to the genome, though with a broader focus than autism. He, too, hopes to classify any mutations by the odds they are harmful. But he cautions that in genomics, deep learning systems are only as good as the data sets on which they are trained. "Right now I think people are skeptical" that such systems can reliably parse the genome, he says. "But I think down the road more and more people will embrace deep learning." Elizabeth Pennisi

This past April, astrophysicist Kevin Schawinski posted fuzzy pictures of four galaxies on Twitter, along with a request: Could fellow astronomers help him classify them? Colleagues chimed in to say the images looked like ellipticals and spirals, familiar species of galaxies.

Some astronomers, suspecting trickery from the computation-minded Schawinski, asked outright: Were these real galaxies? Or were they simulations, with the relevant physics modeled on a computer? In truth, they were neither, he says. At ETH Zurich in Switzerland, Schawinski, computer scientist Ce Zhang, and other collaborators had cooked the galaxies up inside a neural network that doesn't know anything about physics. It just seems to understand, on a deep level, how galaxies should look.

With his Twitter post, Schawinski just wanted to see how convincing the network's creations were. But his larger goal was to create something like the technology in movies that magically sharpens fuzzy surveillance images: a network that could make a blurry galaxy image look like it was taken by a better telescope than it actually was. That could let astronomers squeeze out finer details from reams of observations. "Hundreds of millions or maybe billions of dollars have been spent on sky surveys," Schawinski says. "With this technology we can immediately extract somewhat more information."

The forgery Schawinski posted on Twitter was the work of a generative adversarial network, a kind of machine-learning model that pits two dueling neural networks against each other. One is a generator that concocts images; the other, a discriminator, tries to spot any flaws that would give away the manipulation, forcing the generator to get better. Schawinski's team took thousands of real images of galaxies and artificially degraded them. Then the researchers taught the generator to spruce up the images again so they could slip past the discriminator. Eventually the network could outperform other techniques for smoothing out noisy pictures of galaxies.
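The key to this training scheme is the degradation step: because the team starts from sharp images and blurs them deliberately, they get perfectly matched (degraded, clean) pairs for free. A minimal sketch of such a degradation pipeline, assuming a Gaussian point-spread function and additive noise (the team's actual blur model may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_psf(size=5, sigma=1.5):
    # Point-spread function approximating a worse telescope's blur.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def degrade(image, noise_sigma=0.05):
    # Convolve with the PSF, then add Gaussian noise, producing the
    # degraded half of a (degraded, clean) training pair for the GAN.
    psf = gaussian_psf()
    pad = psf.shape[0] // 2
    padded = np.pad(image, pad, mode="reflect")
    blurred = np.zeros_like(image)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            blurred[i, j] = (padded[i:i + 2 * pad + 1, j:j + 2 * pad + 1] * psf).sum()
    return blurred + rng.normal(0, noise_sigma, image.shape)

# A toy "galaxy": a bright central blob on a 32x32 field.
y, x = np.mgrid[:32, :32]
clean = np.exp(-((x - 16)**2 + (y - 16)**2) / 20.0)
blurry = degrade(clean)
# Blurring spreads light outward, so the (noiseless) peak pixel dims.
print(clean.max(), blurry.max())
```

The generator is then trained to invert this mapping, blurry back to clean, while the discriminator judges whether its output looks like a real sharp galaxy.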

AI that knows what a galaxy should look like transforms a fuzzy image (left) into a crisp one (right).


Schawinski's approach is a particularly avant-garde example of machine learning in astronomy, says astrophysicist Brian Nord of Fermi National Accelerator Laboratory in Batavia, Illinois, but it's far from the only one. At the January meeting of the American Astronomical Society, Nord presented a machine-learning strategy to hunt down strong gravitational lenses: rare arcs of light in the sky that form when the images of distant galaxies travel through warped spacetime on the way to Earth. These lenses can be used to gauge distances across the universe and find unseen concentrations of mass.

Strong gravitational lenses are visually distinctive but difficult to describe with simple mathematical rules: hard for traditional computers to pick out, but easy for people. Nord and others realized that a neural network, trained on thousands of lenses, can gain similar intuition. "In the following months, there have been almost a dozen papers, actually, on searching for strong lenses using some kind of machine learning. It's been a flurry," Nord says.

And it's just part of a growing realization across astronomy that artificial intelligence strategies offer a powerful way to find and classify interesting objects in petabytes of data. To Schawinski, "That's one way I think in which real discovery is going to be made in this age of 'Oh my God, we have too much data.'" Joshua Sokol

Organic chemists are experts at working backward. Like master chefs who start with a vision of the finished dish and then work out how to make it, many chemists start with the final structure of a molecule they want to make, and then think about how to assemble it. "You need the right ingredients and a recipe for how to combine them," says Marwin Segler, a graduate student at the University of Münster in Germany. He and others are now bringing artificial intelligence (AI) into their molecular kitchens.

They hope AI can help them cope with the key challenge of molecule-making: choosing from among hundreds of potential building blocks and thousands of chemical rules for linking them. For decades, some chemists have painstakingly programmed computers with known reactions, hoping to create a system that could quickly calculate the most facile molecular recipes. However, Segler says, chemistry can be very subtle. "It's hard to write down all the rules in a binary way."

So Segler, along with computer scientist Mike Preuss at Münster and Segler's adviser Mark Waller, turned to AI. Instead of programming in hard and fast rules for chemical reactions, they designed a deep neural network program that learns on its own how reactions proceed, from millions of examples. "The more data you feed it the better it gets," Segler says. Over time the network learned to predict the best reaction for a desired step in a synthesis. Eventually it came up with its own recipes for making molecules from scratch.
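The planning side of such a system amounts to a recursive search: keep disconnecting the target into simpler precursors, guided by the network's scores, until everything left is a purchasable building block. A toy sketch, with symbolic molecule names, hand-coded "templates," and hard-wired policy scores standing in for the trained network's predictions (none of these names come from Segler's actual system):

```python
# Reaction "templates" map a product to candidate precursor sets; the score
# stands in for a learned policy's estimate of how promising each move is.
TEMPLATES = {
    "TARGET": [(("INTERMEDIATE", "REAGENT_A"), 0.9),
               (("HARD_PRECURSOR",), 0.3)],
    "INTERMEDIATE": [(("BLOCK_1", "BLOCK_2"), 0.8)],
}
BUILDING_BLOCKS = {"REAGENT_A", "BLOCK_1", "BLOCK_2"}

def plan(molecule, depth=5):
    """Return a synthesis route as a list of steps, or None if stuck."""
    if molecule in BUILDING_BLOCKS:
        return []                      # purchasable: nothing left to make
    if depth == 0 or molecule not in TEMPLATES:
        return None
    # Try templates in order of the policy's score: the network's job is
    # to put the most promising disconnection first.
    for precursors, score in sorted(TEMPLATES[molecule], key=lambda t: -t[1]):
        subroutes = [plan(p, depth - 1) for p in precursors]
        if all(sr is not None for sr in subroutes):
            route = [step for sr in subroutes for step in sr]
            route.append(f"{' + '.join(precursors)} -> {molecule}")
            return route
    return None

route = plan("TARGET")
for step in route:
    print(step)
```

In the real system the templates are extracted from millions of published reactions and the scores come from the trained network, which is what lets the search explore only a tiny, promising corner of the enormous space of possible recipes.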

The trio tested the program on 40 different molecular targets, comparing it with a conventional molecular design program. Whereas the conventional program came up with a solution for synthesizing target molecules 22.5% of the time in a 2-hour computing window, the AI figured it out 95% of the time, they reported at a meeting this year. Segler, who will soon move to London to work at a pharmaceutical company, hopes to use the approach to improve the production of medicines.

Paul Wender, an organic chemist at Stanford University in Palo Alto, California, says it's too soon to know how well Segler's approach will work. But Wender, who is also applying AI to synthesis, thinks it could have a profound impact, not just in building known molecules but in finding ways to make new ones. Segler adds that AI won't replace organic chemists soon, because they can do far more than just predict how reactions will proceed. Like a GPS navigation system for chemistry, AI may be good for finding a route, but it can't design and carry out a full synthesis by itself.

Of course, AI developers have their eyes trained on those other tasks as well. Robert F. Service

The rest is here:

AI is changing how we do science. Get a glimpse - Science Magazine

