AI is changing how we do science. Get a glimpse – Science Magazine


By Science News Staff | Jul. 5, 2017, 11:00 AM

Particle physicists began fiddling with artificial intelligence (AI) in the late 1980s, just as the term "neural network" captured the public's imagination. Their field lends itself to AI and machine-learning algorithms because nearly every experiment centers on finding subtle spatial patterns in the countless, similar readouts of complex particle detectors, just the sort of thing at which AI excels. "It took us several years to convince people that this is not just some magic, hocus-pocus, black box stuff," says Boaz Klima, of Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois, one of the first physicists to embrace the techniques. Now, AI techniques number among physicists' standard tools.

Neural networks search for fingerprints of new particles in the debris of collisions at the LHC.

2012 CERN, FOR THE BENEFIT OF THE ALICE COLLABORATION

Particle physicists strive to understand the inner workings of the universe by smashing subatomic particles together with enormous energies to blast out exotic new bits of matter. In 2012, for example, teams working with the world's largest proton collider, the Large Hadron Collider (LHC) in Switzerland, discovered the long-predicted Higgs boson, the fleeting particle that is the linchpin to physicists' explanation of how all other fundamental particles get their mass.

Such exotic particles don't come with labels, however. At the LHC, a Higgs boson emerges from roughly one out of every 1 billion proton collisions, and within a billionth of a picosecond it decays into other particles, such as a pair of photons or a quartet of particles called muons. To reconstruct the Higgs, physicists must spot all those more-common particles and see whether they fit together in a way that's consistent with them coming from the same parent, a job made far harder by the hordes of extraneous particles in a typical collision.

"Algorithms such as neural networks excel in sifting signal from background," says Pushpalatha Bhat, a physicist at Fermilab. In a particle detector, usually a huge barrel-shaped assemblage of various sensors, a photon typically creates a spray of particles, or "shower," in a subsystem called an electromagnetic calorimeter. So do electrons and particles called hadrons, but their showers differ subtly from those of photons. Machine-learning algorithms can tell the difference by sniffing out correlations among the multiple variables that describe the showers. Such algorithms can also, for example, help distinguish the pairs of photons that originate from a Higgs decay from random pairs. "This is the proverbial needle-in-the-haystack problem," Bhat says. "That's why it's so important to extract the most information we can from the data."
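To make that concrete, here is a minimal, illustrative sketch of the kind of signal-versus-background classifier Bhat describes. The shower-shape features and every number in it are invented for illustration; it is not any experiment's actual code.

```python
# Illustrative sketch only: a toy signal-vs-background classifier on
# synthetic "shower shape" features, loosely in the spirit of the
# calorimeter example above. All numbers here are invented.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Photon-like "signal" showers: narrower and more fully contained (toy values).
signal = np.column_stack([
    rng.normal(0.02, 0.005, n),   # lateral shower width
    rng.normal(0.95, 0.02, n),    # energy containment fraction
    rng.normal(8.0, 1.0, n),      # shower depth
])
# Hadron-like "background" showers: broader, leakier, deeper (toy values).
background = np.column_stack([
    rng.normal(0.05, 0.015, n),
    rng.normal(0.85, 0.05, n),
    rng.normal(12.0, 2.0, n),
])

X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = signal, 0 = background

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]
print(f"ROC AUC on held-out showers: {roc_auc_score(y_test, scores):.3f}")
```

In a real analysis the features would come from detector simulation and data, and the classifier's output would feed into the statistical search rather than a single score.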

Machine learning hasn't taken over the field. Physicists still rely mainly on their understanding of the underlying physics to figure out how to search data for signs of new particles and phenomena. But AI is likely to become more important, says Paolo Calafiura, a computer scientist at Lawrence Berkeley National Laboratory in Berkeley, California. In 2024, researchers plan to upgrade the LHC to increase its collision rate by a factor of 10. At that point, Calafiura says, machine learning will be vital for keeping up with the torrent of data. Adrian Cho

With billions of users and hundreds of billions of tweets and posts every year, social media has brought big data to social science. It has also opened an unprecedented opportunity to use artificial intelligence (AI) to glean meaning from the mass of human communications, psychologist Martin Seligman has recognized. At the University of Pennsylvania's Positive Psychology Center, he and more than 20 psychologists, physicians, and computer scientists in the World Well-Being Project use machine learning and natural language processing to sift through gobs of data to gauge the public's emotional and physical health.

That's traditionally done with surveys. But social media data are "unobtrusive, it's very inexpensive, and the numbers you get are orders of magnitude greater," Seligman says. It is also messy, but AI offers a powerful way to reveal patterns.

In one recent study, Seligman and his colleagues looked at the Facebook updates of 29,000 users who had taken a self-assessment of depression. Using data from 28,000 of the users, a machine-learning algorithm found associations between words in the updates and depression levels. It could then successfully gauge depression in the other users based only on their updates.
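As a rough illustration of that approach (not the team's actual pipeline), the sketch below learns word-to-score associations from a handful of invented status updates and self-assessment scores, then predicts scores for held-out users from their text alone.

```python
# Illustrative sketch only: predicting a self-reported depression score
# from the words in status updates, in the general spirit of the study
# described above. The tiny toy corpus and scores below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

posts = [
    "had a great weekend hiking with friends",
    "so tired of everything lately, can't sleep",
    "excited about the new job, feeling grateful",
    "another lonely night, nothing feels worth it",
    "family dinner was wonderful, lots of laughs",
    "stressed and sad, skipped class again",
]
scores = [1.0, 7.5, 1.5, 8.5, 1.0, 6.5]  # higher = more depressive symptoms (toy scale)

# Learn word-to-score associations from the "training" users...
vec = TfidfVectorizer()
X = vec.fit_transform(posts[:4])
model = Ridge().fit(X, scores[:4])

# ...then gauge depression for held-out users from their updates alone.
X_new = vec.transform(posts[4:])
print(model.predict(X_new))
```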

In another study, the team predicted county-level heart disease mortality rates by analyzing 148 million tweets; words related to anger and negative relationships turned out to be risk factors. The predictions from social media matched actual mortality rates more closely than did predictions based on 10 leading risk factors, such as smoking and diabetes. The researchers have also used social media to predict personality, income, and political ideology, and to study hospital care, mystical experiences, and stereotypes. The team has even created a map coloring each U.S. county according to well-being, depression, trust, and five personality traits, as inferred from Twitter.

"There's a revolution going on in the analysis of language and its links to psychology," says James Pennebaker, a social psychologist at the University of Texas in Austin. He focuses not on content but on style, and has found, for example, that the use of function words in a college admissions essay can predict grades. Articles and prepositions indicate analytical thinking and predict higher grades; pronouns and adverbs indicate narrative thinking and predict lower grades. He also found support for suggestions that much of the 1728 play Double Falsehood was likely written by William Shakespeare: Machine-learning algorithms matched it to Shakespeare's other works based on factors such as cognitive complexity and rare words. "Now, we can analyze everything that you've ever posted, ever written, and increasingly how you and Alexa talk," Pennebaker says. The result: richer and richer pictures of who people are. Matthew Hutson
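A minimal sketch of that style-based idea follows, assuming only a crude, hand-picked pair of function-word lists; real analyses use far richer dictionaries and models.

```python
# Illustrative sketch only: tallying two families of function words as crude
# style features, echoing the analytic-versus-narrative contrast Pennebaker
# describes. The word lists are short and incomplete by design.
import re

ANALYTIC = {"the", "a", "an", "of", "in", "on", "to", "with", "by", "for"}    # articles, prepositions
NARRATIVE = {"i", "you", "he", "she", "they", "we", "it", "really", "very", "then"}  # pronouns, adverbs

def style_profile(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    return {
        "analytic_rate": sum(w in ANALYTIC for w in words) / total,
        "narrative_rate": sum(w in NARRATIVE for w in words) / total,
    }

essay = "The analysis of the data in this report rests on a comparison of two methods."
print(style_profile(essay))
```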

For geneticists, autism is a vexing challenge. Inheritance patterns suggest it has a strong genetic component. But variants in scores of genes known to play some role in autism can explain only about 20% of all cases. Finding other variants that might contribute requires looking for clues in data on the 25,000 other human genes and their surrounding DNA, an overwhelming task for human investigators. So computational biologist Olga Troyanskaya of Princeton University and the Simons Foundation in New York City enlisted the tools of artificial intelligence (AI).

Artificial intelligence tools are helping reveal thousands of genes that may contribute to autism.

BSIP SA/ALAMY STOCK PHOTO

"We can only do so much as biologists to show what underlies diseases like autism," explains collaborator Robert Darnell, founding director of the New York Genome Center and a physician scientist at The Rockefeller University in New York City. "The power of machines to ask a trillion questions where a scientist can ask just 10 is a game-changer."

Troyanskaya combined hundreds of data sets on which genes are active in specific human cells, how proteins interact, and where transcription factor binding sites and other key genome features are located. Then her team used machine learning to build a map of gene interactions and compared those of the few well-established autism risk genes with those of thousands of other unknown genes, looking for similarities. That flagged another 2500 genes likely to be involved in autism, they reported last year in Nature Neuroscience.
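That guilt-by-association step can be sketched in a few lines: given a feature profile for every gene (random numbers here), score each candidate by its similarity to the known risk genes. This is only a toy illustration, not the group's actual method.

```python
# Illustrative sketch only: ranking candidate genes by how similar their
# (toy, random) network-interaction profiles are to a handful of known risk
# genes, a bare-bones version of the idea described above.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(1)
genes = [f"gene_{i}" for i in range(1000)]
profiles = rng.random((1000, 50))          # rows: genes, cols: interaction features (invented)

known_risk = [3, 17, 42, 99]               # indices of "well-established" risk genes (toy)
candidates = [i for i in range(1000) if i not in known_risk]

# Score each candidate by its mean similarity to the known risk genes.
sims = cosine_similarity(profiles[candidates], profiles[known_risk]).mean(axis=1)
top = np.argsort(sims)[::-1][:10]
for idx in top:
    print(genes[candidates[idx]], round(float(sims[idx]), 3))
```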

But genes don't act in isolation, as geneticists have recently realized. Their behavior is shaped by the millions of nearby noncoding bases, which interact with DNA-binding proteins and other factors. Identifying which noncoding variants might affect nearby autism genes is an even tougher problem than finding the genes in the first place, and graduate student Jian Zhou in Troyanskaya's Princeton lab is deploying AI to solve it.

To train the program, a deep-learning system, Zhou exposed it to data collected by the Encyclopedia of DNA Elements and Roadmap Epigenomics, two projects that cataloged how tens of thousands of noncoding DNA sites affect neighboring genes. The system in effect learned which features to look for as it evaluates unknown stretches of noncoding DNA for potential activity.
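For readers curious what such a system looks like in code, below is a toy one-dimensional convolutional network over one-hot-encoded DNA, the general architecture family DeepSEA belongs to. The layer sizes and the 1,000-base window are placeholders, not DeepSEA's published configuration.

```python
# Illustrative sketch only: a tiny 1D convolutional network over one-hot
# encoded DNA. Layer sizes are invented; the 919 outputs mirror the number
# of regulatory "marks" the DeepSEA paper predicts, but nothing else here
# is taken from that model.
import torch
import torch.nn as nn

BASES = "ACGT"

def one_hot(seq: str) -> torch.Tensor:
    """Encode a DNA string as a (4, length) tensor."""
    idx = torch.tensor([BASES.index(b) for b in seq])
    return torch.nn.functional.one_hot(idx, num_classes=4).T.float()

model = nn.Sequential(
    nn.Conv1d(4, 32, kernel_size=8), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(32, 64, kernel_size=8), nn.ReLU(), nn.MaxPool1d(4),
    nn.Flatten(),
    nn.LazyLinear(128), nn.ReLU(),
    nn.Linear(128, 919),          # one sigmoid output per regulatory feature
    nn.Sigmoid(),
)

window = "ACGT" * 250             # a toy 1,000-base window of noncoding DNA
probs = model(one_hot(window).unsqueeze(0))  # batch of one sequence
print(probs.shape)                # torch.Size([1, 919])
```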

When Zhou and Troyanskaya described their program, called DeepSEA, in Nature Methods in October 2015, Xiaohui Xie, a computer scientist at the University of California, Irvine, called it "a milestone" in applying deep learning to genomics. Now, the Princeton team is running the genomes of autism patients through DeepSEA, hoping to rank the impacts of noncoding bases.

Xie is also applying AI to the genome, though with a broader focus than autism. He, too, hopes to classify any mutations by the odds they are harmful. But he cautions that in genomics, deep-learning systems are only as good as the data sets on which they are trained. "Right now I think people are skeptical that such systems can reliably parse the genome," he says. "But I think down the road more and more people will embrace deep learning." Elizabeth Pennisi

This past April, astrophysicist Kevin Schawinski posted fuzzy pictures of four galaxies on Twitter, along with a request: Could fellow astronomers help him classify them? Colleagues chimed in to say the images looked like ellipticals and spirals, familiar species of galaxies.

Some astronomers, suspecting trickery from the computation-minded Schawinski, asked outright: Were these real galaxies? Or were they simulations, with the relevant physics modeled on a computer? In truth they were neither, he says. At ETH Zurich in Switzerland, Schawinski, computer scientist Ce Zhang, and other collaborators had cooked the galaxies up inside a neural network that doesn't know anything about physics. It just seems to understand, on a deep level, how galaxies should look.

With his Twitter post, Schawinski just wanted to see how convincing the network's creations were. But his larger goal was to create something like the technology in movies that magically sharpens fuzzy surveillance images: a network that could make a blurry galaxy image look like it was taken by a better telescope than it actually was. That could let astronomers squeeze out finer details from reams of observations. "Hundreds of millions or maybe billions of dollars have been spent on sky surveys," Schawinski says. "With this technology we can immediately extract somewhat more information."

The forgery Schawinski posted on Twitter was the work of a generative adversarial network, a kind of machine-learning model that pits two dueling neural networks against each other. One is a generator that concocts images, the other a discriminator that tries to spot any flaws that would give away the manipulation, forcing the generator to get better. Schawinski's team took thousands of real images of galaxies and then artificially degraded them. Then the researchers taught the generator to spruce up the images again so they could slip past the discriminator. Eventually the network could outperform other techniques for smoothing out noisy pictures of galaxies.
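A bare-bones sketch of that adversarial setup, with tiny stand-in networks and random 32-by-32 "galaxies" instead of real survey images, might look like this; it is meant only to show the two-network training loop, not the ETH team's model.

```python
# Illustrative sketch only: a generator that tries to restore artificially
# degraded images, and a discriminator that tries to tell restorations from
# originals. Networks and "data" are toy stand-ins.
import torch
import torch.nn as nn

generator = nn.Sequential(          # degraded image in -> restored image out
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)
discriminator = nn.Sequential(      # image in -> probability it is an original
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 16 * 16, 1), nn.Sigmoid(),
)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(100):
    real = torch.rand(8, 1, 32, 32)                      # stand-in "real" galaxy images
    degraded = real + 0.3 * torch.randn_like(real)       # artificially degrade them

    # Discriminator: originals should score 1, restorations should score 0.
    fake = generator(degraded).detach()
    d_loss = bce(discriminator(real), torch.ones(8, 1)) + \
             bce(discriminator(fake), torch.zeros(8, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: make restorations that the discriminator scores as originals.
    g_loss = bce(discriminator(generator(degraded)), torch.ones(8, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(f"final d_loss={d_loss.item():.3f}, g_loss={g_loss.item():.3f}")
```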

AI that knows what a galaxy should look like transforms a fuzzy image (left) into a crisp one (right).

KIYOSHI TAKAHASE SEGUNDO/ALAMY STOCK PHOTO

Schawinski's approach is a particularly avant-garde example of machine learning in astronomy, says astrophysicist Brian Nord of Fermi National Accelerator Laboratory in Batavia, Illinois, but it's far from the only one. At the January meeting of the American Astronomical Society, Nord presented a machine-learning strategy to hunt down strong gravitational lenses: rare arcs of light in the sky that form when the images of distant galaxies travel through warped spacetime on the way to Earth. These lenses can be used to gauge distances across the universe and find unseen concentrations of mass.

Strong gravitational lenses are visually distinctive but difficult to describe with simple mathematical rules: hard for traditional computers to pick out, but easy for people. Nord and others realized that a neural network, trained on thousands of lenses, can gain similar intuition. In the months since, "there have been almost a dozen papers, actually, on searching for strong lenses using some kind of machine learning. It's been a flurry," Nord says.
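Conceptually, such a lens finder is a convolutional network that turns a sky-survey cutout into a probability. The toy sketch below shows the shape of the idea, with invented layer sizes and random images standing in for real training data.

```python
# Illustrative sketch only: a small convolutional classifier that labels a
# survey cutout as "lens" or "not a lens." Architecture and inputs are toy.
import torch
import torch.nn as nn

lens_finder = nn.Sequential(
    nn.Conv2d(1, 8, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.LazyLinear(1), nn.Sigmoid(),   # probability the cutout contains a strong lens
)

cutouts = torch.rand(4, 1, 64, 64)    # four toy 64x64 image cutouts
print(lens_finder(cutouts).squeeze(1))
```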

And it's just part of a growing realization across astronomy that artificial intelligence strategies offer a powerful way to find and classify interesting objects in petabytes of data. To Schawinski, "that's one way I think in which real discovery is going to be made in this age of 'Oh my God, we have too much data.'" Joshua Sokol

Organic chemists are experts at working backward. Like master chefs who start with a vision of the finished dish and then work out how to make it, many chemists start with the final structure of a molecule they want to make and then think about how to assemble it. "You need the right ingredients and a recipe for how to combine them," says Marwin Segler, a graduate student at the University of Münster in Germany. He and others are now bringing artificial intelligence (AI) into their molecular kitchens.

They hope AI can help them cope with the key challenge of molecule making: choosing from among hundreds of potential building blocks and thousands of chemical rules for linking them. For decades, some chemists have painstakingly programmed computers with known reactions, hoping to create a system that could quickly calculate the most facile molecular recipes. However, Segler says, chemistry "can be very subtle. It's hard to write down all the rules in a binary way."

So Segler, along with computer scientist Mike Preuss at Münster and Segler's adviser Mark Waller, turned to AI. Instead of programming in hard and fast rules for chemical reactions, they designed a deep neural network program that learns on its own how reactions proceed, from millions of examples. "The more data you feed it the better it gets," Segler says. Over time the network learned to predict the best reaction for a desired step in a synthesis. Eventually it came up with its own recipes for making molecules from scratch.
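One common way to cast that prediction step, sketched below with invented placeholder data, is a "policy network" that reads a fingerprint of the target molecule and scores a library of known reaction templates, suggesting which disconnection to try next. This is an illustration of the general idea, not the Münster group's code.

```python
# Illustrative sketch only: a network that maps a molecular fingerprint to
# scores over a fixed library of reaction templates. Fingerprint length,
# template count, and training data are all invented placeholders.
import torch
import torch.nn as nn

N_BITS, N_TEMPLATES = 2048, 500    # toy fingerprint length and rule-library size

policy = nn.Sequential(
    nn.Linear(N_BITS, 512), nn.ELU(),
    nn.Linear(512, N_TEMPLATES),   # one score per known reaction template
)

# In the real setting these would come from millions of recorded reactions.
fingerprints = (torch.rand(64, N_BITS) > 0.9).float()
labels = torch.randint(0, N_TEMPLATES, (64,))   # which template was actually used

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for step in range(50):
    loss = loss_fn(policy(fingerprints), labels)
    opt.zero_grad(); loss.backward(); opt.step()

target = (torch.rand(1, N_BITS) > 0.9).float()
best = policy(target).topk(3).indices          # three most promising disconnections
print(best.tolist())
```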

The trio tested the program on 40 different molecular targets, comparing it with a conventional molecular design program. Whereas the conventional program came up with a solution for synthesizing target molecules 22.5% of the time in a 2-hour computing window, the AI figured it out 95% of the time, they reported at a meeting this year. Segler, who will soon move to London to work at a pharmaceutical company, hopes to use the approach to improve the production of medicines.

Paul Wender, an organic chemist at Stanford University in Palo Alto, California, says it's too soon to know how well Segler's approach will work. But Wender, who is also applying AI to synthesis, thinks it could have a profound impact, not just in building known molecules but in finding ways to make new ones. Segler adds that AI won't replace organic chemists soon, because they can do far more than just predict how reactions will proceed. Like a GPS navigation system for chemistry, AI may be good for finding a route, but it can't design and carry out a full synthesis by itself.

Of course, AI developers have their eyes trained on those other tasks as well. Robert F. Service
