Role of AI soars in tackling Covid-19 pandemic – BusinessLine

For the first time in a pandemic, Artificial Intelligence (AI) is playing a major role in tackling the outbreak, in areas ranging from risk diagnosis and query resolution to service delivery and drug discovery.

While BlueDot, a Canadian health-monitoring firm that crunches flight data and news reports using AI, is credited in international reports as the first to warn its clients of an impending outbreak on December 31, ahead of governments and international development agencies, the Indian tech space too is buzzing with coronavirus-related activity.

CoRover, a start-up in the AI space that earlier developed chatbots for the railways' ticketing platform, has now created a video-bot in collaboration with a doctor from Fortis Healthcare. On this platform, a real doctor from Fortis Healthcare, not a cartoon or an invisible knowledge bank, will take questions from people about Covid-19.

Apollo Hospitals has come up with a risk assessment scanner for Covid-19, which is available in six languages and guides people about the potential risk of having the virus. The Jaipur-based Sawai Man Singh Hospital is trying out a robot, made by robot maker Club First, to serve food and medicines to patients to lower the exposure of health workers to coronavirus patients.

"This is the first time in healthcare that Artificial Intelligence, Machine Learning, and Natural Language Processing are being used to create a Virtual Conversational AI platform, which assists anyone to interact with doctors and have their queries answered, unlike other search engines, which do not guarantee the authenticity of information," CoRover's Ankush Sabharwal claimed, while talking of the video-bot, which is likely to be launched soon.

Sabharwal told BusinessLine that answers to numerous questions have been recorded by Pratik Yashavant Patil, a doctor from Fortis Healthcare. In his AI avatar, Doctor Patil will bust myths, chat with you and will probably have answers to a lot of your questions.

Another start-up, Innoplexus AG, headquartered in Germany but founded by Indians, is claiming that its AI-enabled drug discovery platform is helping to arrive at combinations of existing drugs that may prove more efficacious in treating Covid-19 cases.

Its AI platform, after scanning the entire universe of Covid-related data, has thrown up results showing that hydroxychloroquine or chloroquine, an anti-malaria drug being prescribed as a prophylactic for coronavirus under many protocols, works more effectively with some other existing drugs than when used alone, the company claims.

"Our analysis shows that chloroquine works more effectively in combination with Pegasys (a drug used to treat Hepatitis C), Tocilizumab (a rheumatoid arthritis drug), Remdesivir (an antiviral for Ebola that is yet to be approved) or Clarithromycin (an antibiotic). We are hoping to work with drug regulators and partners to test these in pre-clinical and clinical trials," said Gunjan Bhardwaj, CEO, Innoplexus.

To be sure, hundreds of clinical trials are currently under way with several cocktails of medicines for Covid-19 across the world, and some of these drugs were part of trials held in China and Taiwan. The World Health Organization (WHO) itself is monitoring a global mega clinical trial for testing drugs for Covid-19, called Solidarity, which India decided to join on Friday.

RSA: Eric Schmidt shares deep learning on AI – CIO

By David Needle

CIO | Feb 16, 2017 3:05 PM PT

SAN FRANCISCO: Alphabet chairman Eric Schmidt says artificial intelligence is key to advances in areas as diverse as healthcare and data center design, and that security concerns related to it are somewhat misguided. (Alphabet is the parent company of Google.)

In a wide-ranging on-stage conversation here at the RSA Security conference with Gideon Lewis-Kraus, author of "The Great A.I. Awakening," Schmidt shared his insights from decades of work related to AI (he studied AI as a PhD student 40 years ago) and explained why the technology finally seems to be hitting its stride.

In fact, last year Google CEO Sundar Pichai said AI is what helps the search giant build better products over time. "We will move from a mobile-first to an AI-first world," he said.

Asked about that, Schmidt said that Google is still very much focused on mobile advances. "Going from mobile first to AI first doesn't mean you stop doing one of those," he said.

Google's approach to AI is to take the algorithms it develops and apply them to business problems. "AI works best when it has a lot of training data to learn from," he said. For example, Google used AI to develop picture search, using computer vision and training the system to recognize the difference between a gazelle and a lion after showing it thousands of pictures of each. "That same mechanism applies to many things," he said.

As for business problems, Schmidt said Google's top engineers work to make their data centers as efficient as possible. "But using AI we've been able to get a 15 percent improvement in power use."

In healthcare, Schmidt said machine learning can help with medical diagnosis and predict the best course of treatment. "We're at the point where, if you have a numeric sequence, (AI software) can predict what the following number will be. That's healthcare. People go to the hospital to find out what's going to happen next, and we have small projects that I think show it can be done (using AI)."

Schmidt said that because computer vision technology is much better than human vision, it can review millions of pictures, far beyond what a human being could process, to better identify problem areas. Speech recognition systems are also capable of understanding far more than humans do. But these are tools, he said, for humans to leverage. "Computers have vision and speech; that's not the same as AI," he said.

Lewis-Kraus raised fears that if AI systems become self-aware they could threaten humanity. "The work in AI going on now is doing pretty much what we think it's supposed to do. At what point can the system self-modify? That's worth a discussion, but we are nowhere near any of those stages; we're still in baby steps," said Schmidt. "You have to think in terms of ten, 20 or 30 years. We're not facing any danger now."

Schmidt also raised the concern that security fears and other factors could lead governments to limit access to the internet, as countries such as China already do. "I am extremely worried about the likelihood countries will block the openness and interconnectedness we have today. I wrote a book on it (The New Digital Age)," he said.

"I fear the security breaches and attacks on the internet will be used as a pretext to shut down access," Schmidt said, adding that he would like to see governments agree on mechanisms to keep access to the internet open. In the area of AI, he wants to see the industry push to make sure research stays out in the open and is not controlled by military labs.

Addressing the hall packed with security professionals, Schmidt made the case for open research, noting that historically companies never want to share anything about their research. "We've taken the opposite view, to build a large ecosystem that is completely transparent, because it will get fixed faster," he said. "Maybe there are some weaknesses, but I would rather do it that way because there are thousands of you who will help plug it."

"Security is not one layer. Naive engineers say they can build a better firewall, but that's not really how things work. If you build a system that is perfect and closed, you will find out it's neither perfect nor closed."

Curi Bio Dips into AI with Acquisition of Dana Solutions – Medical Device and Diagnostics Industry

Curi Bio said it has acquired Dana Solutions, a company that specializes in the application of artificial intelligence and machine learning to in vitro cell-based assays. The deal was for an undisclosed sum.

Seattle, WA-based Curi will gain access to Dana's AI/ML-based platforms, including PhenoLearn, a deep learning platform for modeling cell and tissue phenotypes; Pulse, an automated platform for contractility analysis of beating cardiomyocytes; and PhenoTox, a deep learning platform for predictive safety pharmacology.

Curi's human iPSC-based platforms help drug developers build predictive and mature human iPSC tissues, especially for the discovery, safety testing, and efficacy testing of new therapeutics, with a focus on cardiac, skeletal muscle, and neuromuscular disease models. Curi seeks to de-risk and expedite the development of new drugs by providing human-relevant preclinical data and decreasing the industry's dependence on animal models, which often fail to translate to humans.

"Curi Bio is developing human-relevant platforms integrating human cells, systems, and data to accelerate the discovery of new medicines," said Curi CEO Michael Cho. "With the acquisition of Dana's AI/ML technologies for cell-based assays, Curi is now uniquely positioned to offer pharmaceutical companies an integrated platform leveraging predictive human iPSC-derived cells, tissue-specific biosystems, and AI/ML-enabled phenotypic data insights."

FDA issues landmark clearance to AI-driven ICU predictive tool – Healthcare IT News

The U.S. Food and Drug Administration has authorized the use of CLEW Medical's artificial intelligence tool to predict hemodynamic instability in adult patients in intensive care units, the company announced on Wednesday.

The tool, CLEWICU, uses AI-based algorithms and machine learning models to identify the likelihood of occurrence of significant clinical events for ICU patients.

CLEW says the clearance is the FDA's first for such a device.

"AI can be a powerful force for change in healthcare, enabling assessment of time-critical patient information and predictive warning of deterioration that could enable better informed clinical decisions and improved outcomes in the ICU," said Dr. David Bates, medical director of clinical and quality analysis in information systems at Mass General Brigham and CLEW Advisory Board member, in a statement.

WHY IT MATTERS

Hemodynamic instability is a common COVID-19 complication, so CLEWICU's predictive capabilities could prove especially useful during the ongoing pandemic, particularly given ICUs' strained resources around the country.

By analyzing patient data from various sources, including electronic health records and medical devices, CLEWICU provides a picture of overall unit status and helps identify individuals whose conditions are likely to deteriorate.

According to the company, the system notifies users of clinical deterioration up to eight hours in advance, enabling early intervention. The system also identifies low-risk patients who are unlikely to deteriorate, thus potentially enabling better ICU resource management and optimization.

"CLEW's AI-based solution is a huge leap forward in ICU patient care, providing preemptive and potentially lifesaving information that enables early intervention, reduces alarm fatigue and can potentially significantly improve clinical outcomes," said Dr. Craig Lilly of University of Massachusetts Medical School in a statement.

THE LARGER TREND

The FDA granted emergency use authorization to CLEWICU this past June. The tool was among several AI-powered technology innovations developed, or modified, in response to the ongoing pandemic.

Mayo Clinic Chief Information Officer Cris Ross said in December that AI has been crucial in understanding the pandemic. He noted the variety of COVID-19-specific use cases, while he also flagged the risk of algorithmic bias.

"We know that Black and Hispanic patients are infected and die at higher rates than other populations. So we need to be vigilant for the possibility that that fact about the genetic or other predisposition that might be present in those populations could cause us to develop triage algorithms that might cause us to reduce resources available to Black or Hispanic patients because of one of the biases introduced by algorithm development," said Ross.

ON THE RECORD

"We are proud to have received this landmark FDA clearance and deliver a first-of-its-kind product for the industry, giving healthcare providers the critical data that they need to prevent life-threatening situations," said Gal Salomon, CLEW CEO, in a statement.

Kat Jercich is senior editor of Healthcare IT News. Twitter: @kjercich. Email: kjercich@himss.org. Healthcare IT News is a HIMSS Media publication.

Cylance is golden: BlackBerrys new cybersecurity R&D lab is all about AI and IoT – VentureBeat

BlackBerry has announced a new business unit dedicated entirely to cybersecurity research and development (R&D).

The BlackBerry Advanced Technology Development Labs (BlackBerry Labs) will operate at the forefront of cybersecurity R&D, according to BlackBerry. The unit will be spearheaded by BlackBerry chief technology officer (CTO) Charles Eagan, who will lead a team of 120 researchers, security experts, software developers, architects, and more.

Machine learning will be a major focus at the start, with BlackBerry exploring ways to leverage AI to improve security in cars and mobile devices, among other endpoints in the burgeoning internet of things (IoT) sphere.

"Primarily, the purpose of this new division is to integrate emerging technologies into the work we're currently accomplishing," Eagan told VentureBeat. "We're now looking at applying machine learning to our existing areas of application, including automotive, mobile security, and so on. As new technologies and threats emerge, BlackBerry Labs will allow us to take a proactive approach to cybersecurity, not only updating our existing solutions, but evaluating how we can branch out and provide a more comprehensive, data-based, and diverse portfolio to secure the internet of things."

Though the new cybersecurity R&D business unit is now operational, the lab space itself, which will be based at the company's operations center in Waterloo, Canada, is still being built.

BlackBerry's transition from phonemaker to a company specializing in software and services is well documented, though its brand still lives on some smartphones through a licensing deal. The company never quite recovered from the dawn of the modern smartphone era, when its shares spiked at nearly $150 in mid-2008 (when it was still known as Research in Motion) before dropping by around two-thirds in the space of six months. It's worth noting that this reversal of fortune roughly coincided with Apple's iOS and Google's Android starting to gain a foothold.

Over the past eight years, BlackBerry's shares have hovered around the $10 mark, and last week they fell to a four-year low after the company missed its Q2 revenue estimates with a net loss of $44 million, due in part to weak enterprise software sales. Today, BlackBerry's focus is on the B2B realm, where it offers software systems for the automotive industry, including infotainment and autonomous vehicles, as well as medical devices, industrial automation, and more. Many of these applications seek to address security concerns, safeguarding connected devices in a world full of threats, and BlackBerry is looking to reinvent itself by leveraging AI and machine learning.

"The next generation of connected products [is] going to come online sooner than we think, and we're going to use machine learning to better understand and manage the policies and identities of these connected devices," Eagan explained, "to create a safe environment that will allow us to collaborate better, faster, and smarter across great distances and in all areas of application."

Above: BlackBerry CTO Charles Eagan

Last November, BlackBerry announced it was buying AI-powered cybersecurity startup Cylance for $1.4 billion, with the deal closing in February. In a nutshell, Cylance is an AI-powered endpoint protection platform designed to prevent advanced threats such as malware and ransomware.

The Cylance acquisition was entirely in line with BlackBerry's effort to become "the world's largest and most trusted AI-cybersecurity company," as CEO John Chen put it at the time. The deal was all about securing endpoints for enterprise customers and was specifically designed to boost BlackBerry's enterprise-focused IoT platform Spark and its UEM and QNX products.

The integration of Cylance into BlackBerry's core product is expected to be complete in early 2020. And the new cybersecurity unit is effectively setting a foundation on which Cylance, or BlackBerry Cylance as it's now known, can flourish.

"Primarily, my role [in BlackBerry Labs] is to make sure that we're making the most of the Cylance acquisition and that we have connectivity between all the different business units," Eagan said. "We're really focusing on the importance of integrating BlackBerry Cylance's machine learning technology into BlackBerry's product pipeline. However, it's not just about creating an ecosystem of machine learning-based solutions, but rather smartly and strategically adopting machine learning into the work we're accomplishing each day. My role is primarily helping to bridge the different teams and create this connectivity and cross-pollination between the various business units."

Above: Cylance dashboard

Barely a day goes by without some form of data breach, hack, or security lapse hitting the headlines, in part due to the growth of cloud computing and connected devices. And the growing threat presented by the sheer number of connected devices permeating homes and offices has created an opportunity for companies that offer tools to protect these various endpoints. The global cybersecurity market was reportedly worth around $152 billion in 2018, and it's expected to grow to $250 billion within a few years.

Endpoint protection is a hot area in cybersecurity, with the likes of CrowdStrike recently hitting the public markets with a bang, SentinelOne closing a $120 million funding round, and Shape Security raising $51 million at a $1 billion valuation as it prepares for its own IPO. There are a number of bigger players in the space too, of course, including Microsoft, Cisco, Intel, Trend Micro, and many others. And it's against this backdrop that BlackBerry is trying to reinvent itself by investing in new cybersecurity technologies.

"BlackBerry Labs is an intentional investment into the future of the company," Eagan said, noting that initial personnel estimates for BlackBerry Labs quickly escalated from 20 to 120. "The investment of the people we've put into BlackBerry Labs is significant, as we've handpicked the team to include experts in the embedded IoT space with diverse capabilities, including strong data science expertise."

Notably, BlackBerry is also setting up dedicated hardware labs at its offices in Waterloo and Ottawa, where BlackBerry Labs personnel can test new products. Eagan also said the company is looking to partner with six universities on some of its R&D efforts.

In the more immediate term, Eagan said BlackBerry Labs will focus on automotive-based applications for machine learning in cybersecurity, which is particularly relevant given the expected growth of connected cars in the coming years. The connected car market was pegged at $63 billion in 2017, a figure that could rise to more than $200 billion by 2025.

With CES 2020 on the horizon, Eagan said BlackBerry will be using the annual Las Vegas tech extravaganza to demonstrate how its machine learning smarts can improve security in connected cars.

"As vehicles become connected, we need to ensure a cybersecurity operations center is running diagnostics within the car at all times to facilitate a monitored environment," Eagan explained. "This is something BlackBerry Cylance does extremely well, and we're planning to tangibly bring it into the automotive sector in the upcoming months."

Above: A photo from the BlackBerry Network Operations Center in Waterloo, Canada, where BlackBerry Labs will be located

At a time when every company is effectively becoming a software company, the need to run a watertight ship is greater than ever. However, much has been written about the cybersecurity workforce shortfall and the fact that it isn't showing any signs of improving, which is why companies are investing in automated tools to reduce the need for hands-on human intervention.

"As the threat landscape expands, enterprises cannot rely on the same incident reaction-based model that may have been effective in the past," Eagan said. "They need to scale quickly with solutions that leverage AI to help them prepare for attacks and address vulnerabilities in an automated and anticipatory way; it's the only way they'll be able to scale to meet their security needs."

That said, automation is only part of the solution. Skilled personnel are still very much required, which is one of the reasons BlackBerry shelled out north of $1 billion to acquire Cylance; it was as much a talent grab as a product acquisition. And this combination of cutting-edge technology and top talent could help BlackBerry lure others on board.

"The addition of the BlackBerry Cylance team has given us an influx of talent that has proven a real boon for our company's plans to better understand and adopt AI-based technology," Eagan continued. "Implementing and integrating AI-based solutions, like those pioneered by BlackBerry Cylance, is certainly a focus for our team moving forward, but we remain committed to growing and hiring talent that will work alongside automated processes to ensure the best result possible for all users and organizations."

AI is changing how we do science. Get a glimpse – Science Magazine

By Science News Staff | Jul. 5, 2017, 11:00 AM

Particle physicists began fiddling with artificial intelligence (AI) in the late 1980s, just as the term "neural network" captured the public's imagination. Their field lends itself to AI and machine-learning algorithms because nearly every experiment centers on finding subtle spatial patterns in the countless, similar readouts of complex particle detectors, just the sort of thing at which AI excels. "It took us several years to convince people that this is not just some magic, hocus-pocus, black box stuff," says Boaz Klima, of Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois, one of the first physicists to embrace the techniques. Now, AI techniques number among physicists' standard tools.

Neural networks search for fingerprints of new particles in the debris of collisions at the LHC.

© 2012 CERN, FOR THE BENEFIT OF THE ALICE COLLABORATION

Particle physicists strive to understand the inner workings of the universe by smashing subatomic particles together with enormous energies to blast out exotic new bits of matter. In 2012, for example, teams working with the world's largest proton collider, the Large Hadron Collider (LHC) in Switzerland, discovered the long-predicted Higgs boson, the fleeting particle that is the linchpin of physicists' explanation of how all other fundamental particles get their mass.

Such exotic particles don't come with labels, however. At the LHC, a Higgs boson emerges from roughly one out of every 1 billion proton collisions, and within a billionth of a picosecond it decays into other particles, such as a pair of photons or a quartet of particles called muons. To reconstruct the Higgs, physicists must spot all those more-common particles and see whether they fit together in a way that's consistent with them coming from the same parent, a job made far harder by the hordes of extraneous particles in a typical collision.

Algorithms such as neural networks excel at sifting signal from background, says Pushpalatha Bhat, a physicist at Fermilab. In a particle detector, usually a huge barrel-shaped assemblage of various sensors, a photon typically creates a spray of particles, or "shower," in a subsystem called an electromagnetic calorimeter. So do electrons and particles called hadrons, but their showers differ subtly from those of photons. Machine-learning algorithms can tell the difference by sniffing out correlations among the multiple variables that describe the showers. Such algorithms can also, for example, help distinguish the pairs of photons that originate from a Higgs decay from random pairs. "This is the proverbial needle-in-the-haystack problem," Bhat says. "That's why it's so important to extract the most information we can from the data."
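The separation Bhat describes can be illustrated with a toy classifier. The sketch below is not LHC code: the two "shower-shape" variables, their distributions, and the choice of a simple logistic-regression model are all invented for illustration. The underlying idea, learning correlations among shower variables to separate photon-like from hadron-like showers, is the same.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical shower-shape variables: lateral width and depth of the
# energy deposit in the calorimeter. Photon showers are made narrower
# and shallower than hadron showers, with overlapping distributions.
n = 5000
photons = rng.normal(loc=[1.0, 10.0], scale=[0.3, 2.0], size=(n, 2))
hadrons = rng.normal(loc=[2.0, 16.0], scale=[0.6, 3.0], size=(n, 2))

X = np.vstack([photons, hadrons])
y = np.array([1] * n + [0] * n)  # 1 = photon, 0 = hadron

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The classifier learns how the shower variables jointly separate
# signal from background, rather than cutting on each one alone.
clf = LogisticRegression().fit(X_tr, y_tr)
print(f"photon/hadron separation accuracy: {clf.score(X_te, y_te):.2f}")
```

Real analyses use many more shower variables and far more flexible models (boosted decision trees, deep networks), but the workflow, simulate or label showers, fit a discriminant, apply it to data, follows this shape.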

Machine learning hasn't taken over the field. Physicists still rely mainly on their understanding of the underlying physics to figure out how to search data for signs of new particles and phenomena. But AI is likely to become more important, says Paolo Calafiura, a computer scientist at Lawrence Berkeley National Laboratory in Berkeley, California. In 2024, researchers plan to upgrade the LHC to increase its collision rate by a factor of 10. At that point, Calafiura says, machine learning will be vital for keeping up with the torrent of data. Adrian Cho

With billions of users and hundreds of billions of tweets and posts every year, social media has brought big data to social science. It has also opened an unprecedented opportunity to use artificial intelligence (AI) to glean meaning from the mass of human communications, psychologist Martin Seligman has recognized. At the University of Pennsylvania's Positive Psychology Center, he and more than 20 psychologists, physicians, and computer scientists in the World Well-Being Project use machine learning and natural language processing to sift through gobs of data to gauge the public's emotional and physical health.

That's traditionally done with surveys. "But social media data are unobtrusive, it's very inexpensive, and the numbers you get are orders of magnitude greater," Seligman says. It is also messy, but AI offers a powerful way to reveal patterns.

In one recent study, Seligman and his colleagues looked at the Facebook updates of 29,000 users who had taken a self-assessment of depression. Using data from 28,000 of the users, a machine-learning algorithm found associations between words in the updates and depression levels. It could then successfully gauge depression in the other users based only on their updates.
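The word-to-score step in that study can be sketched in miniature. Everything below, the tiny corpus, the scores, and the ridge-regression model, is invented for illustration; the real study used 28,000 users' updates and richer language features, but the principle of regressing self-assessed depression on word counts is similar.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge

# Hypothetical status updates paired with self-assessed depression
# scores (higher = more depressed). A toy stand-in for the 28,000
# Facebook users used to train the real model.
updates = [
    "feeling alone and tired again tonight",
    "so tired of everything lately",
    "great hike with friends this weekend",
    "happy birthday dinner with the family",
    "alone again nothing feels right",
    "excited for the concert with friends",
]
scores = [8.0, 7.0, 1.0, 2.0, 9.0, 1.0]

# Bag-of-words features: one column per vocabulary word.
vec = CountVectorizer()
X = vec.fit_transform(updates)

# Regularized regression finds which words co-vary with depression level.
model = Ridge(alpha=1.0).fit(X, scores)

# Gauge depression for an unseen update from a held-out user.
new = vec.transform(["tired and alone tonight"])
print(f"predicted depression score: {model.predict(new)[0]:.1f}")
```

The trained weights play the role of the word-depression associations the article describes: words that appear in high-scoring updates pull predictions up, and the fitted model can then score users it never saw.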

In another study, the team predicted county-level heart disease mortality rates by analyzing 148 million tweets; words related to anger and negative relationships turned out to be risk factors. The predictions from social media matched actual mortality rates more closely than did predictions based on 10 leading risk factors, such as smoking and diabetes. The researchers have also used social media to predict personality, income, and political ideology, and to study hospital care, mystical experiences, and stereotypes. The team has even created a map coloring each U.S. county according to well-being, depression, trust, and five personality traits, as inferred from Twitter.

"There's a revolution going on in the analysis of language and its links to psychology," says James Pennebaker, a social psychologist at the University of Texas in Austin. He focuses not on content but on style, and has found, for example, that the use of function words in a college admissions essay can predict grades. Articles and prepositions indicate analytical thinking and predict higher grades; pronouns and adverbs indicate narrative thinking and predict lower grades. He also found support for suggestions that much of the 1728 play Double Falsehood was likely written by William Shakespeare: Machine-learning algorithms matched it to Shakespeare's other works based on factors such as cognitive complexity and rare words. "Now, we can analyze everything that you've ever posted, ever written, and increasingly how you and Alexa talk," Pennebaker says. The result: richer and richer pictures of who people are. Matthew Hutson

For geneticists, autism is a vexing challenge. Inheritance patterns suggest it has a strong genetic component. But variants in the scores of genes known to play some role in autism can explain only about 20% of all cases. Finding other variants that might contribute requires looking for clues in data on the 25,000 other human genes and their surrounding DNA, an overwhelming task for human investigators. So computational biologist Olga Troyanskaya of Princeton University and the Simons Foundation in New York City enlisted the tools of artificial intelligence (AI).

Artificial intelligence tools are helping reveal thousands of genes that may contribute to autism.

BSIP SA/ALAMY STOCK PHOTO

"We can only do so much as biologists to show what underlies diseases like autism," explains collaborator Robert Darnell, founding director of the New York Genome Center and a physician scientist at The Rockefeller University in New York City. "The power of machines to ask a trillion questions where a scientist can ask just 10 is a game-changer."

Troyanskaya combined hundreds of data sets on which genes are active in specific human cells, how proteins interact, and where transcription factor binding sites and other key genome features are located. Then her team used machine learning to build a map of gene interactions and compared those of the few well-established autism risk genes with those of thousands of other unknown genes, looking for similarities. That flagged another 2500 genes likely to be involved in autism, they reported last year in Nature Neuroscience.
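One way to picture that guilt-by-association step is to score each unknown gene by how closely its interaction profile resembles those of the established risk genes. The sketch below is entirely synthetic: the gene "profiles," the planted signal, and the use of plain cosine similarity are invented for illustration, and Troyanskaya's actual method was far more sophisticated. It only shows the ranking idea.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical gene-interaction map: row i gives gene i's interaction
# strengths with every other gene (the real map was learned from
# hundreds of functional-genomics data sets).
n_genes = 200
profiles = rng.random((n_genes, n_genes))

# A few well-established risk genes; give them, and two hidden "hits"
# (genes 50 and 51), correlated profiles so this toy has a signal.
known_risk = [0, 1, 2]
shared = rng.random(n_genes)
for g in known_risk + [50, 51]:
    profiles[g] = 0.8 * shared + 0.2 * rng.random(n_genes)

def cosine(a, b):
    # Cosine similarity between two interaction profiles.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Score every other gene by similarity to the mean risk-gene profile,
# then rank candidates from most to least similar.
risk_profile = profiles[known_risk].mean(axis=0)
candidates = [g for g in range(n_genes) if g not in known_risk]
ranked = sorted(candidates, key=lambda g: -cosine(profiles[g], risk_profile))

print("top candidate genes:", sorted(ranked[:2]))
```

In the real analysis, similarity was computed over a learned functional network rather than raw profiles, and the top-ranked genes became the roughly 2,500 candidates reported in Nature Neuroscience.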

But genes don't act in isolation, as geneticists have recently realized. Their behavior is shaped by the millions of nearby noncoding bases, which interact with DNA-binding proteins and other factors. Identifying which noncoding variants might affect nearby autism genes is an even tougher problem than finding the genes in the first place, and graduate student Jian Zhou in Troyanskaya's Princeton lab is deploying AI to solve it.

To train the program, a deep-learning system, Zhou exposed it to data collected by the Encyclopedia of DNA Elements and Roadmap Epigenomics, two projects that cataloged how tens of thousands of noncoding DNA sites affect neighboring genes. The system in effect learned which features to look for as it evaluates unknown stretches of noncoding DNA for potential activity.

When Zhou and Troyanskaya described their program, called DeepSEA, in Nature Methods in October 2015, Xiaohui Xie, a computer scientist at the University of California, Irvine, called it a milestone in applying deep learning to genomics. Now, the Princeton team is running the genomes of autism patients through DeepSEA, hoping to rank the impacts of noncoding bases.

Xie is also applying AI to the genome, though with a broader focus than autism. He, too, hopes to classify mutations by the odds that they are harmful. But he cautions that in genomics, deep-learning systems are only as good as the data sets on which they are trained. "Right now I think people are skeptical that such systems can reliably parse the genome," he says. "But I think down the road more and more people will embrace deep learning." Elizabeth Pennisi

This past April, astrophysicist Kevin Schawinski posted fuzzy pictures of four galaxies on Twitter, along with a request: Could fellow astronomers help him classify them? Colleagues chimed in to say the images looked like ellipticals and spirals, familiar species of galaxies.

Some astronomers, suspecting trickery from the computation-minded Schawinski, asked outright: Were these real galaxies? Or were they simulations, with the relevant physics modeled on a computer? In truth they were neither, he says. At ETH Zurich in Switzerland, Schawinski, computer scientist Ce Zhang, and other collaborators had cooked the galaxies up inside a neural network that doesn't know anything about physics. It just seems to understand, on a deep level, how galaxies should look.

With his Twitter post, Schawinski just wanted to see how convincing the network's creations were. But his larger goal was to create something like the technology in movies that magically sharpens fuzzy surveillance images: a network that could make a blurry galaxy image look like it was taken by a better telescope than it actually was. That could let astronomers squeeze out finer details from reams of observations. "Hundreds of millions or maybe billions of dollars have been spent on sky surveys," Schawinski says. "With this technology we can immediately extract somewhat more information."

The forgery Schawinski posted on Twitter was the work of a generative adversarial network, a kind of machine-learning model that pits two dueling neural networks against each other. One is a generator that concocts images; the other is a discriminator that tries to spot any flaws that would give away the manipulation, forcing the generator to get better. Schawinski's team took thousands of real images of galaxies and then artificially degraded them. Then the researchers taught the generator to spruce up the images again so they could slip past the discriminator. Eventually the network could outperform other techniques for smoothing out noisy pictures of galaxies.
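The data-preparation step described above, degrading clean images to build (degraded, clean) training pairs, can be sketched in a few lines. The toy Gaussian "galaxy" and the simple box blur below are invented for illustration; the real work used actual survey images and a full adversarial training loop:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box blur via shifted sums (no SciPy needed)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def degrade(img, noise_sigma=0.05, seed=0):
    """Blur, then add Gaussian noise: a crude stand-in for a worse telescope."""
    rng = np.random.default_rng(seed)
    return box_blur(img) + rng.normal(0.0, noise_sigma, img.shape)

# A toy "galaxy": a smooth 2-D Gaussian blob.
y, x = np.mgrid[-16:16, -16:16]
clean = np.exp(-(x**2 + y**2) / 50.0)
blurry = degrade(clean)

# The generator would be trained to map `blurry` back toward `clean`,
# while the discriminator tries to tell restored images from real ones.
print(clean.shape, blurry.shape)
```

Because the degradation is applied synthetically, the network always has access to the ground-truth sharp image, which is what makes the supervised part of the training possible.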

AI that knows what a galaxy should look like transforms a fuzzy image (left) into a crisp one (right).

KIYOSHI TAKAHASE SEGUNDO/ALAMY STOCK PHOTO

Schawinski's approach is a particularly avant-garde example of machine learning in astronomy, says astrophysicist Brian Nord of Fermi National Accelerator Laboratory in Batavia, Illinois, but it's far from the only one. At the January meeting of the American Astronomical Society, Nord presented a machine-learning strategy to hunt down strong gravitational lenses: rare arcs of light in the sky that form when the images of distant galaxies travel through warped spacetime on the way to Earth. These lenses can be used to gauge distances across the universe and find unseen concentrations of mass.

Strong gravitational lenses are visually distinctive but difficult to describe with simple mathematical rules: hard for traditional computers to pick out, but easy for people. Nord and others realized that a neural network, trained on thousands of lenses, can gain similar intuition. "In the following months, there have been almost a dozen papers, actually, on searching for strong lenses using some kind of machine learning. It's been a flurry," Nord says.

And it's just part of a growing realization across astronomy that artificial intelligence strategies offer a powerful way to find and classify interesting objects in petabytes of data. To Schawinski, "That's one way I think in which real discovery is going to be made in this age of 'Oh my God, we have too much data.'" Joshua Sokol

Organic chemists are experts at working backward. Like master chefs who start with a vision of the finished dish and then work out how to make it, many chemists start with the final structure of a molecule they want to make and then think about how to assemble it. "You need the right ingredients and a recipe for how to combine them," says Marwin Segler, a graduate student at the University of Münster in Germany. He and others are now bringing artificial intelligence (AI) into their molecular kitchens.

They hope AI can help them cope with the key challenge of molecule-making: choosing from among hundreds of potential building blocks and thousands of chemical rules for linking them. For decades, some chemists have painstakingly programmed computers with known reactions, hoping to create a system that could quickly calculate the most facile molecular recipes. However, Segler says, chemistry can be very subtle. "It's hard to write down all the rules in a binary way."

So Segler, along with computer scientist Mike Preuss at Münster and Segler's adviser Mark Waller, turned to AI. Instead of programming in hard-and-fast rules for chemical reactions, they designed a deep neural network program that learns on its own how reactions proceed, from millions of examples. "The more data you feed it, the better it gets," Segler says. Over time the network learned to predict the best reaction for a desired step in a synthesis. Eventually it came up with its own recipes for making molecules from scratch.
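The working-backward idea can be caricatured with a hand-written rule table standing in for the learned network. All the molecule names and "reactions" below are invented for the sketch; the real system learns its recipes from millions of reactions rather than a lookup table, and the point here is only the recursive expansion of a target into purchasable precursors:

```python
RULES = {
    # product: list of possible precursor sets (one tuple per known reaction)
    "ester": [("acid", "alcohol")],
    "acid":  [("nitrile",)],
    "amide": [("acid", "amine"), ("ester", "amine")],
}
PURCHASABLE = {"alcohol", "nitrile", "amine"}

def plan(target):
    """Recursively expand a target until every leaf is purchasable.
    Returns an ordered list of reaction steps, or None if no route exists."""
    if target in PURCHASABLE:
        return []
    for precursors in RULES.get(target, []):
        steps = []
        for p in precursors:
            sub = plan(p)
            if sub is None:
                break          # this reaction's precursor is unreachable
            steps.extend(sub)
        else:
            return steps + [f"{' + '.join(precursors)} -> {target}"]
    return None

route = plan("amide")
print(route)
```

A learned model replaces the `RULES` table with ranked reaction predictions, and a search procedure explores the most promising branches first instead of taking them in order.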

The trio tested the program on 40 different molecular targets, comparing it with a conventional molecular design program. Whereas the conventional program came up with a solution for synthesizing target molecules 22.5% of the time in a 2-hour computing window, the AI figured it out 95% of the time, they reported at a meeting this year. Segler, who will soon move to London to work at a pharmaceutical company, hopes to use the approach to improve the production of medicines.

Paul Wender, an organic chemist at Stanford University in Palo Alto, California, says it's too soon to know how well Segler's approach will work. But Wender, who is also applying AI to synthesis, thinks it could have a profound impact, not just in building known molecules but in finding ways to make new ones. Segler adds that AI won't replace organic chemists soon, because they can do far more than just predict how reactions will proceed. Like a GPS navigation system for chemistry, AI may be good for finding a route, but it can't design and carry out a full synthesis by itself.

Of course, AI developers have their eyes trained on those other tasks as well. Robert F. Service

Read more from the original source:

AI is changing how we do science. Get a glimpse - Science Magazine

Why are AI predictions so terrible? – VentureBeat

In 1997, IBM's Deep Blue beat world chess champion Garry Kasparov, the first time an AI technology was able to outperform a world expert in a highly complicated endeavor. It was even more impressive when you consider they were using 1997 computational power. In 1997, my computer could barely connect to the internet; long waits of agonizing beeps and buzzes made it clear the computer was struggling under the weight of the task.

Even in the wake of Deep Blue's literally game-changing victory, most experts remained unconvinced. Piet Hut, an astrophysicist at the Institute for Advanced Study in New Jersey, told the NY Times in 1997 that it would still be another hundred years before a computer beat a human at Go.

Admittedly, the ancient game of Go is vastly more complicated than chess. Even in 2014, the common consensus was that an AI victory in Go was still decades away. The reigning world champion, Lee Sedol, gloated in an article for Wired: "There is chess in the western world, but Go is incomparably more subtle and intellectual."

Then AlphaGo, Google's AI platform, defeated him a mere two years later. How's that for subtlety?

In recent years, it has become increasingly clear that AI can outperform humans in much more than board games. This has led to growing anxiety among the working public that their very livelihoods may soon be automated.

Countless publications have been quick to seize on this fear to drive pageviews. It seems like every day there is a new article claiming to know definitively which jobs will survive the AI revolution and which will not. Some even go so far as to express their predictions down to a decimal point, giving the whole exercise a sense of gravitas. However, if you compare their conclusions, the most striking aspect is how wildly inconsistent the results are.

One of the latest entries into the mire is a Facebook quiz aptly named "Will Robots Take My Job?". Naturally, I looked up writers, and I received back a comforting 3.8%. After all, if a doctor told me I had a 3.8% chance of succumbing to a disease, I would hardly be in a hurry to get my affairs in order.

There is just one thing keeping me from patting myself on the back: AI writers already exist and are being widely used by major publications. In this light, the quiz's prediction is like a doctor declaring there was only a 3.8% chance of my disease getting worse, at my funeral.

All this raises the question: why are these predictions about AI so bad?

Digging into the sources for "Will Robots Take My Job?" gives us our first clue. The predictions are based on a research paper. This is at the root of most bad AI predictions. Academics tend to view the world very differently from Silicon Valley entrepreneurs. Where in academia just getting a project approved may take years, tech entrepreneurs operate on the idea of "what can we get built and shipped by Friday?" Therefore, asking academics for predictions about the proliferation of an industry is like asking your local DMV how quickly Uber might gain market share in China. They may be experts in the vertical, but they are still worlds away from the "move fast and break things" mentality that pervades the tech community.

As a result, their predictions are as good as random guesses, colored by their understanding of a world that moves at a glacial pace.

Another contributing factor to bad AI predictions is human bias. When the question is who will win, man or machine, we can't help but root for the home team. It has been said that it is very hard to make someone believe something when their job depends on them not understanding it. Meaning: the banter around the water cooler at oil companies rarely turns to concerns about climate change. AI poses a threat to the very notion of human-based jobs, so the stakes are much higher. When you ask people who work for a university the likelihood of AI automating all jobs, it is all but impossible for them to be objective.

Hence the conservative estimates: to admit that any job that can be taught to a person can obviously also be taught to an AI would fill the researcher with existential dread. Better to sidestep the whole issue and say it won't happen for another 50 years, hoping they'll be dead by then and it will be the next guy's problem.

Which brings us to our final contributing factor: humans are really bad at understanding exponential growth. The research paper behind "Will Robots Take My Job?" was from 2013, and the last four years in AI might as well have been 40 years, given how much has changed. In fact, the bad predictions make more sense through this lens. There is an obvious bias toward assuming that jobs requiring decision-making are safer than those that are purely routine. However, the proliferation of neural network resources is showing that AI is actually very good at decision-making when the task is well defined.

The problem is that our somewhat primitive reasoning tends to view the world in linear terms. Take this example, often used on logic tests: if the number of lily pads on a lake doubles every day, and the lake will be full at 30 days, how many days will it take for the lake to be half full? A depressingly high number of people's knee-jerk response would be 15. The real answer is 29. In fact, if you were watching the pond, the lily pads wouldn't appear to be growing at all until about the 26th day. If you were to ask the average person on day 25 how many days until the pond was full, they might reasonably conclude it would take decades.
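The lily-pad arithmetic is easy to check directly: if the pond is full on day 30 and coverage doubles daily, then coverage on day d is 2^(d-30) of the pond.

```python
FULL_DAY = 30

def coverage(day):
    """Fraction of the pond covered on a given day (full = 1.0)."""
    return 2.0 ** (day - FULL_DAY)

# First day on which at least half the pond is covered.
half_full_day = next(d for d in range(1, FULL_DAY + 1) if coverage(d) >= 0.5)
print(half_full_day)     # day 29, not day 15
print(coverage(25))      # on day 25, barely 3% is covered
```

That 3% figure is the whole trap: an observer on day 25 sees an almost empty pond, five days before it is completely full.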

The reality is that AI tools are growing exponentially. Even in their current iteration, they have the power to automate at least part of every human job. The uncomfortable truth that all these AI predictions seek to distract us from is that no job is safe from automation. Collectively, we are like Lee Sedol in 2014, smug in our sense of superiority. The coming proliferation of AI is perhaps best summed up in the sentiment attributed to Nelson Mandela: "It always seems impossible until it is done."

Aiden Livingston is the founder of Casting.AI, the first chatbot talent agent.

See more here:

Why are AI predictions so terrible? - VentureBeat

3 Ways Artificial Intelligence Is Transforming The Energy Industry – OilPrice.com

Back in 2017, Bill Gates penned an online essay to graduating college students around the world in which he tapped artificial intelligence (AI), clean energy, and biosciences as the three fields he would spend his energies on if he could start all over again and wanted to make a big impact in the world today.

It turns out that the Microsoft co-founder was right on the money.

Three years down the line and deep in the throes of the worst pandemic in modern history, AI and renewable energy have emerged as some of the biggest megatrends of our time. On the one hand, AI is powering the fourth industrial revolution and is increasingly being viewed as a key strategy for mastering some of the greatest challenges of our time, including climate change and pollution. On the other hand, there is a widespread recognition that carbon-free technologies like renewable energy will play a critical role in combating climate change.

Consequently, stocks in the AI, robotics, and automation sectors as well as clean energy ETFs have lately become hot property.

From utilities employing AI and machine learning to predict power fluctuations and optimize costs, to companies using IoT sensors for early fault detection and wildfire powerline and gear monitoring, here are real-life cases of how AI has continued to power an energy revolution even during the pandemic.

Top uses of AI in the energy sector

Source: Intellias

#1. Innowatts: Energy monitoring and management

The Covid-19 crisis has triggered an unprecedented decline in power consumption. Not only has overall consumption suffered, but there have also been significant shifts in power usage patterns, with sharp decreases by businesses and industries while domestic use has increased as more people work from home.

Houston, Texas-based Innowatts is a startup that has developed an automated toolkit for energy monitoring and management. The company's eUtility platform ingests data from more than 34 million smart energy meters across 21 million customers, including major U.S. utility companies such as Arizona Public Service Electric, Portland General Electric, Avangrid, Gexa Energy, WGL, and Mega Energy. Innowatts says its machine learning algorithms can analyze the data to forecast several critical data points, including short- and long-term loads, variances, weather sensitivity, and more.
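Innowatts' models are proprietary, but the basic shape of short-term load forecasting can be sketched with a linear model fit on lagged meter readings. The data below is synthetic (a daily cycle plus noise), and ordinary least squares stands in for the real ML pipeline:

```python
import numpy as np

def make_lagged(series, n_lags=24):
    """Rows of [load(t-n_lags) ... load(t-1)] paired with target load(t)."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

rng = np.random.default_rng(1)
hours = np.arange(24 * 60)                      # 60 days of hourly readings
# Synthetic hourly load: a daily sinusoidal cycle around 100 MW plus noise.
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)

X, y = make_lagged(load)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)    # least-squares fit

# One-step-ahead forecast: predict the hour after the series ends.
forecast = load[-24:] @ coef
print(round(float(forecast), 1))
```

A production system layers weather forecasts, customer segments, and calendar effects on top of this lag structure, which is where the claimed accuracy gains over naive projections come from.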


Innowatts estimates that without its machine learning models, utilities would have seen inaccuracies of 20% or more on their projections at the peak of the crisis, thus placing enormous strain on their operations and ultimately driving up costs for end-users.

#2. Google: Boosting the value of wind energy

A while back, we reported that proponents of nuclear energy were using the pandemic to highlight its strong points vis-à-vis the shortcomings of renewable energy sources. To wit, wind and solar are the least predictable and consistent among the major power sources, while nuclear and natural gas boast the highest capacity factors.

Well, one tech giant has figured out how to employ AI to iron out those kinks.

Three years ago, Google announced that it had reached 100% renewable energy for its global operations, including its data centers and offices. Today, Google is the largest corporate buyer of renewable power, with commitments totaling 2.6 gigawatts (2,600 megawatts) of wind and solar energy.

In 2017, Google teamed up with DeepMind, its sister company under Alphabet, to search for a solution to the highly intermittent nature of wind power. Using DeepMind's AI platform, Google deployed ML algorithms to 700 megawatts of wind power capacity in the central United States, enough to power a medium-sized city.

DeepMind says that by using a neural network trained on widely available weather forecasts and historical turbine data, it is now able to predict wind power output 36 hours ahead of actual generation. Consequently, this has boosted the value of Google's wind energy by roughly 20 percent.

A similar model can be used by other wind farm operators to make smarter, faster and more data-driven optimizations of their power output to better meet customer demand.

DeepMind uses trained neural networks to predict wind power output 36 hours ahead of actual generation

Source: DeepMind

#3. Wildfire powerline and gear monitoring

In June, California's biggest utility, Pacific Gas & Electric, found itself in deep trouble. The company pleaded guilty over the tragic 2018 wildfire that left 84 people dead, and PG&E was saddled with hefty penalties: $13.5 billion as compensation to people who lost homes and businesses, and another $2 billion fine from the California Public Utilities Commission for negligence.

It will be a long climb back to the top for the fallen giant: its stock crashed nearly 90% following the disaster, even though the company emerged from bankruptcy in July.

Perhaps the loss of lives and livelihoods could have been averted if PG&E had invested in an AI-powered early-detection system.


One such system comes from a startup called VIA, based in Somerville, Massachusetts. VIA says it has developed a blockchain-based app that can predict when vulnerable power transmission gear, such as transformers, might be at risk in a disaster. VIA's app makes better use of energy data sources, including smart meters and equipment inspections.

Another comparable product comes from the Korean firm Alchera, which uses AI-based image recognition in combination with thermal and standard cameras to monitor power lines and substations in real time. The AI system is trained to watch the infrastructure for abnormal events such as falling trees, smoke, fire, and even intruders.

Other than utilities, oil and gas producers have also been integrating AI into their operations.

By Alex Kimani for Oilprice.com


Read the original here:

3 Ways Artificial Intelligence Is Transforming The Energy Industry - OilPrice.com

Applications of Artificial Intelligence in Bioinformatics – AI Daily

The term bioinformatics was first defined by Paulien Hogeweg and her colleague Ben Hesper in 1970 as "the study of informatics processes in biotic systems." In recent years, bioinformatics has come to be seen as an interdisciplinary field combining biology, computer science, mathematics, and statistics. Artificial intelligence (AI) is a tool of computer science that is becoming increasingly popular among scientists. Since AI incorporates machine learning (ML), including deep learning, scientists recognize its value for reading and analyzing large datasets for prediction and pattern identification in research.

Classifying proteins

Proteins are the basic building blocks of life: they are responsible for all the biological processes of a cell. There are different types of proteins, and they are grouped according to their biological functions. Because many proteins have extremely similar primary structures and a common evolutionary origin, classifying them is challenging. This issue can be addressed using AI and its computational power. There are many ways to classify proteins with AI, but a common method is to build a program that compares amino acid sequences against the known protein sequences in large databases, using this information to classify the target protein.
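The database-comparison approach can be sketched with a crude k-mer similarity score: break sequences into short overlapping fragments and assign the target the family of its best-matching reference. The reference "families" and sequence fragments below are toy stand-ins invented for the sketch; real tools use alignment algorithms or learned models over curated databases:

```python
def kmers(seq, k=3):
    """Set of all overlapping length-k fragments of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a, b, k=3):
    """Jaccard overlap of k-mer sets: a crude sequence-similarity score."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)

# Toy labeled database: one representative sequence per "family".
REFERENCE = {
    "kinase-like": "MGSNKSKPKDASQRRRSLEP",
    "globin-like": "MVLSPADKTNVKAAWGKVGA",
}

def classify(target):
    """Assign the family whose reference sequence is most similar."""
    return max(REFERENCE, key=lambda fam: similarity(target, REFERENCE[fam]))

# A fragment sharing most of its k-mers with the globin-like entry:
print(classify("VLSPADKTNVKAAW"))
```

Scaling the same idea to millions of database entries, and replacing the hand-rolled score with learned representations, is where the AI comes in.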

Analyzing and classifying proteins accurately are of the utmost importance as proteins are responsible for many key functions in an organism.

Scientists can further use this technology to predict a protein's function by comparing its amino acid sequence with those of proteins whose functions are already known.

Computer Aided Drug Design (CADD)

CADD is a specialized field of research that uses computational methods to simulate how drugs interact with harmful cells. This is especially useful in drug discovery, when scientists attempt to find the best possible chemical compound for a treatment (for example, one targeting cancer cells). This technology relies heavily on information available from databases and computational resources. AI is able to manage these tasks efficiently, saving scientists time and effort.

As shown above, there are many applications of AI in the field of bioinformatics. As the technology improves, scientists will be able to integrate AI into more aspects of bioinformatics, which will especially benefit researchers.

Thumbnail credit: blog.f1000.com

Link:

Applications of Artificial Intelligence in Bioinformatics - AI Daily

How Will Your Career Be Impacted By Artificial Intelligence? – Forbes

Reject it or embrace it. Either way, artificial intelligence is here to stay.

Nobody can predict the future with absolute precision.

But when it comes to the impact of artificial intelligence (AI) on peoples careers, the recent past provides some intriguing clues.

Rhonda Scharf's book, Alexa Is Stealing Your Job: The Impact of Artificial Intelligence on Your Future, offers some insights and predictions that are well worth our consideration.

In the first two parts of my conversation with Rhonda (see "What Role Will Artificial Intelligence Play In Your Life?" and "Artificial Intelligence, Privacy, And The Choices You Must Make"), we discussed the growth of AI in recent years and talked about the privacy concerns of many AI users.

In this final part, we look at how AI is affecting, and will continue to affect, people's career opportunities.

Spoiler alert: there's some good news here.

Rodger Dean Duncan: You quote one researcher who says robots are "not here to take away our jobs, they're here to give us a promotion." What does that mean?

Rhonda Scharf:Much like the computer revolution, we need jobs to maintain the systems that have been created. This creates new, desirable jobs where humans work alongside technology. These new jobs are called the trainers, explainers, and sustainers.

Trainers will teach a machine what it needs to do. For instance, we need to teach a machine that when I yell at it (loud voice), I may be frustrated. It needs to be taught that when I ask it to call Robert, who Robert is and what phone number should be used. Once the machine has a basic understanding, it continues to self-learn, but it needs the basics taught to it (as children do).


Explainers are human experts who explain computer behavior to others. They would explain, for example, why a self-driving car performed in a certain way. Or why AI sold shares in a stock at a certain point of the day. The same way lawyers can explain why someone acted in self-defense, when initially his or her actions seemed inappropriate, we need explainers to tell us why a machine did what it did.

Sustainers ensure that our systems are functioning correctly, safely, and responsibly. In the future, they'll ensure that AI systems uphold ethical standards and that industrial robots don't harm humans, because robots don't understand that we're fragile, unlike machinery.

There are going to be many jobs that AI can't replace. We need to think, evolve, interpret, and relate. As smart as a chatbot can be, it will never have the same qualities as my best friend. We will need people for the intangible side of relationships.

Duncan:What should people look for to maximize their careers through the use of AI?

Scharf: According to the World Economic Forum, the top 10 in-demand skills for 2020 include complex problem-solving, critical thinking, creativity, emotional intelligence, judgment and decision-making, and cognitive flexibility. These are the skills that will provide value to your organization. By demonstrating all of these skills, you will be positioning yourself as a valuable resource. We'll have AI to handle basic tasks and administrative work. People need complex thinking to propel organizations forward.

Duncan:Bonus: What question do you wish I had asked, and how would you respond?

If you don't want to be left behind, you'd better get educated on AI.

Scharf: I wish you had asked how I feel about artificial intelligence. Whether I am afraid for my future, for the future of my children, and my children's children?

The answer is no. I don't think AI is all the doom and gloom that has been publicized. But I also don't believe we're about to lead a life of leisure and have the world operate on its own.

As history has shown us, these types of life-altering changes happen periodically. This is the next one. I believe the way we work is about to change, the same way it changed during the Industrial Revolution and the same way it evolved in response to automation. The way we live is about to change. (Think pasteurization and food storage.) Those who adapt will have a better life for it, and those who refuse to adapt will suffer.

I'm confident that I will still be employed for as long as I want to be. My children have only known a life with computers and are open to change, and my future grandchildren will only know a life with AI.

I'm excited about our future. I'm excited about what AI can bring to my life. I embrace Alexa and all her friends and welcome them into my home.

Link:

How Will Your Career Be Impacted By Artificial Intelligence? - Forbes

COVID-19 Leads Food Companies and Meat Processors to Explore AI and Robotics, Emphasize Sanitation, and Work from Home – FoodSafetyTech

The coronavirus pandemic has turned so many aspects of business upside down that it is changing how companies approach and execute their strategy. The issue touches all aspects of business and operations, and in a brief Q&A with Food Safety Tech, Mike Edgett of Sage touches on just a few areas in which the future of food manufacturing looks different.

Food Safety Tech: How are food manufacturers and meat processors using AI and robotics to mitigate risks posed by COVID-19?

Mike Edgett: Many food manufacturers and meat processors have had to look to new technologies to account for the disruptions caused by the COVID-19 pandemic. While most of these measures have been vital in preventing further spread of the virus (or any virus/disease that may present itself in the future), they've also given many food manufacturers insight into how these technologies could have a longer-term impact on their operations.

For instance, the mindset that certain jobs need to be manual has been reconsidered. Companies are embracing automation (e.g., the boning and chopping of meat in a meatpacking plant) to replace historically manual processes. While it may take a while for innovations like this to be fully incorporated, COVID-19 has certainly increased appetite among executives who are trying to avoid shutdowns, and expedited the potential for future adoption.

FST: What sanitation procedures should be in place to minimize the spread of pathogens and viruses?

Edgett: In the post-COVID-19 era, manufacturers must expand their view of sanitation requirements. It is about more than whether the processing equipment is clean. Companies must be diligent and critical of themselves at every juncture, especially when it comes to how staff and equipment are utilized.

While working from home wasn't a common practice in the manufacturing industry prior to March 2020, it will be increasingly popular moving forward. Such a setup will allow for a less congested workplace, as well as more space and time for bolstered sanitation practices. Now and in the future, third-party cleaning crews will be used onsite and for machinery on a daily basis, with many corporations also experimenting with new ways to maintain the highest cleanliness standards.

This includes the potential for UV sterilization (a tactic that is being experimented with across industries), new ways to sterilize airflow (which is particularly important in meatpacking plants, where stagnant air is the enemy) and the inclusion of robotics (which could be used overnight to avoid overlap with human employees). These all have the potential to minimize the spread of pathogens and, ultimately, all viruses that may arise.

FST: How is the food industry adjusting to the remote working environment?

Edgett: While the pandemic has changed the ways businesses and employees work across most industries, F&B manufacturers did face some unique challenges in shifting to a remote working environment.

Manufacturing as a whole has always relied on the work of humans overseeing systems, machinery, and technology to finalize production, but COVID-19 has changed who, and how many people, can be present in a plant at once. Naturally, at the start of the pandemic, this meant that schedules and shifts had to be altered, and certain portions of managerial oversight had to be completed virtually.

Of course, with employee and consumer safety of paramount concern, cleaning crews and sanitation practices have taken precedence and have been woven effectively and efficiently into altered schedules.

While workers who are essential to the manufacturing process have continued to work in many facilities, there will likely be expanded and extended work-from-home policies for other functions within the F&B manufacturing industry moving forward. This will require companies to embrace technology that can support this work environment.

FST: Can you briefly explain how traceability is playing an even larger role during the pandemic?

Edgett: The importance of complete traceability for food manufacturers has never been greater. While traceability is by no means a new concept, COVID-19 has not only made it the number one purchasing decision for your customers, but [it is also] a vital public health consideration.

The good news is that much of the industry recognizes this. In fact, according to a survey conducted by Sage and IDC, manufacturing executives said a key goal of theirs is to achieve 100% traceability over production and supply chain, which serves as a large part of their holistic digital mission.

Traceability was already a critical concern for most manufacturers, especially those with a younger customer base. However, the current environment has shone an even greater spotlight on the importance of having a complete picture of not only where our food comes from, but [also] the facilities and machinery used in its production. Major budget allocations will surely be directed toward traceability over the next 5-10 years.


Original post:

COVID-19 Leads Food Companies and Meat Processors to Explore AI and Robotics, Emphasize Sanitation, and Work from Home - FoodSafetyTech

AI will transform the way online education works: IIT Madras professor Dr Balaraman Ravindran – EdexLive

With an increased dependence on technology after COVID-19, the time has never been riper for some disruption. With a firm eye on the future is IIT Madras, with much help from its Robert Bosch Centre for Data Science and Artificial Intelligence. The institute recently launched an online BSc programme in Data Science and Programming, only highlighting how much it prioritises the field. "Faculty from ten departments of IIT Madras are part of the Robert Bosch Centre, including from several engineering departments like Computer Science, Civil, Chemical and Mechanical, as well as Mathematics, Management Studies, Biotechnology and even Humanities," Dr Balaraman Ravindran, Head, Robert Bosch Centre for Data Science and Artificial Intelligence, told Edex.

Speaking about the importance of Data Science and Artificial Intelligence in a post-COVID world, Ravindran says, "Artificial Intelligence (AI) has been used by companies like Google to personalise and enhance the performance and user experience. However, the use of AI has been impacted in the logistics and e-commerce industry but they are figuring out a way to get around the hurdle."

"But, at the same time, the use of AI has boomed in education and has the scope of improving further. AI can revolutionise online education by customising the feed, using Augmented Reality to enhance teaching aid," says Ravindran. However, he says this is easier to do in schools rather than national level institutes like IITs. "In schools, the students are clustered in the same area but we have students from all over the country and we can't assume that they have the same kind of connectivity," he adds. But, he believes that AI can do a lot more in the education sector as the dependence on online learning increases.

Ravindran adds that AI is driving research for new COVID drugs, and that the work is progressing faster because of the technology. "If we had been following the same methods we were following some 20-25 years ago, then we wouldn't have been able to come up with such swift results," says Ravindran. So, should students consider applying for courses in Data Science and Artificial Intelligence? "Certainly," says Ravindran, adding, "As we continue to go online, more and more data gets digitised and so the role of Data Science and AI increases. Evaluation can be done easily, and a person's productivity can be tracked much more easily."

He feels that after a short period of downturn, the demand and need for Data Scientists will increase within a year. "We are generating huge volumes of data right now and there will be a need for someone to make sense of it. Moreover, people are tracked more closely than ever before. The world is worried about the next pandemic and people will be watched more closely. Soon, we will reach a critical point with this data generation and then new job profiles for Data Scientists and Analysts will open up," explains Ravindran.

IIT Madras offers a dual degree in Data Science, with students graduating with an MTech degree, and several of these students choose to associate with the centre while working on their projects, says Ravindran. "The centre also has an Associate Researcher programme, where faculty from other IITs, especially the newer ones, can work as an Associate Researcher at the Robert Bosch Centre. They can visit the campus for up to six weeks a year, during which they can conduct lectures and workshops with the students, work on projects and so on. We have had faculty come from IIT Tirupati, IIT Palakkad, IIT Guwahati, among others," says Ravindran. This interdisciplinary centre, initially founded by the institute in 2017, is now funded by the CSR initiative of Robert Bosch.

Visit link:

AI will transform the way online education works: IIT Madras professor Dr Balaraman Ravindran - EdexLive

AI researchers create testing tool to find bugs in NLP from Amazon, Google, and Microsoft – VentureBeat

AI researchers have created a language-model testing tool that has discovered major bugs in commercially available cloud AI offerings from Amazon, Google, and Microsoft. Yesterday, a paper detailing the CheckList tool received the Best Paper award from organizers of the Association for Computational Linguistics (ACL) conference. The ACL conference, which took place online this week, is one of the largest annual gatherings for researchers creating language models.

NLP models today are often evaluated based on how they perform on a series of individual tasks, such as answering questions using benchmark data sets with leaderboards like GLUE. CheckList instead takes a task-agnostic approach, allowing people to create tests that fill in cells in a spreadsheet-like matrix with capabilities (in rows) and test types (in columns), along with visualizations and other resources.

Analysis with CheckList found that about one in four sentiment analysis predictions by Amazon's Comprehend changes when a random shortened URL or Twitter handle is placed in the text, and that both Google Cloud's Natural Language and Amazon's Comprehend make mistakes when the names of people or locations are changed in the text.
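The perturbation idea behind that finding is easy to reproduce outside the published tool. Below is a minimal sketch in the spirit of a CheckList invariance (INV) test; the toy `predict` classifier, the cue words, and the example URL are illustrative assumptions, not taken from the paper or from the real CheckList library:

```python
# Toy invariance (INV) test: a classifier's prediction should not
# change when an irrelevant perturbation is applied to the input.

def predict(text):
    # Stand-in sentiment classifier: counts simple cue words.
    pos = sum(w in text.lower() for w in ("good", "great", "love"))
    neg = sum(w in text.lower() for w in ("bad", "awful", "hate"))
    return "positive" if pos >= neg else "negative"

def add_url(text):
    # Perturbation: append a (hypothetical) shortened URL.
    return text + " https://t.co/abc123"

def invariance_failure_rate(model, texts, perturb):
    # Fraction of inputs whose label flips under the perturbation.
    flips = sum(model(t) != model(perturb(t)) for t in texts)
    return flips / len(texts)

texts = ["I love this airline", "The food was awful", "Great seats, bad wifi"]
```

Run against a commercial API instead of the toy `predict`, a nonzero failure rate on such a perturbation is exactly the kind of bug the paper reports.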

"The [sentiment analysis] failure rate is near 100% for all commercial models when the negation comes at the end of the sentence (e.g. 'I thought the plane would be awful, but it wasn't'), or with neutral content between the negation and the sentiment-laden word," the paper reads.

CheckList also found shortcomings in models paraphrasing responses to Quora questions, despite their surpassing human accuracy on a Quora Question Pairs benchmark challenge. The creators of CheckList, from Microsoft, the University of Washington, and the University of California, Irvine, say the results indicate that the approach can improve any existing NLP model.

"While traditional benchmarks indicate that models on these tasks are as accurate as humans, CheckList reveals a variety of severe bugs, where commercial and research models do not effectively handle basic linguistic phenomena such as negation, named entities, coreferences, semantic role labeling, etc., as they pertain to each task," the paper reads. NLP practitioners with CheckList created twice as many tests, and found almost three times as many bugs, as users without it.

Google's BERT and Facebook AI's RoBERTa were also evaluated using CheckList. The authors said BERT exhibited gender bias in machine comprehension, overwhelmingly predicting men as doctors, for example. BERT was also found to always make positive predictions about people who are straight or Asian and negative predictions when dealing with text about people who are atheist, Black, gay, or lesbian. An analysis in early 2020 also found systemic bias among large-scale language models.

In recent months, some of the largest Transformer-based language models yet devised have come into being, from Nvidia's Megatron to Microsoft's Turing NLG. Large language models have racked up impressive scores on particular tasks. But some NLP researchers argue that a focus on human-level performance on individual tasks ignores ways in which NLP systems are still brittle or less than robust.

In a use-case test with the Microsoft team in charge of Text Analytics, whose model is currently in use by customers and has gone through multiple evaluations, CheckList found previously unknown bugs. The Microsoft team will now use CheckList as part of its workflow when evaluating NLP systems. A group of people from industry and academia testing AI with the tool over the span of two hours were also able to discover inaccuracies or bugs in state-of-the-art NLP models. An open source version of CheckList is available on GitHub.

Sometimes referred to as black-box testing, behavioral testing is an approach common in software engineering but not in AI. CheckList can run tests in areas like sentiment analysis, machine comprehension, and duplicate question detection, and it can probe capabilities like robustness, fairness, and logic across these three kinds of tasks.

The authors are unequivocal in their conclusion that benchmark tasks alone are not sufficient for evaluating NLP models, but they also say that CheckList should complement, not replace, existing challenges and benchmark data sets used for measuring performance of language models.

"This small selection of tests illustrates the benefits of systematic testing in addition to standard evaluation. These tasks may be considered solved based on benchmark accuracy results, but the tests highlight various areas of improvement, in particular, failure to demonstrate basic skills that are de facto needs for the task at hand," the paper reads.

Other noteworthy work at ACL includes research by University of Washington professor Emily Bender and Saarland University professor Alexander Koller that won the best theme award. The paper argues that progress on large neural network NLP models such as GPT-3 or BERT derivatives is laudable, but that members of the media and academia should not refer to large neural networks as capable of understanding or comprehension, and that clarity and humility are needed in the NLP field when defining ideas like meaning or understanding.

"While large neural language models may well end up being important components of an eventual full-scale solution to human-analogous natural language understanding, they are not nearly-there solutions to this grand challenge," the report reads.

Finally, a system from the U.S. Army Research Lab, University of Illinois, Urbana-Champaign, and Columbia University won the Best Demo paper award for its system named GAIA, which allows for text queries of multimedia like photos and videos.

Read more:

AI researchers create testing tool to find bugs in NLP from Amazon, Google, and Microsoft - VentureBeat

We Need a Plan for When AI Becomes Smarter Than Us – Futurism

In Brief: There will come a time when artificial intelligence systems are smarter than humans. When this time comes, we will need to build more AI systems to monitor and improve current systems. This will lead to a cycle of AI creating better AI, with little to no human involvement.

When Apple released its software application Siri in 2011, iPhone users had high expectations for their intelligent personal assistants. Yet despite its impressive and growing capabilities, Siri often makes mistakes. The software's imperfections highlight the clear limitations of current AI: today's machine intelligence can't understand the varied and changing needs and preferences of human life.

However, as artificial intelligence advances, experts believe that intelligent machines will eventually, and probably soon, understand the world better than humans. While it might be easy to understand how or why Siri makes a mistake, figuring out why a superintelligent AI made the decision it did will be much more challenging.

If humans cannot understand and evaluate these machines, how will they control them?

Paul Christiano, a Ph.D. student in computer science at UC Berkeley, has been working on addressing this problem. He believes that to ensure safe and beneficial AI, researchers and operators must learn to measure how well intelligent machines do what humans want, even as these machines surpass human intelligence.

The most obvious way to supervise the development of an AI system also happens to be the hard way. As Christiano explains: "One way humans can communicate what they want is by spending a lot of time digging down on some small decision that was made [by an AI], and trying to evaluate how good that decision was."

But while this is theoretically possible, human researchers would never have the time or resources to evaluate every decision the AI made. "If you want to make a good evaluation, you could spend several hours analyzing a decision that the machine made in one second," says Christiano.

For example, suppose an amateur chess player wants to understand a better chess player's previous move. Merely spending a few minutes evaluating this move won't be enough, but if she spends a few hours she could consider every alternative and develop a meaningful understanding of the better player's moves.

Fortunately for researchers, they don't need to evaluate every decision an AI makes in order to be confident in its behavior. Instead, researchers can choose the machine's most interesting and informative decisions, "where getting feedback would most reduce our uncertainty," Christiano explains.

"Say your phone pinged you about a calendar event while you were on a phone call," he elaborates. "That event is not analogous to anything else it has done before, so it's not sure whether it is good or bad." Due to this uncertainty, the phone would send the transcript of its decisions to an evaluator at Google, for example. The evaluator would study the transcript, ask the phone owner how he felt about the ping, and determine whether pinging users during phone calls is a desirable or undesirable action. By providing this feedback, Google teaches the phone when it should interrupt users in the future.
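Sending only the most uncertain decisions for human review is standard uncertainty sampling from active learning. A minimal sketch, where the action names and probability values are illustrative assumptions:

```python
import math

def entropy(probs):
    # Shannon entropy (in bits) of a predicted outcome distribution;
    # higher entropy means the system is less sure of the outcome.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def select_for_review(decisions, k=1):
    # Pick the k decisions whose predicted outcome is most uncertain;
    # these are the ones worth a human evaluator's limited time.
    return sorted(decisions, key=lambda d: entropy(d["probs"]), reverse=True)[:k]

decisions = [
    {"action": "ping during call", "probs": [0.5, 0.5]},    # highly uncertain
    {"action": "silence at night", "probs": [0.99, 0.01]},  # confident
]
```

Here `select_for_review(decisions)` surfaces the calendar-ping decision, matching the intuition that the phone escalates exactly the cases it has no precedent for.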

This active learning process is an efficient method for humans to train AIs, but what happens when humans need to evaluate AIs that exceed human intelligence?

Consider a computer that is mastering chess. How could a human give appropriate feedback to the computer if the human has not mastered chess? The human might criticize a move that the computer makes, only to realize later that the machine was correct.

With increasingly intelligent phones and computers, a similar problem is bound to occur. Eventually, Christiano explains, "we need to handle the case where AI systems surpass human performance at basically everything."

If a phone knows much more about the world than its human evaluators, then the evaluators cannot trust their human judgment. They will need to "enlist the help of more AI systems," Christiano explains.

When a phone pings a user while he is on a call, the user's reaction to this decision is crucial in determining whether the phone will interrupt users during future phone calls. But, as Christiano argues, if a more advanced machine is much better than human users at understanding the consequences of interruptions, then it might be a bad idea to just ask the human, "Should the phone have interrupted you right then?" The human might express annoyance at the interruption, but the machine might know better, understanding that this annoyance was necessary to keep the user's life running smoothly.

In these situations, Christiano proposes that human evaluators use other intelligent machines to do the grunt work of evaluating an AI's decisions. In practice, a less capable System 1 would be in charge of evaluating the more capable System 2. Even though System 2 is smarter, System 1 can process a large amount of information quickly, and can understand how System 2 should revise its behavior. The human trainers would still provide input and oversee the process, but their role would be limited.
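This division of labor can be caricatured in a few lines: a cheap evaluator scores every decision of the stronger system, while humans audit only a small sample of the evaluator's own verdicts. Everything below, including the function names and the 90% accuracy figure, is an invented toy illustration, not Christiano's actual proposal:

```python
import random

def system2_decide(task):
    # Stand-in for the stronger System 2: usually right, sometimes not.
    return task["best"] if random.random() < 0.9 else task["worst"]

def system1_evaluate(task, answer):
    # Cheaper System 1: a fast, approximate check of System 2's output.
    return 1.0 if answer == task["best"] else 0.0

def evaluate_with_audit(tasks, audit_fraction=0.1):
    # System 1 scores every decision; humans spot-check only a sample
    # of System 1's judgments rather than every decision themselves.
    scores, audited = [], []
    for task in tasks:
        answer = system2_decide(task)
        score = system1_evaluate(task, answer)
        scores.append(score)
        if random.random() < audit_fraction:
            audited.append((task, answer, score))
    return sum(scores) / len(scores), audited
```

The point of the sketch is the shape of the loop, not the numbers: the human workload scales with `audit_fraction`, not with the number of decisions System 2 makes.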

This training process would help Google understand how to create a safer and more intelligent AI, System 3, which the human researchers could then train using System 2.

Christiano explains that these intelligent machines would be like little agents that carry out tasks for humans. Siri already has this limited ability to take human input and figure out what the human wants, but as AI technology advances, machines will learn to carry out complex tasks that humans cannot fully understand.

As Google and other tech companies continue to improve their intelligent machines with each evaluation, the human trainers will fulfill a smaller role. Eventually, Christiano explains, "it's effectively just one machine evaluating another machine's behavior."

"Ideally, each time you build a more powerful machine, it effectively models human values and does what humans would like," says Christiano. But he worries that these machines may stray from human values as they surpass human intelligence. To put this in human terms: a complex intelligent machine would resemble a large organization of humans. If the organization does tasks that are too complex for any individual human to understand, it may pursue goals that humans wouldn't like.

In order to address these control issues, Christiano is working on an end-to-end description of this machine learning process, fleshing out key technical problems that seem most relevant. His research will help bolster the understanding of how humans can use AI systems to evaluate the behavior of more advanced AI systems. If his work succeeds, it will be a significant step in building trustworthy artificial intelligence.

You can learn more about Paul Christiano's work here.

View original post here:

We Need a Plan for When AI Becomes Smarter Than Us - Futurism

How AI fights the war against fake news – Fox News

A three-headed alien is wandering around Central Park right now. If you believe that, you might be susceptible to a fake news story. Artificial Intelligence technology, however, could be a vital weapon in the war on fake news, according to cybersecurity companies.

Popular during the last election but still prevalent on Facebook and other social media channels, fake news stories make wild claims, tend to exist only on a handful of minor news sites, and can be difficult to verify.

Yet, artificial intelligence could help us all weed out the good from the bad.

Experts tell Fox News that machine learning, natural language processing, semantic identification, and other techniques could at least provide a clue about authenticity.

NEW $27 MILLION FUND AIMS TO SAVE HUMANITY FROM DESTRUCTIVE AI

Catherine Lu, a product manager at fraud detection company DataVisor, says AI could detect the semantic meaning behind a web article. Here's one example: with the three-headed alien, a natural language processing (NLP) engine could look at the headline, the subject of the story, the geo-location, and the main body text. An AI could determine whether other sites are reporting the same facts, and it could weigh the facts against established media sources.

"The New York Times is probably a more reputable source than an unknown, poorly designed website," Lu told Fox News. "A machine learning model can be trained to predict the reputation of a website, taking into account features such as the Alexa web rank and the domain name (for example, a .com domain is less suspicious than a .web domain)."
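Lu's feature list maps directly onto a standard classifier setup. A hand-weighted sketch follows; the weights, thresholds, and example sites are invented for illustration, whereas a real model (e.g. logistic regression) would learn the weights from labeled reputable and disreputable sites:

```python
# Invented weights; a trained model would learn these from labeled data.
WEIGHTS = {"low_traffic": 0.6, "odd_tld": 0.4}

def reputation_features(site):
    # Features like the ones Lu mentions: traffic rank and domain TLD.
    return {
        "low_traffic": 1.0 if site["alexa_rank"] > 10_000 else 0.0,
        "odd_tld": 0.0 if site["domain"].endswith((".com", ".org")) else 1.0,
    }

def suspicion_score(site):
    # Linear score in [0, 1]; higher means more suspicious.
    f = reputation_features(site)
    return sum(WEIGHTS[k] * f[k] for k in WEIGHTS)

established = {"domain": "nytimes.com", "alexa_rank": 60}
shady = {"domain": "truth-news.xyz", "alexa_rank": 900_000}
```

With these toy features, the high-traffic .com site scores lower than the obscure .xyz site, which is the ordering a learned reputation model would be trained to produce.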

Ertunga Arsal, the CEO of German cybersecurity company ESNC, tells Fox News that an AI has an advantage in detecting fake news because of the extremely large data set -- billions of websites all over the world. Also, the purveyors of fake news are fairly predictable.

One example he mentioned is that many fake news sites register for a Google AdSense account (using terms like "election") and then start posting the fake news, since one of the primary goals is to get people to click and then collect the ad revenue.

WHITE HOUSE: WE'RE RESEARCHING AI, BUT DONT WORRY ABOUT KILLER ROBOTS

An AI could use keyword analytics to discover and flag sensational words often used in fake news headlines, he said, noting that the number of fake news stories will only increase, similar to the rise of spam, and that the time to do something about it is now.
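That kind of keyword analytics can be sketched as a simple headline score. The cue-word list and the threshold below are illustrative assumptions, not from any deployed system:

```python
# Hypothetical cue words; a real system would mine these from
# a labeled corpus of fake and legitimate headlines.
SENSATIONAL = {"shocking", "miracle", "exposed", "secret", "banned"}

def sensationalism_score(headline):
    # Fraction of known sensational cue words present in the headline.
    words = set(headline.lower().split())
    return len(words & SENSATIONAL) / len(SENSATIONAL)

def flag_headline(headline, threshold=0.2):
    # Flag for human review when enough cues appear; a production
    # system would combine this with source reputation and checks
    # of whether other sites report the same facts.
    return sensationalism_score(headline) >= threshold
```

A keyword score alone is easy to game, which is why the experts quoted here treat it as one signal among several rather than a verdict.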

Dr. Pradeep Atrey from the University at Albany has already conducted research on semantic processing to detect the authenticity of news sites. He tells Fox News a similar approach could be used to detect fake news. For example, an algorithm could rate sites based on a reward and punishment system. Less popular sites would be rated as less trustworthy.

"There are methods that can be used to at least minimize, if not fully eradicate, fake news instances," he says. "It depends on how and to what extent we use such methods in practice."

Unfortunately, according to Dr. Atrey, many people don't take the extra step to verify the authenticity of news sites to determine trustworthiness. An AI could identify a site as fake and pop up a warning to proceed with caution, similar to how malware detection works.

REALDOLL BUILDS ARTIFICIALLY INTELLIGENT SEX ROBOTS WITH PROGRAMMABLE PERSONALITIES

Not everyone is on board with using an AI to detect fake news, however.

Paul Shomo, a Senior Technical Manager at security firm Guidance Software, tells Fox News that fake news producers could figure out how to get around the AI algorithms. He says it's a little scary to think an AI might mislabel a real news story as fake (known as a false positive).

Book author Darren Campo of the NYU Stern School of Business says fake news is primarily about an emotional response. He says people won't care if an AI has identified news as fake. What they often care about is whether the news matches up with their own worldview.

"Fake news protects itself by embedding a fact in terms that can be defended," he tells Fox News. "While artificial intelligence can identify a fact as incorrect, the AI cannot comprehend the context in which people enjoy believing a lie."

That's at least good news for the three-headed alien.

Read the original post:

How AI fights the war against fake news - Fox News

There’s No Turning Back on AI in the Military – WIRED

For countless Americans, the United States military epitomizes nonpareil technological advantage. Thankfully, in many cases, we live up to it.

But our present digital reality is quite different, even sobering. Fighting terrorists for nearly 20 years after 9/11, we remained a flip-phone military in what is now a smartphone world. Infrastructure to support a robust digital force remains painfully absent. Consequently, service members lead personal lives digitally connected to almost everything and military lives connected to almost nothing. Imagine having some of the world's best hardware (stealth fighters or space planes) supported by the world's worst data plan.

Meanwhile, the accelerating global information age remains dizzying. The year 2020 is on track to produce 59 zettabytes of data. That's a one with 21 zeroes after it, over 50 times the number of stars in the observable universe. On average, every person online contributes 1.7 megabytes of content per second, and counting. Taglines like "data is the new oil" emphasize the economic import, but not its full potential. "Data is more" reverently captures its ever-evolving, artificially intelligent future.

WIRED OPINION

ABOUT

Will Roper is the Air Force and Space Force acquisition executive.

The rise of artificial intelligence has come a long way since 1945, when visionary mathematician Alan Turing hypothesized that machines would one day perform intelligent functions, like playing chess. Aided by meteoric advances in data processing (a million-billion-fold over the past 70 years), Turing's vision was achieved only 52 years later, when IBM's Deep Blue defeated the reigning world chess champion, Garry Kasparov, with select moves described as "almost human." But this impressive feat would be dwarfed in 2016, when Google's AlphaGo shocked the world with a beyond-human, even "beautiful" move on its way to defeating 18-time world Go champion Lee Sedol. That now-famous move 37 of game two was the death knell of human preeminence in strategy games. Machines now teach the world's elite how to play.

China took more notice of this than usual. We've become frustratingly accustomed to China copying or stealing US military secrets; two decades of post-9/11 operations provided a lot of time to watch and learn. But China's ambitions far outstrip merely copying or surpassing our military. AlphaGo's victory was a Sputnik moment for the Chinese Communist Party, triggering its own NASA-like response: a national Mega-Project in AI. Though there is no moon in this digital space race, its giant leap may be the next industrial revolution. The synergy of 5G and cloud-to-edge AI could radically evolve the internet of things, enabling ubiquitous AI and all the economic and military advantages it could bestow. It's not just our military that needs digital urgency: our nation must wake up fast. The only thing worse than fearing AI itself is fearing not having it.

There is a gleam of hope. The Air Force and Space Force had their own move 37 moment last month during the first AI-enabled shoot-down of a cruise missile at blistering machine speeds. Though happening in a literal flash, this watershed event was seven years in the making, integrating technologies as diverse as hypervelocity guns, fighters, computing clouds, virtual reality, 4G LTE and 5G, and even Project Maven, the Pentagon's first AI initiative. In the blink of a digital eye, we birthed an internet of military things.

Working at unprecedented speeds (at least for the Pentagon), the Air Force and Space Force are expanding this IoT.mil across the military, and not a moment too soon. With AI surpassing human performance in more than just chess and Go, traditional roles in warfare are not far behind. "Whose AI will overtake them?" is an operative question in the digital space race. Another is how our military finally got off the launch pad.

More than seven years ago, I spearheaded the development of hypervelocity guns to defeat missile attacks with low-cost, rapid-fire projectiles. I also launched Project Maven to pursue machine-speed targeting of potential threats. But with no defense plug-n-play infrastructure, these systems remained stuck in airplane mode. The Air Force and Space Force later offered me the much-needed chance to create that digital infrastructure (cloud, software platforms, enterprise data, even coding skills) from the ground up. We had to become a good software company to become a software-enabled force.

Read more here:

There's No Turning Back on AI in the Military - WIRED

Volkswagen partners with Nvidia to expand its use of AI beyond … – TechCrunch

Volkswagen is working with Nvidia to expand its use of Nvidia's artificial intelligence and deep learning technologies beyond autonomous vehicles and into other areas of its business, the two companies revealed today.

VW set up its Munich-based data lab in 2014. Last year it pushed on with the hiring of Prof. Patrick van der Smagt to lead a dedicated AI team tasked with taking the technology into areas such as robotic enterprise, that is, the use of the technology in enterprise settings.

That's the backdrop to today's partnership announcement. VW wants to use AI and deep learning to power new opportunities within its corporate business functions and, more widely, in the field of mobility services. As an example, the German carmaker said it is working on procedures to help optimize traffic flow in cities and urban areas, while it sees the potential for intelligent human-robot collaboration, too.

"Artificial intelligence is the key to the digital future of the Volkswagen Group. We want to develop and deploy high-performance AI systems ourselves. This is why we are expanding the expert knowledge required. Cooperation with NVIDIA will be a major step in this direction," Dr. Martin Hofmann, CIO of the Volkswagen Group, said in a statement.

Beyond the work on VW's own brands, the carmaker and Nvidia are teaming up to help other startups in the automotive space. The VW Data Lab is opening a startup support program, specializing in machine learning and deep learning, with Nvidia's help. The first batch, starting this fall, will include five startups. The duo is also reaching out to students with a Summer of Code camp that will begin soon.

Nvidia is already working with VW-owned Audi on self-driving cars, which they are aiming to bring to market by 2020, but today's announcement is purely about the data potential and not vehicles themselves. VW did ink an agreement earlier this year to work with Nvidia to develop AI-cockpit services for its 12 automotive brands, but it is also working with rival chip firm Qualcomm on connected cars and smart in-car systems.

This VW hookup is one part of a triple dose of automotive-themed news updates from Nvidia today.

Separately, it announced that Volvo and Autoliv have committed to sell self-driving cars powered by its technology by 2021. Nvidia also signed up auto suppliers ZF and Hella to build additional safety standards into its autonomous vehicle platform.

Read more:

Volkswagen partners with Nvidia to expand its use of AI beyond ... - TechCrunch

Mighty AI and the Human Army Using Phones to Teach AI to Drive … – WIRED


Visit link:

Mighty AI and the Human Army Using Phones to Teach AI to Drive ... - WIRED

The all-knowing AI for your email – VentureBeat

You may already know this if you have ever sent me an email and waited for a response, but I'm living in a post-apocalyptic world where email is not as viable anymore.

You can understand why. I've been processing email since the '90s and now have 650,000 emails archived in my latest Gmail account. I receive hundreds of pitches per day, many of them (thankfully) flagged as promotional and dumped into a forgotten tab.

I'm mostly on Slack and Convo all day, chatting and posting in real time. I also use Facebook chat constantly, text with colleagues, and use pretty much any means possible to avoid the deluge of incoming email. It's not exactly a fear or a phobia, but it's heading in that direction.

Last fall, I wrote about using a chatbot instead of email. I still want one, so get busy on that idea, OK? For now, another option, maybe even a better one, is an AI for email.

Here's how this would work. For starters, my AI would know much more than Google Inbox on my phone (an app I stopped using a few weeks ago because it wasn't really helping and happens to crash constantly on an iPhone 7 Plus). I'm not talking about automation, about flagging messages, or an auto-responder. True AI in my email would know a lot about me: which messages I usually read and from whom, whether I tend to respond to messages about new car technology (that's a yes), and which messages I let sit idle.

This AI would also know a lot about the sender. Similar to the Rapportive add-on, it would instantly identify influencers and people who have written intelligently about a topic that's of interest to me, and even be able to parse their message and determine whether the person knows what they're talking about. In a recent discussion with a colleague here at VentureBeat, we noted how it can be pretty obvious when someone is just getting into technology. A Twitter account that's only a year old? That doesn't seem right. An AI would know all of that about a sender.
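The account-age hunch above could be sketched as a toy credibility heuristic. Everything here is hypothetical (the signals, weights, and caps are illustrative, not from any real product), but it shows how a few sender facts could be combined into a single score:

```python
from datetime import date

def credibility_score(account_created: date, topical_posts: int, today: date) -> float:
    """Rough heuristic: long-lived accounts with a real posting history
    on the topic score higher. All weights are illustrative."""
    age_years = (today - account_created).days / 365.0
    age_score = min(age_years / 5.0, 1.0)        # caps out at 5 years
    history_score = min(topical_posts / 50.0, 1.0)  # caps out at 50 posts
    return 0.5 * age_score + 0.5 * history_score

# A seven-year-old account with a deep posting history scores near 1.0;
# a year-old account with a handful of posts scores near 0.15.
```

A real system would feed in many more signals (mutual contacts, past reply rates, published writing), but the shape is the same: cheap per-sender features rolled up into one number the inbox can sort on.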

And how about prioritizing? I'd like to get to work each day and process about 10 emails. The rest would be flagged, sorted, put into a bin, labeled, or discarded. The AI would not only respond to the low-priority emails, it could carry on a discussion for me. It would act like an avatar and handle all of the boring bits. I'd only see the messages that are important, urgent, or interesting.
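The triage step described here can be sketched as a small scoring-and-splitting routine. This is a minimal illustration, not anyone's shipping product: the sender list, keyword weights, and 60/40 blend are all made-up placeholders standing in for what a real system would learn from reading history:

```python
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

# Hypothetical learned signals: who I usually read, and which topics I reply to.
KNOWN_SENDERS = {"colleague@example.com": 0.9}
INTEREST_KEYWORDS = {"car technology": 0.8, "chatbot": 0.6}

def priority_score(msg: Email) -> float:
    """Blend sender familiarity and topic interest into a 0-1 score."""
    sender_score = KNOWN_SENDERS.get(msg.sender, 0.1)
    text = (msg.subject + " " + msg.body).lower()
    topic_score = max(
        (w for kw, w in INTEREST_KEYWORDS.items() if kw in text), default=0.0
    )
    return 0.6 * sender_score + 0.4 * topic_score

def triage(inbox: list[Email], keep: int = 10) -> tuple[list[Email], list[Email]]:
    """Surface the top `keep` messages; everything else goes to the AI's bin."""
    ranked = sorted(inbox, key=priority_score, reverse=True)
    return ranked[:keep], ranked[keep:]
```

The `keep=10` default mirrors the "about 10 emails a day" goal: the first list is what the human sees, and the second is what the avatar answers, labels, or discards on its own.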

Too many email tools, like the now-defunct Mailbox app and the Boomerang delayed-response add-on for Gmail (even though I use it myself), are designed to help you automate. I want the opposite. I want the AI to automate me. In other words, if we have to do all of the busy work of flagging and clicking a button to send a canned response, it means more work.

What does less work look like? A screen with 10 emails per day. Everything else would cruise along automatically, like a Tesla Model S on the highway set to auto-pilot mode. The steering (replying to promotional emails), braking (weeding out the fluff), acceleration (reading and parsing messages to determine influence), lane keeping (carrying on a conversation as though it's me), and every other automation would happen without my knowledge or concern.

If you're already building this, we want to know about it. Send me your pitch. If you have more ideas on how an AI would work for email, please send me a note. I want to do a follow-up and include your ideas. If you want to promote a product, though, wait until the AI is operational.
