David Graves to Head New Research at PPPL for Plasma Applications in Industry and Quantum Information Science – Quantaneo, the Quantum Computing…

Graves, a professor at the University of California, Berkeley, since 1986, is an expert in plasma applications in semiconductor manufacturing. He will become the Princeton Plasma Physics Laboratory's (PPPL) first associate laboratory director for Low-Temperature Plasma Surface Interactions, effective June 1. He will likely begin his new position from his home in Lafayette, California, in the East Bay region of San Francisco.

He will lead a collaborative research effort to not only understand and measure how plasma is used in the manufacture of computer chips, but also to explore how plasma could be used to help fabricate powerful quantum computing devices over the next decade.

"This is the apex of our thrust into becoming a multipurpose lab," said Steve Cowley, PPPL director, who recruited Graves. "Working with Princeton University, and with industry and the U.S. Department of Energy (DOE), we are going to make a big push to do research that will help us understand how you can manufacture at the scale of a nanometer." A nanometer, one-billionth of a meter, is tens of thousands of times thinner than the width of a human hair.

The new initiative will draw on PPPL's expertise in low-temperature plasmas, diagnostics, and modeling. At the same time, it will work closely with plasma semiconductor equipment industries and will collaborate with Princeton University experts in various departments, including chemical and biological engineering, electrical engineering, materials science, and physics. "In particular, collaborations with PRISM (the Princeton Institute for the Science and Technology of Materials) are planned," Cowley said. "I want to see us more tightly bound to the University in some areas because that way we get cross-fertilization," he said.

Graves will also have an appointment as professor in the Princeton University Department of Chemical and Biological Engineering, starting July 1. He is retiring from his position at Berkeley at the end of this semester. He is currently writing a book (Plasma Biology) on plasma applications in biology and medicine. He said he changed his retirement plans to take the position at PPPL and Princeton University. "This seemed like a great opportunity," Graves said. "There's a lot we can do at a national laboratory where there's bigger scale, world-class colleagues, powerful computers and other world-class facilities."

Exciting new direction for the Lab

Graves is already working with Jon Menard, PPPL deputy director for research, on the strategic plan for the new research initiative over the next five years. "It's a really exciting new direction for the Lab that will build upon our unique expertise in diagnosing and simulating low-temperature plasmas," Menard said. "It also brings us much closer to the university and industry, which is great for everyone."

The staff will grow over the next five years, and PPPL is recruiting for an expert in nano-fabrication and quantum devices. The first planned research would use converted PPPL laboratory space fitted with equipment provided by industry. Subsequent work would use laboratory space at PRISM on Princeton University's campus. In the longer term, researchers in the growing group would have brand new laboratory and office space as a central part of the Princeton Plasma Innovation Center (PPIC), a new building planned at PPPL.

Physicists Yevgeny Raitses, principal investigator for the Princeton Collaborative Low Temperature Plasma Research Facility (PCRF) and head of the Laboratory for Plasma Nanosynthesis, and Igor Kaganovich, co-principal investigator of PCRF, are both internationally known experts in low-temperature plasmas who have forged recent partnerships between PPPL and various industry partners. The new initiative builds on their work, Cowley said.

A priority research area

Research aimed at developing quantum information science (QIS) is a priority for the DOE. Quantum computers could be very powerful in solving complex scientific problems, including simulating quantum behavior in material or chemical systems. QIS could also have applications in quantum communication, especially in encryption, and quantum sensing. It could potentially have an impact in areas such as national security. A key question is whether plasma-based fabrication tools commonly used today will play a role in fabricating quantum devices in the future, Menard said. "There are huge implications in that area," Menard said. "We want to be part of that."

Graves is an expert on applying molecular dynamics simulations to low-temperature plasma-surface interactions. These simulations are used to understand how plasma-generated ions, atoms and molecules interact with various surfaces. He has extensive research experience in academia and industry in plasma-related semiconductor manufacturing. That expertise will be useful for understanding how to make very fine structures and circuits at the nanometer, sub-nanometer and even atom-by-atom level, Menard said. "David's going to bring a lot of modeling and fundamental understanding to that process. That, paired with our expertise and measurement capabilities, should make us unique in the U.S. in terms of what we can do in this area."
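To make the idea of such simulations concrete, here is a minimal sketch of classical molecular dynamics in the spirit of what the article describes: a single ion integrated toward a toy Lennard-Jones "surface" with the velocity-Verlet scheme. The potential, parameters, and surface geometry are invented for illustration; production plasma-surface codes use far richer many-body potentials and far larger systems.

```python
# Minimal velocity-Verlet molecular dynamics sketch: a single ion approaching a
# small, fixed Lennard-Jones "surface". Illustrative only; real plasma-surface
# simulations use many-body potentials, thermostats, and thousands of atoms.
import numpy as np

EPS, SIGMA = 1.0, 1.0          # Lennard-Jones well depth and length scale (reduced units)
DT, STEPS = 1e-3, 5000         # time step and number of integration steps

# Fixed surface atoms on a 3x3 grid at z = 0 (hypothetical toy surface)
xs, ys = np.meshgrid(np.arange(3.0), np.arange(3.0))
surface = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(9)])

def force(pos):
    """Total Lennard-Jones force on the ion from all surface atoms."""
    r_vec = pos - surface                  # (9, 3) displacement vectors
    r = np.linalg.norm(r_vec, axis=1)      # distances to each surface atom
    # -dU/dr for U = 4*eps*((sigma/r)^12 - (sigma/r)^6), applied along each r_vec
    mag = 24 * EPS * (2 * (SIGMA / r) ** 12 - (SIGMA / r) ** 6) / r ** 2
    return (mag[:, None] * r_vec).sum(axis=0)

# Ion starts above the surface moving downward (hypothetical initial conditions)
pos = np.array([1.0, 1.0, 5.0])
vel = np.array([0.0, 0.0, -1.0])
mass = 1.0

f = force(pos)
for step in range(STEPS):
    # Velocity-Verlet: advance positions, recompute forces, then advance velocities
    pos = pos + vel * DT + 0.5 * (f / mass) * DT ** 2
    f_new = force(pos)
    vel = vel + 0.5 * (f + f_new) / mass * DT
    f = f_new

print("final ion position:", pos, "final speed:", np.linalg.norm(vel))
```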

Graves was born in Daytona Beach, Florida, and moved a lot as a child because his father was in the U.S. Air Force. He lived in Homestead, Florida; near Kansas City, Missouri; and in North Bay, Ontario; and finished high school near Phoenix, Arizona.

Graves received bachelor's and master's degrees in chemical engineering from the University of Arizona and went on to pursue a doctoral degree in the subject, graduating with a Ph.D. from the University of Minnesota in 1986. He is a fellow of the Institute of Physics and the American Vacuum Society. He is the author or co-author of more than 280 peer-reviewed publications. During his long career at Berkeley, he has supervised 30 Ph.D. students and 26 post-doctoral students, many of whom are now in leadership positions in industry and academia.

A leader since the 1990s

Graves has been a leader in the use of plasma in the semiconductor industry since the 1990s. In 1996, he co-chaired a National Research Council (NRC) workshop and co-edited the NRC's Database Needs for Modeling and Simulation of Plasma Processing. In 2008, he performed a similar role for a DOE workshop on low-temperature plasma applications, resulting in the report Low Temperature Plasma Science Challenges for the Next Decade.

Graves is an admitted Francophile who speaks (near) fluent French and has spent long stretches of time in France as a researcher. He was named Maître de Recherche (master of research) at the École Polytechnique in Palaiseau, France, in 2006. He was an invited researcher at the University of Perpignan in 2010 and received a chaire d'excellence from the Nanoscience Foundation in Grenoble, France, to study plasma-graphene interactions.

He has received numerous honors during his career. He was appointed the first Lam Research Distinguished Chair in Semiconductor Processing at Berkeley for 2011-2016. More recently, he received the Will Allis Prize for the Study of Ionized Gases from the American Physical Society in 2014 and the 2017 Nishizawa Award, associated with the Dry Process Symposium in Japan. In 2019, he was appointed foreign expert at Huazhong University of Science and Technology in Wuhan, China. He served as the first senior editor of IEEE Transactions on Radiation and Plasma Medical Sciences.

Graves has been married for 35 years to Sue Graves, who recently retired from the City of Lafayette, where she worked in the school bus program. The couple has three adult children. Graves enjoys bicycling and yoga and the couple loves to travel. They also enjoy hiking, visiting museums, listening to jazz music, and going to the theater.

View original post here:
David Graves to Head New Research at PPPL for Plasma Applications in Industry and Quantum Information Science - Quantaneo, the Quantum Computing...

Could quantum machine learning hold the key to treating COVID-19? – Tech Wire Asia

Sundar Pichai, CEO of Alphabet, with one of Google's quantum computers. Source: AFP PHOTO / GOOGLE/HANDOUT

Scientific researchers are hard at work around the planet, feverishly crunching data using the world's most powerful supercomputers in the hopes of a speedier breakthrough in finding a vaccine for the novel coronavirus.

Researchers at Penn State University think that they have hit upon a solution that could greatly accelerate the process of discovering a COVID-19 treatment, employing an innovative hybrid branch of research known as quantum machine learning.

When it comes to a computer science-driven approach to identifying a cure, most methodologies harness machine learning to screen different compounds one at a time to see if they might bond with the virus' main protease, or protein.

This process is arduous and time-consuming, despite the fact that the most powerful computers were actually condensing years (maybe decades) of drug testing into less than two years' time. "Discovering any new drug that can cure a disease is like finding a needle in a haystack," said lead researcher Swaroop Ghosh, the Joseph R. and Janice M. Monkowski Career Development Assistant Professor of Electrical Engineering and Computer Science and Engineering at Penn State.

It is also incredibly expensive. Ghosh says the current pipeline for discovering new drugs can take between five and ten years from the concept stage to being released to the market, and could cost billions in the process.

"High-performance computing such as supercomputers and artificial intelligence (AI) can help accelerate this process by screening billions of chemical compounds quickly to find relevant drug candidates," he elaborated.

"This approach works when enough chemical compounds are available in the pipeline, but unfortunately this is not true for COVID-19. This project will explore quantum machine learning to unlock new capabilities in drug discovery by generating complex compounds quickly."

Quantum machine learning is an emerging field that combines elements of machine learning with quantum physics. Ghosh and his doctoral students had in the past developed a toolset for solving a specific set of problems known as combinatorial optimization problems, using quantum computing.

Drug discovery computation aligns with combinatorial optimization problems, allowing the researchers to tap the same toolset in the hopes of speeding up the process of discovering a cure, in a more cost-effective fashion.
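As a rough illustration of what it means to cast a drug-discovery question as a combinatorial optimization problem, the sketch below encodes a toy fragment-selection task as a QUBO (quadratic unconstrained binary optimization) and solves it by brute force. The scores, penalties, and classical brute-force solver are invented stand-ins, not Ghosh's actual toolset; a quantum or quantum-inspired solver would target the same kind of objective.

```python
# Minimal sketch of framing a compound-selection task as a QUBO, the kind of
# combinatorial problem a quantum solver could target. All numbers are invented;
# the exhaustive loop is a classical stand-in for the quantum toolset above.
import itertools
import numpy as np

# Hypothetical binding scores for 4 candidate fragments (higher = better)
scores = np.array([0.8, 0.3, 0.6, 0.9])
# Hypothetical pairwise incompatibility penalties between fragments
penalty = np.array([
    [0.0, 0.5, 0.0, 0.7],
    [0.5, 0.0, 0.2, 0.0],
    [0.0, 0.2, 0.0, 0.4],
    [0.7, 0.0, 0.4, 0.0],
])

def objective(x):
    """Total score of the chosen fragments minus incompatibility of chosen pairs."""
    x = np.asarray(x)
    return scores @ x - 0.5 * x @ penalty @ x

# Brute-force search over all 2^4 binary choices (a solver would do this smarter)
best = max(itertools.product([0, 1], repeat=4), key=objective)
print("best fragment subset:", best, "objective:", objective(best))
```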

"Artificial intelligence for drug discovery is a very new area," Ghosh said. "The biggest challenge is finding an unknown solution to the problem by using technologies that are still evolving, that is, quantum computing and quantum machine learning. We are excited about the prospects of quantum computing in addressing a current critical issue and contributing our bit in resolving this grave challenge."

Joe Devanesan | @thecrystalcrown

Joe's interest in tech began when, as a child, he first saw footage of the Apollo space missions. He still holds out hope to either see the first man on the moon, or Jetsons-style flying cars in his lifetime.

Read more:
Could quantum machine learning hold the key to treating COVID-19? - Tech Wire Asia

New Tool Could Pave the Way for Future Insights in Quantum Chemistry – AZoQuantum

Written by AZoQuantum, May 13, 2020

The amount of energy needed to make or disintegrate a molecule can now be calculated more accurately than traditional methods using a new machine learning tool. Although the new tool can only deal with simple molecules at present, it opens the door to gain future insights into quantum chemistry.

"Using machine learning to solve the fundamental equations governing quantum chemistry has been an open problem for several years, and there's a lot of excitement around it right now."

Giuseppe Carleo, Research Scientist, Center for Computational Quantum Physics, Flatiron Institute

Carleo, who is the co-creator of the tool, added that better insights into the formation and degradation of molecules could expose the inner workings of the chemical reactions crucial to life.

Carleo and his colleagues Kenny Choo from the University of Zurich and Antonio Mezzacapo from the IBM Thomas J. Watson Research Center in Yorktown Heights, New York, published their study in Nature Communications on May 12th, 2020.

The tool developed by the researchers predicts the energy required to put together or break apart a molecule, for example, ammonia or water. For this calculation, it is necessary to determine the electronic structure of the molecule, which comprises the collective behavior of the electrons binding the molecule together.

The electronic structure of a molecule is complex to find and requires determining all the possible states the electrons in the molecule could be in, along with the probability of each state.

Electrons interact and entangle quantum mechanically with each other. Therefore, researchers cannot treat them individually. More electrons lead to more entanglements, and thus the problem becomes exponentially more challenging.

There are no exact solutions for molecules more complex than the two electrons found in a pair of hydrogen atoms, and even approximations lose accuracy when more than a few electrons are involved.

One of the difficulties is that the electronic structure of a molecule includes states for an infinite number of orbitals that move further away from the atoms. Moreover, it is not easy to differentiate one electron from another, and the same state cannot be occupied by two electrons. The latter rule is the result of exchange symmetry, which governs the consequences when identical particles change states.

Mezzacapo and the team at IBM Quantum devised a technique for reducing the number of orbitals considered and enforcing exchange symmetry. This technique is based on approaches developed for quantum computing applications and renders the problem more analogous to scenarios in which electrons are restricted to predefined locations, for example, in a rigid lattice.

The problem was made more manageable by the similarity to rigid lattices. Earlier, Carleo had trained neural networks to model the behavior of electrons restricted to the sites of a lattice.

The researchers could propose solutions to Mezzacapo's compacted problems by extending those techniques. The neural network developed by the team calculates the probability for each state. This probability can be used to predict the energy of a specific state. The molecule is most stable at the lowest energy level, also called the equilibrium energy.
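To give a flavor of the variational idea described here, the sketch below uses a tiny fully connected network to assign an (unnormalized) amplitude to each basis state of an invented 4x4 Hamiltonian, lowers the Rayleigh quotient <psi|H|psi>/<psi|psi> by gradient descent, and compares the result with exact diagonalization. It is a toy stand-in under those assumptions, not the fermionic neural-network states of the actual paper.

```python
# Minimal variational sketch in the spirit of neural-network quantum states: a
# small network maps basis states to amplitudes, and gradient descent on the
# Rayleigh quotient pushes the ansatz toward the lowest-energy (equilibrium)
# state. The 4x4 Hamiltonian is an invented toy matrix, not an ab-initio one.
import numpy as np

rng = np.random.default_rng(0)

H = np.array([                      # toy Hermitian Hamiltonian over 4 basis states
    [-1.0,  0.2,  0.0,  0.1],
    [ 0.2, -0.5,  0.3,  0.0],
    [ 0.0,  0.3,  0.4,  0.2],
    [ 0.1,  0.0,  0.2,  0.8],
])

X = np.eye(4)                       # one-hot encodings of the 4 basis states

def energy(params):
    """Rayleigh quotient <psi|H|psi>/<psi|psi> for the network-defined state."""
    W1, W2 = params[:16].reshape(4, 4), params[16:]
    amps = np.tanh(X @ W1) @ W2     # unnormalized amplitude per basis state
    return amps @ H @ amps / (amps @ amps)

params = rng.normal(scale=0.1, size=20)
lr, eps = 0.1, 1e-5
for step in range(2000):
    grad = np.zeros_like(params)
    for i in range(params.size):    # finite-difference gradient (fine for a tiny toy)
        bump = np.zeros_like(params); bump[i] = eps
        grad[i] = (energy(params + bump) - energy(params - bump)) / (2 * eps)
    params -= lr * grad

print("variational energy:", energy(params))
print("exact ground state:", np.linalg.eigvalsh(H).min())
```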

Thanks to the innovations of the researchers, the electronic structure of a basic molecule can be calculated quickly and easily. To demonstrate the accuracy of their approach, the researchers estimated the amount of energy required to break apart real-world molecules and their bonds.

The researchers performed calculations for lithium hydride (LiH), dihydrogen (H2), water (H2O), ammonia (NH3), dinitrogen (N2), and diatomic carbon (C2). The researchers' estimates for all the molecules were found to be highly accurate, even in ranges where current methods struggle.

The aim of the researchers is to handle larger and more complex molecules by employing more advanced neural networks. One objective is to tackle chemicals such as those found in the nitrogen cycle, where nitrogen-based molecules are made and broken by biological processes to render them usable for life.

"We want this to be a tool that could be used by chemists to process these problems."

Giuseppe Carleo, Research Scientist, Center for Computational Quantum Physics, Flatiron Institute

Carleo, Choo, and Mezzacapo are not the only researchers seeking to use machine learning to handle problems in quantum chemistry. In September 2019, they first presented their study on arXiv.org. In the same month, a research group in Germany and another one at Google's DeepMind in London reported their studies that involved using machine learning to reconstruct the electronic structure of molecules.

The other two groups made use of a similar method that does not constrain the number of orbitals considered. However, this inclusiveness is more computationally laborious, a disadvantage that will only worsen when more complex molecules are involved.

Using the same computational resources, the method employed by Carleo, Choo, and Mezzacapo produces higher accuracy; however, the simplifications performed to achieve this accuracy could lead to biases.

"Overall, it's a trade-off between bias and accuracy, and it's unclear which of the two approaches has more potential for the future. Only time will tell us which of these approaches can be scaled up to the challenging open problems in chemistry."

Giuseppe Carleo, Research Scientist, Center for Computational Quantum Physics, Flatiron Institute

Choo, K., et al. (2020) Fermionic neural-network states for ab-initio electronic structure. Nature Communications. doi.org/10.1038/s41467-020-15724-9.

Source: https://www.simonsfoundation.org/

See the article here:
New Tool Could Pave the Way for Future Insights in Quantum Chemistry - AZoQuantum

J.P. Morgan Artificial Intelligence | J.P. Morgan

Manuela Veloso: So, at J.P. Morgan, the interesting thing is that we are a firm that has been around for a long time. But it's a firm that has a lot of appetite.

Ashleigh Thompson: One thing's for sure, no two days here ever look the same. I like to start my day in London early. Since we're a global team, it gives me the chance to review work our New York team did last night and catch up live with my colleagues in India.

Virgile Mison: The Machine Learning Center of Excellence develops and deploys machine learning models across different trading and IT platforms of J.P. Morgan.

Saket Sharma: J.P. Morgan, as a bank, has been incorporating machine learning into a lot of our work flows. So, as a Machine Learning Engineer, this is a great time to work on problems with firmwide impact.

Samik Chandarana: We need humans and AI to work together because ultimately, having and learning from what people are doing today in the processes they do and how they operate today provides a great amount of information of how we design systems of the future.

Andy Alexander: External conferences are really important for a number of reasons. One - it allows us to bring in the best of academia and external thought to the organization. The other is that it allows the team to go out to continue to learn. We rely a lot on where we're going, as well as where we've been.

Lidia Mangu: So, we come back from a conference knowing where the field is. And how, you know, taking those state of the art methods and applying them to the problems in the bank.

Simran Lamba: The most exciting and novel thing about working with AI Research is getting to publish our work at the most esteemed academic conferences like ICML, AAAI, and NeurIPS. We not only participate, but we also host and sponsor workshops at these conferences.

Naftali Cohen: I get to focus on the hot topics in AI and machine learning, such as reinforcement learning, cryptography and explainability.

Ashleigh Thompson: Millions of people use and rely upon our products and services every day. Working here, you have the ability to be on the forefront of changing that interaction.

Manuela Veloso: We apply and discover new AI techniques to handle complex problems such as trading, multi-agent market simulations, fraud detection, anti-money laundering and issues related to data.

Virgile Mison: As a technologist I was the most surprised by the wide variety of problems that we have to tackle and that J.P. Morgan is in the unique position to solve thanks to the large amount of data available.

Samuel Assefa: We focus on a number of research problems. One of the most exciting ones is ensuring that AI models are explainable, fair and unbiased.

Andy Alexander: In my life span, I don't expect to see generalized AI become something that's mainstream. And so for a lot of time we're expecting to see humans and machines helping each other.

Lidia Mangu: Every day is different. Every day we get a new challenging problem. Sometimes there is no known solution for that problem and it is like a new puzzle. Sometimes there is a known solution, but we show how we can do better using state of the art machine learning techniques.

Manuela Veloso: There is a lot of belief as we move that AI and machine learning is this one-shot deal. We do it, we are done. We'll never be done.

Naftali Cohen: I work with some of the best and most creative minds in the field and I have ownership over my work which is very rewarding.

Naftali Cohen: I'm researching how to apply innovative computer vision and deep learning techniques to understand the complexity of decision making in the financial market and recommend clients for market opportunities.

Simran Lamba: What excites me the most about my job here, in New York, is the opportunity to learn from our leaders and external professors. And my favorite part of the day would be brain-storming creative research ideas to solve challenges across all lines of businesses.

Simran Lamba: I'm currently using event logs of Chase customers, called "customer journeys," to find ways to create an even better experience for our clients.

Manuela Veloso: We do believe that junior people are the ones, in some sense, that have that vision. That can think big and that they are not kind of like constrained.

Samik Chandarana: Our clients are getting younger; they want to be interacting in different ways, and we need fresh talent to come up and help us with those new ideas and actually implement them in a way that makes sense for the client experience.

Lidia Mangu: The advice I would give to a junior executive is to be open-minded. Not to be afraid to learn new things every day. The field is moving very fast.

Virgile Mison: There are many opportunities to learn at J.P. Morgan. Like collaborating with experts in natural language processing, deep learning, time series and reinforcement learning.

Ashleigh Thompson: I'm excited to be part of the transformation to a truly data-driven culture.

END

More:
J.P. Morgan Artificial Intelligence | J.P. Morgan

Artificial Intelligence – DAIC

Feature | Coronavirus (COVID-19) | May 04, 2020 | By Dave Fornell and Melinda Taschetta-Millane

In an effort to keep the imaging field updated on the latest information being released on coronavirus (COVID-19), the...

TeraRecon's End-to-End AI Ecosystem

March 4, 2020 SymphonyAI Group, an operating group of leading business-to-business AI companies, today announced the...

AI vendor Infervision's InferRead CT Pneumonia software uses artificial intelligence-assisted diagnosis to improve the overall efficiency of the radiology department. It is being deployed in China as a high-sensitivity detection aid for novel coronavirus pneumonia (COVID-19).

February 28, 2020 New healthcare technologies are being implemented in the fight against the novel coronavirus (COVID...

The Caption Guidance software uses artificial intelligence to guide users to get optimal cardiac ultrasound images in a point of care ultrasound (POCUS) setting.

February 13, 2020 The U.S. Food and Drug Administration (FDA) cleared software to assist medical professionals in the...

GE Healthcare partnered with the AI developer Dia to provide an artificial intelligence algorithm to auto contour and calculate cardiac ejection fraction (EF). The app is now available on the GE Vscan pocket, point-of-care ultrasound (POCUS) system, as seen here displayed at RSNA 2019. Watch a VIDEO demo from RSNA.

February 7, 2020 At the 2019 Radiological Society of North America (RSNA) meeting in December, there was a record...

The Abbott Tendyne transcatheter mitral valve replacement (TMVR) system, left, became the first TMVR device to gain commercial regulatory clearance in the world. It gained European CE mark in January. Another top story in January was the first use of the Robocath R-One robotic cath lab catheter guidance system in Germany. Watch a VIDEO of the system in use in one of those cases.

News | February 03, 2020 | Dave Fornell, Editor

February 3, 2020 Here is the list of the most popular content on the Diagnostic and Interventional Cardiology (DAIC)...

Blog | January 24, 2020

The key question I am always asked at cardiology conferences is what are the trends and interesting new technologies I...

Cardiology was already heavily data driven, where clinical practice is driven by clinical study data, but mining a...

DAIC/ITN Editor Dave Fornell takes a tour of some of the most innovative new medical imaging technologies displayed on...

A new technology for detecting low glucose levels via electrocardiogram (ECG) using a non-invasive wearable sensor, which with the latest artificial intelligence (AI) can detect hypoglycemic events from raw ECG signals has been made by researchers from the University of Warwick.

January 13, 2020 A new technology for detecting low glucose levels via electrocardiogram (ECG) using a non-invasive...

The Consumer Electronics Show (CES) is the world's gathering place for consumer technologies, with more than 175,000...

January 9, 2020 Maulik Majmudar, M.D., chief medical officer at Amazon will be the keynote speaker at the upcoming...

This is the LVivo auto cardiac ejection fraction (EF) app that uses artificial intelligence (AI) from the vendor Dia,...

December 19, 2019 The U.S. Food and Drug Administration (FDA) has granted breakthrough status for a novel ECG-based...

DAIC Editor Dave Fornell and Imaging Technology News (ITN) Consulting Editor Greg Freiherr offer a post-game report on...

Original post:
Artificial Intelligence - DAIC

You Have No Idea What Artificial Intelligence Really Does

WHEN SOPHIA THE ROBOT first switched on, the world couldn't get enough. It had a cheery personality, it joked with late-night hosts, it had facial expressions that echoed our own. Here it was, finally: a robot plucked straight out of science fiction, the closest thing to true artificial intelligence that we had ever seen.

There's no doubt that Sophia is an impressive piece of engineering. Parents-slash-collaborating-tech-companies Hanson Robotics and SingularityNET equipped Sophia with sophisticated neural networks that give Sophia the ability to learn from people and to detect and mirror emotional responses, which makes it seem like the robot has a personality. It didn't take much to convince people of Sophia's apparent humanity; many of Futurism's own articles refer to the robot as "her." Piers Morgan even decided to try his luck for a date and/or sexually harass the robot, depending on how you want to look at it.

"Oh yeah, she is basically alive," Hanson Robotics CEO David Hanson said of Sophia during a 2017 appearance on Jimmy Fallon's Tonight Show. And while Hanson Robotics never officially claimed that Sophia contained artificial general intelligence (the comprehensive, life-like AI that we see in science fiction), the adoring and uncritical press that followed all those public appearances only helped the company grow.

But as Sophia became more popular and people took a closer look, cracks emerged. It became harder to believe that Sophia was the all-encompassing artificial intelligence that we all wanted it to be. Over time, articles that might have once oohed and ahhed about Sophia's conversational skills became more focused on the fact that they were partially scripted in advance.

Ben Goertzel, CEO of SingularityNET and Chief Scientist of Hanson Robotics, isn't under any illusions about what Sophia is capable of. "Sophia and the other Hanson robots are not really pure as computer science research systems, because they combine so many different pieces and aspects in complex ways. They are not pure learning systems, but they do involve learning on various levels (learning in their neural net visual systems, learning in their OpenCog dialogue systems, etc.)," he told Futurism.

But he's interested to find that Sophia inspires a lot of different reactions from the public. "Public perception of Sophia in her various aspects (her intelligence, her appearance, her lovability) seems to be all over the map, and I find this quite fascinating," Goertzel said.

Hanson finds it unfortunate when people think Sophia is capable of more or less than she really is, but also said that he doesn't mind the benefits of the added hype. Hype which, again, has been bolstered by the two companies' repeated publicity stunts.

Highly publicized projects like Sophia convince us that true AI (human-like and perhaps even conscious) is right around the corner. But in reality, we're not even close.

The true state of AI research has fallen far behind the technological fairy tales we've been led to believe. And if we don't treat AI with a healthier dose of realism and skepticism, the field may be stuck in this rut forever.

NAILING DOWN A TRUE definition of artificial intelligence is tricky. The field of AI, constantly reshaped by new developments and changing goalposts, is sometimes best described by explaining what it is not.

"People think AI is a smart robot that can do things a very smart person would, a robot that knows everything and can answer any question," Emad Mousavi, a data scientist who founded a platform called QuiGig that connects freelancers, told Futurism. "But this is not what experts really mean when they talk about AI. In general, AI refers to computer programs that can complete various analyses and use some predefined criteria to make decisions."

Among the ever-distant goalposts for human-level artificial intelligence (HLAI) are the ability to communicate effectively (chatbots and machine learning-based language processors struggle to infer meaning or to understand nuance) and the ability to continue learning over time. Currently, the AI systems with which we interact, including those being developed for self-driving cars, do all their learning before they are deployed and then stop forever.

"They are problems that are easy to describe but are unsolvable for the current state of machine learning techniques," Tomas Mikolov, a research scientist at Facebook AI, told Futurism.

"Right now, AI doesn't have free will and certainly isn't conscious, two assumptions people tend to make when faced with advanced or over-hyped technologies," Mousavi said. The most advanced AI systems out there are merely products that follow processes defined by smart people. They can't make decisions on their own.

In machine learning, which includes deep learning and neural networks, an algorithm is presented with boatloads of training data (examples of whatever it is that the algorithm is learning to do, labeled by people) until it can complete the task on its own. For facial recognition software, this means feeding thousands of photos or videos of faces into the system until it can reliably detect a face from an unlabeled sample.
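Here is a minimal sketch of that labeled-data loop, with a synthetic dataset and an off-the-shelf classifier standing in for real face images; the dataset, feature count, and model choice are illustrative assumptions, not a description of any production face-detection system.

```python
# Minimal supervised-learning sketch: show the algorithm many labeled examples,
# then ask it to label samples it never saw during training. A toy synthetic
# dataset stands in for the thousands of labeled face images described above.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# "Labeled training data": feature vectors with a face / not-a-face style label
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # learn from labeled examples
print("accuracy on samples unlabeled at training time:", model.score(X_test, y_test))
```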

Our best machine learning algorithms are generally just memorizing and running statistical models. To call it learning is to anthropomorphize machines that operate on a very different wavelength from our brains. Artificial intelligence is now such a big catch-all term that practically any computer program that automatically does something is referred to as AI.

"If you train an algorithm to add two numbers, it will just look up or copy the correct answer from a table," Mikolov, the Facebook AI scientist, explained. But it can't generalize a better understanding of mathematical operations from its training. After learning that five plus two equals seven, you as a person might be able to figure out that seven minus two equals five. But if you ask your algorithm to subtract two numbers after teaching it to add, it won't be able to. The artificial intelligence, as it were, was trained to add, not to understand what it means to add. If you want it to subtract, you'll need to train it all over again, a process that notoriously wipes out whatever the AI system had previously learned.

"It's actually often the case that it's easier to start learning from scratch than trying to retrain the previous model," Mikolov said.
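A small sketch of the "trained to add, can't subtract" point: a nearest-neighbour regressor literally looks answers up from its training table, so it keeps producing sums even when we ask for a difference. The numbers and the model choice are illustrative assumptions, not the specific systems Mikolov describes.

```python
# A lookup-style model trained on addition keeps answering with sums when asked
# to subtract; retraining it on subtraction would overwrite the addition table.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
pairs = rng.integers(0, 100, size=(5000, 2))          # training pairs of numbers
adder = KNeighborsRegressor(n_neighbors=3).fit(pairs, pairs.sum(axis=1))

print("5 + 2 ->", adder.predict([[5, 2]])[0])   # close to 7: nearby rows were memorized
print("7 - 2 ->", adder.predict([[7, 2]])[0])   # still an addition-style answer, not 5
```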

These flaws are no secret to members of the AI community. Yet, all the same, these machine learning systems are often touted as the cutting edge of artificial intelligence. In truth, they're actually quite dumb.

Take, for example, an image captioning algorithm. A few years back, one of these got some wide-eyed coverage because of the sophisticated language it seemed to generate.

"Everyone was very impressed by the ability of the system, and soon it was found that 90 percent of these captions were actually found in the training data," Mikolov told Futurism. "So they were not actually produced by the machine; the machine just copied what it did see that the human annotators provided for a similar image, so it seemed to have a lot of interesting complexity." What people mistook for a robotic sense of humor, Mikolov added, was just a dumb computer hitting copy and paste.

"It's not some machine intelligence that you're communicating with. It can be a useful system on its own, but it's not AI," said Mikolov. He said that it took a while for people to realize the problems with the algorithm. At first, they were nothing but impressed.

WHERE DID WE GO so off course? The problem is when our present-day systems, which are so limited, are marketed and hyped up to the point that the public believes we have technology that we have no goddamn clue how to build.

"I am frequently entertained to see the way my research takes on exaggerated proportions as it progresses through the media," Nancy Fulda, a computer scientist working on broader AI systems at Brigham Young University, told Futurism. The reporters who interview her are usually pretty knowledgeable, she said. But there are also websites that pick up those primary stories and report on the technology without a solid understanding of how it works. "The whole thing is a bit like a game of telephone: the technical details of the project get lost and the system begins to seem self-willed and almost magical. At some point, I almost don't recognize my own research anymore."

Some researchers themselves are guilty of fanning this flame. And then the reporters who don't have much technical expertise and don't look behind the curtain are complicit. Even worse, some journalists are happy to play along and add hype to their coverage.

Other problem actors: people who make an AI algorithm and present the back-end work they did as that algorithm's own creative output. Mikolov calls this a dishonest practice akin to sleight of hand. "I think it's quite misleading that some researchers who are very well aware of these limitations are trying to convince the public that their work is AI," Mikolov said.

That's important because whether people want money allocated to AI research depends on how they think the field is progressing. This unwarranted hype could be preventing the field from making real, useful progress. Financial investments in artificial intelligence are inexorably linked to the level of interest (read: hype) in the field. That interest level and corresponding investments fluctuate wildly whenever Sophia has a stilted conversation or some new machine learning algorithm accomplishes something mildly interesting. That makes it hard to establish a steady, baseline flow of capital that researchers can depend on, Mikolov suggested.

Mikolov hopes to one day create a genuinely intelligent AI assistant, a goal that he told Futurism is still a distant pipedream. A few years ago, Mikolov, along with his colleagues at Facebook AI, published a paper outlining how this might be possible and the steps it might take to get there. But when we spoke at the Joint Multi-Conference on Human-Level Artificial Intelligence, held in August by Prague-based AI startup GoodAI, Mikolov mentioned that many of the avenues people are exploring to create something like this are likely dead ends.

One of these likely dead ends, unfortunately, is reinforcement learning. Reinforcement learning systems, which teach themselves to complete a task through trial and error-based experimentation instead of using training data (think of a dog fetching a stick for treats), are often oversold, according to John Langford, Principal Researcher for Microsoft AI. Almost anytime someone brags about a reinforcement-learning AI system, Langford said, they actually gave the algorithm some shortcuts or limited the scope of the problem it was supposed to solve in the first place.
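For readers unfamiliar with the trial-and-error setup being described, here is a minimal tabular Q-learning sketch on a toy one-dimensional chain. The environment, reward, and hyperparameters are invented for illustration; it is a stand-in for the idea, not one of the systems Langford is critiquing.

```python
# Minimal tabular Q-learning sketch: an agent on a 1-D chain learns, purely from
# rewards, to walk right toward a goal state through trial and error.
import numpy as np

N_STATES, ACTIONS = 6, (-1, +1)          # states 0..5, goal at state 5
q = np.zeros((N_STATES, len(ACTIONS)))   # Q-table: expected return per state/action
alpha, gamma, epsilon = 0.1, 0.9, 0.2    # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Explore randomly with probability epsilon (or when the Q-values are tied)
        explore = rng.random() < epsilon or q[s, 0] == q[s, 1]
        a = int(rng.integers(2)) if explore else int(q[s].argmax())
        s_next = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update from the observed transition (the "trial and error")
        q[s, a] += alpha * (reward + gamma * q[s_next].max() - q[s, a])
        s = s_next

print("greedy action per state (1 means 'move right'):", q.argmax(axis=1))
```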

The hype that comes from these sorts of algorithms helps the researcher sell their work and secure grants. Press people and journalists use it to draw audiences to their platforms. But the public suffers: this vicious cycle leaves everyone else unaware of what AI can really do.

There are telltale signs, Mikolov says, that can help you see through the misdirection. The biggest red flag is whether or not you as a layperson (and potential customer) are allowed to demo the technology for yourself.

"A magician will ask someone from the public to test that the setup is correct, but the person specifically selected by the magician is working with him. So if somebody shows you the system, then there's a good likelihood you are just being fooled," Mikolov said. "If you are knowledgeable about the usual tricks, it's easy to break all these so-called intelligent systems. If you are at least a little bit critical, you will see that what [supposedly AI-driven chatbots] are saying is very easy to distinguish from humans."

Mikolov suggests that you should question the intelligence of anyone trying to sell you the idea that they've beaten the Turing Test and created a chatbot that can hold a real conversation. Again, think of Sophia's prepared dialogue for a given event.

"Maybe I should not be so critical here, but I just can't help myself when you have these things like the Sophia thing and so on, where they're trying to make impressions that they are communicating with the robot and so on," Mikolov told Futurism. "Unfortunately, it's quite easy for people to fall for these magician tricks and fall for the illusion, unless you're a machine learning researcher who knows these tricks and knows what's behind them."

Unfortunately, so much attention to these misleading projects can stand in the way of progress by people with truly original, revolutionary ideas. It's hard to get funding to build something brand new, something that might lead to AI that can do what people already expect it to be able to do, when venture capitalists just want to fund the next machine learning solution.

If we want those projects to flourish, if we ever want to take tangible steps towards artificial general intelligence, the field will need to be a lot more transparent about what it does and how much it matters.

"I am hopeful that there will be some super smart people who come with some new ideas and will not just copy what is being done," said Mikolov. "Nowadays it's some small, incremental improvement. But there will be smart people coming with new ideas that will bring the field forward."

More on the nebulous challenges of AI: Artificial Consciousness: How To Give A Robot A Soul

Visit link:
You Have No Idea What Artificial Intelligence Really Does

Joint Artificial Intelligence Center

The Joint Artificial Intelligence Center (JAIC) is the Department of Defense's (DoD) Artificial Intelligence (AI) Center of Excellence that provides a critical mass of expertise to help the Department harness the game-changing power of AI. To help operationally prepare the Department for AI, the JAIC integrates technology development with the requisite policies, knowledge, processes and relationships to ensure long-term success and scalability.

The mission of the JAIC is to transform the DoD by accelerating the delivery and adoption of AI to achieve mission impact at scale. The goal is to use AI to solve large and complex problem sets that span multiple services, then ensure the Services and Components have real-time access to ever-improving libraries of data sets and tools. The JAIC takes a holistic approach to this mission.

The JAIC delivers AI capabilities to the Department through two distinct categories: National Mission Initiatives (NMIs) and Component Mission Initiatives (CMIs). NMIs are broad, joint, hard, cross-cutting AI/ML challenges that the JAIC will run using a cross-functional team approach. CMIs are component-specific and solve a particular problem. CMIs will be run by the components, with support from the JAIC in a number of ways that include funding, data management, common foundation, integration into programs of record, and sustainment.

Read more from the original source:
Joint Artificial Intelligence Center

Artificial Intelligence | Releases | Discogs

Cat# | Artist | Title (Format)
WARP CD6 | Various | Artificial Intelligence (CD, Comp)
592082 | Various | Artificial Intelligence (CD, Comp)
RTD 126.1414.2 | Various | Artificial Intelligence (CD, Comp)
594082 | Various | Artificial Intelligence (Cass, Comp)
WARP MC6, WARP MC 6 | Various | Artificial Intelligence (Cass, Comp)
WARP LP6 | Various | Artificial Intelligence (LP, Comp)
WARP LP 6 | Various | Artificial Intelligence (LP, Comp, TP, W/Lbl)
TVT 7203-2 | Various | Artificial Intelligence (CD, Comp)
TVT 7203-4 | Various | Artificial Intelligence (Cass, Comp)
SRCS 7554 | Various | Artificial Intelligence (CD, Comp, RE)
WARP CD6 | Various | Artificial Intelligence (CD, Comp, RE)
WARP CD6 | Various | Artificial Intelligence (CD, Comp, RE)
WARPCDD6 | Various | Artificial Intelligence (10xFile, MP3, Comp, RE, 320)
WARP CD6 | Various | Artificial Intelligence (CD, Comp, RE)
WARP CD6 | Various | Artificial Intelligence (CD, Comp, RE)
WARP CD6 | Various | Artificial Intelligence (CD, Comp, RE)

Original post:
Artificial Intelligence | Releases | Discogs

Central to meeting the complexities of JADC2? Artificial intelligence – C4ISRNet

The concept of Joint All Domain Command-and-Control (JADC2) remains a nascent one, with clear doctrines yet to be defined and tested. However, no matter how these are shaped, it is apparent that two key requirements must be addressed: speed of action, and the ability to process and analyze vast volumes of complex data that could not have been conceived of in the past.

The capabilities inherent in fifth-generation aircraft, such as the F-35, exemplify the data management challenges that advanced systems bring and which must be addressed in multi-domain operations. The aircraft is as much a flying sensor as it is a combat platform, and the diversity and volume of data that it can collect place a significant burden on militaries if they are to benefit from it in a meaningful way. When factoring in the speed at which a conflict with a near-peer will be conducted, and that it will extend beyond the traditional domains, there is a genuine risk that commanders could be overwhelmed by the data that needs to be processed in order to effect a winning outcome.

While significantly scaling up manpower could be one solution, the complexity of the data and speed of action required necessitates a step-change in capabilities in the command and control domain. It is here that potentially game-changing benefits can be brought through leveraging artificial intelligence.

JADC2 demands a comprehensive, dynamic, and near-real-time common operating picture (COP), and AI can certainly aid in speeding up decision making and defining parameters. AI can automate filtering and configuration based on prior experience. Beyond this, however, it promises the ability to examine command decisions and learn what should be done to achieve mission goals, automatically proposing and ranking actions.

Central to the utility of AI will be the availability of robust data, and the successful application of machine learning (ML) will be dependent on this. Machine learning has already proven its worth in anomaly detection and track correlation; taking this to the next level, it will also be able to provide early warning of enemy actions. AI has the potential to recognize when an adversary is preparing their forces for a particular action and in a particular area, for example, by analyzing troop movements, aircraft sorties, and training activity. The technology could, in theory, automatically alert commanders, propose a course of action, and ultimately task units. The application of natural language understanding could even enable intelligence reports to be generated from disparate data.

AI also has a clear application in supporting resource-to-task management, such as in composing an air tasking order. Understanding which assets are available and best placed to complete a task is a significant challenge in a theatre-wide conflict; if AI can reach across all domains, it will be able to apprise commanders of the most suitable resources to employ, including those that might not have been apparent with manpower alone. AI will also be able to quickly alert commanders and even automatically adjust orders as a mission unfolds or new intelligence emerges, for example, in editing an air tasking order to optimize the deployment of assets.
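To make the resource-to-task idea concrete, the sketch below frames a toy asset-to-mission matching problem as a classical assignment problem. The asset names and cost matrix are invented for illustration; a real JADC2 planner would build such costs from live, cross-domain data and re-solve continuously as the situation changes, likely with far richer models than a single cost matrix.

```python
# Minimal sketch of resource-to-task matching (an air-tasking-order style
# problem) posed as a classical assignment problem and solved with the
# Hungarian algorithm. Assets, tasks, and costs are hypothetical.
import numpy as np
from scipy.optimize import linear_sum_assignment

assets = ["F-35 flight", "UAV orbit", "EW aircraft", "Tanker"]
tasks = ["Strike", "ISR", "Jamming", "Refuel"]

# Hypothetical suitability costs (lower = better fit for the task)
cost = np.array([
    [1, 4, 6, 9],
    [5, 1, 7, 9],
    [6, 5, 1, 9],
    [9, 9, 9, 1],
])

rows, cols = linear_sum_assignment(cost)   # minimum-cost one-to-one assignment
for r, c in zip(rows, cols):
    print(f"{assets[r]:12s} -> {tasks[c]} (cost {cost[r, c]})")
```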

The utility of AI in enabling JADC2 is apparent. What is less clear, however, is how the best AI capabilities should be developed and fielded to ensure maximum effect across all services and domains. The need for a man in the loop is essential, and the application of AI does not imply a change to autonomous systems and robotic warfare, but AI support must be regarded as trustworthy by operators and commanders.

A cohesive approach is essential in developing AI for JADC2 and services must consider themselves to be customers and suppliers of one another. Capabilities cannot be developed in silos. If services are not cognizant of the needs of a combined force there will inevitably be capability gaps and disconnects in the command structure and processes. This challenge is complicated further when considering the nature of operations, where the coalition is the norm.

The design of the core C2 systems employed for JADC2 is also a key consideration. The need for information sharing at speed and the ability to draw on a wide range of sources are crucial. Inherent in their design must be open architectures that enable new applications to be quickly developed and integrated, along with seamless interoperation between forces. Standards-driven designs are a must and it is essential that systems are not stovepiped and can reach not only across services, but the theatre of operation as a whole. Security issues will exist but must be overcome and not constitute a barrier for information and data sharing between domains.

Ensuring that partners have the requisite AI capabilities and access to relevant data is another hurdle. While disparities in capabilities are not a new issue, in the context of JADC2, where speed of action will be critical, this is magnified.

There are many technological, doctrinal, and operational factors to consider in the implementation of AI. What is clear, however, is that the technology promises the ability to greatly shorten the OODA loop and bring a step change in C2 functionality. In a conflict with a near-peer, AI will be a necessity rather than a luxury.

Retired Maj. Gen. Henrik Røboe Dam is the former head of the Royal Danish Air Force and the air domain adviser at Systematic.

Originally posted here:
Central to meeting the complexities of JADC2? Artificial intelligence - C4ISRNet

The Future of Artificial Intelligence: Edge Intelligence – Analytics Insight

With the advancements in deep learning, recent years have seen humongous growth in artificial intelligence (AI) applications and services, ranging from personal assistants to recommendation systems to video/audio surveillance. More recently, with the expansion of mobile computing and the Internet of Things (IoT), billions of mobile and IoT devices are connected to the Internet, generating zillions of bytes of data at the network edge.

Driven by this trend, there is a pressing need to push the AI frontier to the network edge in order to fully unlock the potential of edge big data. To meet this need, edge computing, an emerging paradigm that pushes computing tasks and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new interdiscipline, edge AI or edge intelligence (EI), is beginning to attract an enormous amount of interest.

However, research on EI is still in its infancy, and a dedicated venue for exchanging the latest advances in EI is much needed by both the computer systems and AI communities. The spread of EI does not mean, of course, that there will be no future for centralized Cloud Intelligence (CI). In fact, the orchestrated use of Edge and Cloud virtual resources is required to create a continuum of intelligent capabilities and functions across all Cloudified infrastructures. This is one of the major challenges for a successful, future-proof deployment of 5G.

Given expanding markets and growing service and application demands on computational data and power, several factors and advantages are driving the development of edge computing. In view of the shifting need for reliable, adaptable and contextual data, much of the processing is moving locally to the device, bringing improved performance and response time (under a couple of milliseconds), lower latency, higher power efficiency, improved security since data is held on the device, and cost savings as data-center transfers are minimized.

Probably the greatest advantage of edge computing is the ability to secure real-time results for time-sensitive needs. In many cases, sensor data can be gathered, analyzed, and communicated immediately, without sending it to a distant cloud data center. Scalability across different edge devices to help speed local decision-making is fundamental. The ability to give immediate and dependable information builds confidence, increases customer engagement and, in many cases, saves lives. Just think of all the sectors (home security, aviation, automotive, smart cities, health care) in which immediate understanding of diagnostics and equipment performance is critical.

Indeed, recent advances in AI may have an extensive effect on various subfields of networking. For example, traffic prediction and classification are two of the most studied applications of AI in the networking field. Deep learning is likewise offering promising solutions for efficient resource management and network adaptation, thereby improving, even today, network performance in areas such as traffic scheduling, routing and TCP congestion control; this is another area where EI could bring performance advantages.

On the other hand, it is still challenging today to build a real-time framework around heavy computation loads and big data. This is where edge computing enters the scene. An orchestrated execution of AI methods on computing resources in the cloud as well as at the edge, where most data is produced, will help in this direction. In addition, gathering and filtering large amounts of data that contain both network profiles and performance measurements remains crucial, and the task becomes far more costly when data labelling is required. Even these bottlenecks could be addressed by enabling EI ecosystems capable of attracting win-win collaborations between Network/Service Providers, OTTs, Technology Providers, Integrators and Users.

A further dimension is that network-embedded pervasive intelligence (Cloud Computing integrated with Edge Intelligence in the network nodes and ever-smarter terminals) could also pave the way for exploiting the achievements of the developing distributed ledger technologies and platforms.

Edge computing provides an alternative to the long-distance transfer of data between connected devices and remote cloud servers. With a database management system on the edge devices, organizations can achieve immediate insight and control, and on-device DBMS performance removes the dependence on latency, data rate, and bandwidth. Edge computing also lessens threats through a comprehensive security approach, providing an environment to manage the cybersecurity efforts of both the intelligent edge and the intelligent cloud. Unified management systems can provide intelligent threat protection.

Edge computing also helps maintain compliance with regulations such as the General Data Protection Regulation (GDPR) that govern the use of private data. Companies that do not comply risk significant fines. Edge computing offers various controls that can help companies protect private data and achieve GDPR compliance.

Innovative organizations such as Amazon, Google, Apple, BMW, Volkswagen, Tesla, Airbus, Fraunhofer, Vodafone, Deutsche Telekom, Ericsson, and Harting are now embracing and backing their bets on AI at the edge. Some of these organizations are forming trade associations, such as the European Edge Computing Consortium (EECC), to help educate and persuade small, medium-sized, and large enterprises to drive the adoption of edge computing within manufacturing and other industrial markets.

Excerpt from:
The Future of Artificial Intelligence: Edge Intelligence - Analytics Insight