Patent application strategies in the field of artificial intelligence based on examination standards – Lexology

I. Introduction

Artificial intelligence (AI) refers to technology that uses computer programs to implement intelligence similar to that of humans. With the rapid development of AI technology and the growing recognition of its commercial value, patent applications related to AI have become a hot field: the number of applications keeps rising, and the range of application fields keeps expanding.

This article attempts to provide some patent application strategies in the field of artificial intelligence based on the latest examination standards in China, and to summarize the similarities and differences between the examination standards for AI in China, Japan, Korea, the US, and Europe, for reference by patent applicants, patent attorneys, and others.

II. Main laws involved and coping strategies

In China, because an AI patent application involves a computer program, the primary examination focus is whether the application claims an eligible object of patent protection; another focus is inventiveness as provided in Article 22, Paragraph 3 of the Chinese Patent Law.

Figure 1

Figure 1 shows the general examination process of a patent application in the field of artificial intelligence in China.

A patent application in the field of artificial intelligence may be drafted with product claims or method claims, and a product claim may be directed to an eligible subject such as a system, a device, or a storage medium.

Table 1 Forms of drafting of claims

The following description focuses on the latest examination standards in China, and on coping strategies, regarding whether an AI patent application claims an eligible object of patent protection and whether it conforms to the provisions on inventiveness.

1. Examination standards and coping strategies regarding an eligible object protected by a patent

1.1 The latest examination standards on eligible object issues

It is provided in Article 25, Paragraph 1, Item (2) of the Chinese Patent Law that no patent right shall be granted for rules and methods for mental activities.

The newly amended Guidelines for Examination provide that if a claim contains a technical feature in addition to algorithm features or business rules and method features, the claim as a whole is not a rule or method for mental activities, and the possibility of granting it a patent right shall not be excluded under Article 25, Paragraph 1, Item (2) of the Patent Law.

Moreover, it is provided in Rule 22, Paragraph 2 of the Implementing Regulations of the Chinese Patent Law that "invention" as mentioned in the Patent Law means any new technical solution relating to a product, a process, or an improvement thereof.

Correspondingly, the newly amended Guidelines for Examination provide that if the algorithm steps in a claim are closely related to the technical problem to be solved (for example, the data processed by the algorithm have a definite technical meaning in the technical field), and execution of the algorithm directly reflects the use of natural laws to solve a technical problem and produces a technical effect, then the claimed solution generally constitutes a technical solution under Article 2, Paragraph 2 of the Patent Law.

1.2 Application strategy for eligible object issues

Patent applications in the field of artificial intelligence may broadly be divided into two types according to their scope of application: basic-type and applied-type. A basic-type application is one whose algorithm may be widely used in multiple particular fields; an applied-type application is one whose algorithm is mainly combined with, and applied in, a particular field.

Taking both patent protection scope and conformity with examination requirements into account, ways of drafting the two types of patent applications are proposed below for reference.

Table 2 Ways of drafting two types of patent applications

In addition, with the development of Internet and big data technologies, artificial intelligence is increasingly used in the commercial and financial fields. When applying for this type of patent, attention should be paid to combining business rules, algorithm features, and technical features in the description.

Moreover, based on the stage of technological improvement, a patent application in the field of artificial intelligence may address either of two stages: the training (learning) stage and the application stage. Corresponding ways of drafting follow.

Table 3 Eligible subjects in the two stages

2. Examination standard and coping strategy regarding inventiveness

2.1 Latest examination standards regarding inventiveness

The newly amended Guidelines for Examination provide that, when inventiveness is examined for an invention patent application containing both technical features and algorithm features, or business rules and method features, the algorithm features or business rules and method features that functionally support and interact with the technical features shall be considered together with the technical features as a whole.

2.2 Application strategy for examination on inventiveness

Based on the above examination standards, when drafting an AI patent application, attention should be paid to combining the algorithm features and the technical features in describing the technical solution. Moreover, in describing the technical problem and the technical effect, emphasis should be placed on how the algorithm features and the technical features combine to jointly solve the technical problem and produce the corresponding technical effect.

Furthermore, some AI patent applications do not involve improvement of a basic algorithm; their improvement over existing technologies may lie mainly in applying an algorithm, such as a neural network, to a specific field, while the algorithm itself changes little. For this type of application, inventiveness may be assessed mainly on two aspects: first, whether the technical fields are similar; and second, the difficulty of applying the neural network to the technical field of the application, and whether a technical effect different from that in the original field is produced.

III. Comparison of examination standards of China, Japan, Korea, US and Europe

1. Comparison of examination standards of an eligible object protected by a patent

A comparison of the examination standards for an eligible object of patent protection in China, Japan, Korea, the US, and Europe is as follows.

Table 4 Examination of an eligible object protected by a patent in China, Japan, Korea, US and Europe

2. Comparison of examination standards of inventiveness

A comparison of the examination standards for inventiveness in China, Japan, Korea, the US, and Europe is as follows.

Table 5 Examination of inventiveness in China, Japan, Korea, US and Europe

IV. Summary

Patent applications in the field of artificial intelligence are patent applications involving computer programs, and they need to meet the general requirements for such applications. Owing to the particularity of AI technology, the China National Intellectual Property Administration (CNIPA) has formulated new, specialized examination provisions for AI patent applications. Drafting applications and responding to examination opinions based on the latest examination standards helps applicants obtain patent rights for the relevant technologies in China.

In addition, understanding the examination standards for AI patent applications in the world's major patent jurisdictions, namely China, Japan, Korea, the US, and Europe, helps applicants formulate global filing strategies and build a reasonable patent portfolio.


Will artificial intelligence have a conscience? – TechTalks

Does artificial intelligence require moral values? We spoke to Patricia Churchland, neurophilosopher and author of Conscience: The Origins of Moral Intuition.

This article is part of the philosophy of artificial intelligence, a series of posts that explore the ethical, moral, and social implications of AI today and in the future.

Can artificial intelligence learn the moral values of human societies? Can an AI system make decisions in situations where it must weigh and balance between damage and benefits to different people or groups of people? Can AI develop a sense of right and wrong? In short, will artificial intelligence have a conscience?

This question might sound irrelevant when considering today's AI systems, which are only capable of accomplishing very narrow tasks. But as science continues to break new ground, artificial intelligence is gradually finding its way into broader domains. We're already seeing AI algorithms applied to areas where the boundaries of good and bad decisions are not clearly defined, such as criminal justice and job application processing.

In the future, we expect AI to care for the elderly, teach our children, and perform many other tasks that require moral human judgement. And then, the question of conscience and conscientiousness in AI will become even more critical.

With these questions in mind, I went in search of a book (or books) that explained how humans develop conscience and gave an idea of whether what we know about the brain provides a roadmap for conscientious AI.

A friend suggested Conscience: The Origins of Moral Intuition by Dr. Patricia Churchland, neuroscientist, philosopher, and professor emerita at the University of California, San Diego. Dr. Churchland's book, and a conversation I had with her after reading it, taught me a lot about the extent and limits of brain science. Conscience shows us how far we've come in understanding the relation between the brain's physical structure and workings and the moral sense in humans. But it also shows us how much further we must go to truly understand how humans make moral decisions.

It is a very accessible read for anyone interested in exploring the biological background of human conscience and reflecting on the intersection of AI and conscience.

Here's a very quick rundown of what Conscience tells us about the development of moral intuition in the human brain. With the mind being the main blueprint for AI, better knowledge of conscience can tell us a lot about what it would take for AI to learn the moral norms of human societies.

"Conscience is an individual's judgment about what is normally right or wrong, typically, but not always, reflecting some standard of a group to which the individual feels attached," Churchland writes in her book.

But how did humans develop the ability to understand and adopt these rights and wrongs? To answer that question, Dr. Churchland takes us back through time, to when our first warm-blooded ancestors made their appearance.

Birds and mammals are endotherms: their bodies have mechanisms to preserve their heat. In contrast, reptiles, fish, and insects are cold-blooded: their bodies adapt to the temperature of the environment.

The great benefit of endothermy is the capability to gather food at night and to survive colder climates. The tradeoff: endothermic bodies need a lot more food to survive. This requirement led to a series of evolutionary steps in the brains of warm-blooded creatures that made them smarter. Most notable among them is the development of the cortex in the mammalian brain.

The cortex can integrate diverse signals and pull out abstract representation of events and things that are relevant to survival and reproduction. The cortex learns, integrates, revises, recalls, and keeps on learning.

The cortex allows mammals to be much more flexible to changes in weather and landscape, as opposed to insects and fish, who are very dependent on stability in their environmental conditions.

But again, learning capabilities come with a tradeoff: mammals are born helpless and vulnerable. Unlike snakes, turtles, and insects, which hit the ground running and are fully functional when they break their eggshells, mammals need time to learn and develop their survival skills.

And this is why they depend on each other for survival.

The brains of all living beings have a reward and punishment system that makes sure they do things that support their survival and the survival of their genes. The brains of mammals repurposed this function to adapt for sociality.

"In the evolution of the mammalian brain, feelings of pleasure and pain supporting self-survival were supplemented and repurposed to motivate affiliative behavior," Churchland writes. "Self-love extended into a related but new sphere: other-love."

The main beneficiaries of this change are the offspring. Evolution has triggered changes in the circuitry of the brains of mammals to reward care for babies. Mothers, and in some species both parents, go to great lengths to protect and feed their offspring, often at a great disadvantage to themselves.

In Conscience, Churchland describes experiments showing how biochemical reactions in the brains of different mammals reward social behavior, including care for offspring.

"Mammalian sociality is qualitatively different from that seen in other social animals that lack a cortex, such as bees, termites, and fish," Churchland writes. "It is more flexible, less reflexive, and more sensitive to contingencies in the environment and thus sensitive to evidence. It is sensitive to long-term as well as short-term considerations. The social brain of mammals enables them to navigate the social world, for knowing what others intend or expect."

The brains of humans have the largest and most complex cortex among mammals. The brain of Homo sapiens, our species, is three times as large as that of chimpanzees, with whom we shared a common ancestor 5-8 million years ago.

The larger brain naturally makes us much smarter, but it also has higher energy requirements. So how did we come to pay the calorie bill? "Learning to cook food over fire was quite likely the crucial behavioral change that allowed hominin brains to expand well beyond chimpanzee brains, and to expand rather quickly in evolutionary time," Churchland writes.

With the bodys energy needs supplied, hominins eventually became able to do more complex things, including the development of richer social behaviors and structures.

So the complex behavior we see in our species today, including the adherence to moral norms and rules, started off as a struggle for survival and the need to meet energy constraints.

"Energy constraints might not be stylish and philosophical, but they are as real as rain," Churchland writes in Conscience.

Our genetic evolution favored social behavior. Moral norms emerged as practical solutions to our needs. And we humans, like every other living being, are subject to the laws of evolution, which Churchland describes as "a blind process that, without any goal, fiddles around with the structure already in place." The structure of our brain is the result of countless experiments and adjustments.

"Between them, the circuitry supporting sociality and self-care, and the circuitry for internalizing social norms, create what we call conscience," Churchland writes. "In this sense your conscience is a brain construct, whereby your instincts for caring, for self and others, are channeled into specific behaviors through development, imitation, and learning."

This is a very sensitive and complicated topic, and despite all the advances in brain science, many of the mysteries of the human mind and behavior remain unsolved.

"The dominant role of energy requirements in the ancient origin of human morality does not mean that decency and honesty must be cheapened. Nor does it mean that they are not real. These virtues remain entirely admirable and worthy to us social humans, regardless of their humble origins. They are an essential part of what makes us the humans we are," Churchland writes.

In Conscience, Churchland discusses many other topics, including the role of reinforcement learning in the development of social behavior and the human cortex's far-reaching capacity to learn by experience, reflect on counterfactual situations, develop models of the world, draw analogies from similar patterns, and much more.

Basically, we use the same reward system that allowed our ancestors to survive, and draw on the complexity of our layered cortex to make very complicated decisions in social settings.

"Moral norms emerge in the context of social tension, and they are anchored by the biological substrate. Learning social practices relies on the brain's system of positive and negative reward, but also on the brain's capacity for problem solving," Churchland writes.

After reading Conscience, I had many questions in mind about the role of conscience in AI. Would conscience be an inevitable byproduct of human-level AI? If energy and physical constraints pushed us to develop social norms and conscientious behavior, would there be a similar requirement for AI? Does physical experience and sensory input from the world play a crucial role in the development of intelligence?

Fortunately, I had the chance to discuss these topics with Dr. Churchland after reading Conscience.

As is evident from Dr. Churchland's book (and other research on biological neural networks), physical experience and constraints play an important role in the development of intelligence, and by extension conscience, in humans and animals.

But today, when we speak of artificial intelligence, we mostly talk about software architectures such as artificial neural networks. Today's AI is mostly disembodied lines of code that run on computers and servers and process data obtained by other means. Will physical experience and constraints be a requirement for the development of truly intelligent AI that can also appreciate and adhere to the moral rules and norms of human society?

"It's hard to know how flexible behavior can be when the anatomy of the machine is very different from the anatomy of the brain," Dr. Churchland said in our conversation. "In the case of biological systems, the reward system, the system for reinforcement learning, is absolutely crucial. Feelings of positive and negative reward are essential for organisms to learn about the environment. That may not be true in the case of artificial neural networks. We just don't know."

She also pointed out that we still don't know how brains think. "In the event that we were to understand that, we might not need to replicate absolutely every feature of the biological brain in the artificial brain in order to get some of the same behavior," she added.

Churchland also reminded me that while the AI community initially dismissed neural networks, they eventually turned out to be quite effective once their computational requirements were met. And while current neural networks have limited intelligence in comparison to the human brain, we might be in for surprises in the future.

"One of the things we do know at this stage is that mammals with a cortex, a reward system, and subcortical structures can learn things and generalize without a huge amount of data," she said. "At the moment, an artificial neural network might be very good at classifying faces but hopeless at classifying mammals. That could just be a numbers problem."

"If you're an engineer and you're trying to get some effect, try all kinds of things. Maybe you do have to have something like emotions, and maybe you can build that into your artificial neural network."

One of my takeaways from Conscience was that while humans generally align themselves with the social norms of their society, they also challenge those norms at times. And the unique physical structure of each human brain, the genes we inherit from our parents, and the experiences we acquire through our lives make for the subtle differences that allow us to come up with new norms and ideas, and sometimes to defy what was previously established as rule and law.

But one of the much-touted features of AI is its uniform reproducibility. When you create an AI algorithm, you can replicate it countless times and deploy it in as many devices and machines as you want. They will all be identical, down to the last parameter values of their neural networks. Now, the question is, when all AIs are equal, will they remain static in their social behavior and lack the subtle differences that drive the dynamics of social and behavioral progress in human societies?

"Until we have a much richer understanding of how biological brains work, it's really hard to answer that question," Churchland said. "We know that in order to get a complicated result out of a neural network, the network doesn't have to have wet stuff; it doesn't have to have mitochondria and ribosomes and proteins and membranes. How much else does it not have to have? We don't know."

"Without data, you're just another person with an opinion, and I have no data that would tell me that you've got to mimic certain specific circuitry in the reinforcement learning system in order to have an intelligent network."

"Engineers will try and see what works."

We have yet to learn much about human conscience, and even more about if and how it would apply to highly intelligent machines. "We do not know precisely what the brain does as it learns to balance in a headstand. But over time, we get the hang of it," Churchland writes in Conscience. "To an even greater degree, we do not know what the brain does as it learns to find balance in a socially complicated world."

But as we continue to observe and learn the secrets of the brain, hopefully we will be better equipped to create AI that serves the good of all humanity.


Tackling the artificial intelligence IP conundrum – TechHQ

Artificial intelligence has become a general-purpose technology. Not confined to futuristic applications such as self-driving vehicles, it powers the apps we use daily, from navigation with Google Maps to check deposits from our mobile banking app. It even manages the spam filters in our inbox.

These are all powerful, albeit functional, roles. What's perhaps more exciting is AI's growing potential in sourcing and producing new creations and ideas, from writing news articles to discovering new drugs, in some cases far quicker than teams of human scientists.

With every new iteration in software design, computing power, and the ability to leverage large data sets, AI's potential as an initiator of ideas and concepts grows, and this raises questions about its rights to intellectual property (IP).

The work of Dr. Ryan Abbott, Professor of Law and Health Sciences at the University of Surrey, focuses on the meeting of law and technology, in particular the regulation of AI. While Abbott doesn't believe AI should be entitled to its own IP, he believes the time is right to discuss the ability of people to own IP generated autonomously by AI, or risk losing out on the technology's full potential.

"Right now, we have a system where AI and human activity are treated very differently," Abbott told TechHQ.

Drug discovery is a tangible example of how AI contributes to society. Technology is making the discovery of new drugs faster, cheaper, and more successful. It's been used this way for decades, helping to identify new drug targets or validate drug candidates, and to help design trials in ways that can potentially shorten drug development timeframes, bringing treatments to market faster. But the critical nature of patent protection in life sciences, and drug development in particular, is holding back these advances.

That's because, when it comes to AI-generated content and ideas, AI tends to be seen by experts and lawmakers as a tool, not the source of the creation or discovery. In the same way that a paintbrush doesn't get credit for an oil painting and CAD software isn't credited for an architect's designs, AI is perceived as a vehicle to an end product. The trouble is, current laws are neither consistent nor clear-cut. In the UK, where a work lacks a traditional human author, the producer of the work is deemed the author. In the US, the inventor is the person who conceives the idea. In either case, neither human may know what the AI system will produce or discover.

While patent rules in life sciences highlight the legal constraints on AI in research and development, these same challenges affect everything from the development of components for cars to spacecraft. The problem will become increasingly apparent as AI continues to improve, and people do not.

The consensus among legal experts is that it's not clear whether AI could carry out the understood rights and obligations of an IP owner. IP rights are restricted to natural persons and legal entities such as businesses. The European Union reportedly abandoned plans to introduce a third type of entity, an "electronic personality," in 2018 after pressure from 150 experts on AI, robotics, IP, and ethics.

Speaking to Raconteur previously, Julie Barrett-Major, consulting attorney at AA Thornton and member of the Chartered Institute of Patent Attorneys' International Liaison Committee, explained: "With patent ownership come certain obligations and responsibilities, or at least opportunities to exercise these. For example, to enforce the rights awarded, the owner can sue for infringement, or at least indicate a willingness to do so to maintain exclusivity."

"[…] the patent must be renewed at regular intervals, and there are other actions that need to be taken to ensure the rewards are not diluted, such as updating the government registers of patents with details of changes in ownership, informing of licensees, and so forth."

Abbott argues that, ultimately, the limitations of current IP frameworks may force organizations to continue to use people, where a machine might be more efficient.

Last year, Siemens was unable to file for multiple patents on inventions it believed to be patentable because it could not identify a human inventor; the engineers involved stated that the machine did the inventive work. Abbott himself is carrying out a legal test case, filing patents for two inventions made autonomously by AI. Both have been rejected by the US, UK, German, and European patent offices on the basis that they failed to disclose a human inventor. The rejections are under appeal, but the idea is to help raise dialogue on the issue.

"Most of the time today AI is just working as a tool and helping augment people, but machines are getting increasingly autonomous and sophisticated, and increasingly doing the sorts of things that used to make someone an inventor," Abbott said.

The current status quo means that the law can get in the way of AI development in certain areas but not others, so AI's benefits are not evenly spread across industries. While patents are important to drug development, for example, they are less important when it comes to making software. This imbalance could lead to the emergence of shady IP practices in certain sectors when it comes to using AI. The workaround, says Abbott, is people simply not disclosing AI's role in creating something valuable, whether that's an article, video, or song. "Someone can just list themselves as the author and no one is going to question that."

The issue of patents and intellectual property in academic research might not, for many of us, seem worth our consideration. But the broader legal concerns Abbott looks to highlight, namely that we should question current standards of AI accountability and ownership, affect how AI is being used around us.

"Across all areas of the law, we are seeing the phenomenon of artificial intelligence stepping into the shoes of people and doing the sorts of things that people used to do," said Abbott.

Ultimately, for AI to be used to its full potential, there must be open discussion, public consultation, and debate on the current litigation surrounding AI. That's now happening. The issue has had recent attention from the World Intellectual Property Organization (WIPO), while the UK Intellectual Property Office has just announced a public request for comments on whether the IP system is fit for purpose in light of AI. The US has just completed a similar consultation.

"These efforts are a solid start to getting a diverse range of input from stakeholders," said Abbott. "In time, legislators should get involved."


Why Artificial Intelligence Should Be on the Menu this Season – FSR magazine

The perfect blend of AI collaboration needs workers to focus on the tasks where they excel.

Faced with the business impact of one of the largest health crises to date, restaurants of all sizes are at a pivotal moment in which every decision, short term and long term, counts. For their businesses to survive, restaurant owners have had to act fast, rethinking operations and introducing pandemic-related initiatives.

Watching the world's largest chains, all the way down to the local mom-and-pops, become innovators in such extreme times has shown the industry's tenacity and survival instinct, even with all the odds stacked against them. None of these initiatives would be possible without technology as the driving factor.

Why AI is on the Menu This Season

A recent Dragontail Systems survey found that 70 percent of respondents would be more comfortable with delivery if they were able to monitor their order's preparation from start to finish. Consumers want to be at the forefront of their meal's creation: they don't want to cook it, but they do want to know it was prepared in a safe environment and delivered hot and fresh to their door.

Aside from AI's role on the back end, helping with preparation-time estimation and driver scheduling, the technology is now being used in cameras, for example, which share real-time images with consumers so that they can be sure their orders are handled with care. Amid the pandemic, this means making sure that gloves and masks are used during preparation and that workspaces are properly sanitized.

It is clear that AI is already radically altering how work gets done in and out of the kitchen. Fearmongers often tout AI's ability to automate processes and make better decisions faster than humans can, but restaurants that deploy it mainly to displace employees will see only short-term productivity gains.

The perfect blend of AI collaboration needs workers to focus on the tasks where they excel, like customer service, so that the human element of the experience is never lost, only augmented.

AI on the Back-End

Ask any store or shift manager how they feel about workforce scheduling, and almost none will say it's their favorite part of the job. It's a catch-22: even when it's done, it's never perfect. However, when AI is in charge, everything looks different.

Parameters such as roles in the restaurant, peak days and hours, special events such as a presidential debate, overtime, seniority, skills, days off and more can be easily tracked. Managers not only save time by handing off this daunting task, but also allow the best decisions to be made for optimal restaurant efficiency.

Another aspect is order prioritization: by nature, most kitchens and restaurants prepare meals on a FIFO (first-in, first-out) basis. When using AI that enhances kitchen prioritization, for example, cooks are informed when to cook an order, ensuring that there are actually drivers available to deliver it to the customer in a timely manner.

Delivery management then allows drivers to make more deliveries per hour just by following the system's decisions, which improve and optimize the dispatching functionality.

The Birth of the Pandemic Intelligent Kitchen/Store

With the pandemic, our awareness of sanitation and cleanliness rose dramatically, and the demand for solutions came with it. AI cameras give customers exactly that: a real-time, never-before-seen view inside the kitchen to monitor how their order is being prepped, managed, and delivered.

AI also comes in handy as restaurants shift away from dine-in toward take-out and drive-thru. When a customer orders online and picks the order up in their car, an AI camera can detect the license plate number, in addition to the customer's location (phone GPS), when the car enters the drive-thru area, so a runner from the restaurant can provide faster service.

In addition, the new concept of contactless menus, where the whole menu is online with a quick scan of a QR code, is another element gaining popularity during the pandemic. The benefits go beyond minimizing contact with physical menus; when a restaurant implements a smart online menu, it can collect data and offer personalized suggestions based on customers' favorite foods, food-and-drink combos, weather-based recommendations, personalized upselling and cross-selling, and more, all powered by AI.

Restaurants Can No Longer Afford an Aversion to Technology

Challenges associated with technology, including implementation and long roadmaps, are fading away: most technology providers offer plug-and-play products or services, and most work on a SaaS model. This means there's no commitment, the products are easy to use, and they integrate seamlessly with the POS.

Restaurants don't have to make a big investment to reap the benefits technology brings; taking small steps that slowly improve restaurant operations and customer experience can still lead to increased growth and higher profit margins, especially during the pandemic when money is tight.

Technology enhances the experience, giving consumers a reason to keep ordering from their favorite places at a time when the stakes have never been so high, and the competition has never been as fierce. The pandemic is far from over, but the changes we are seeing will be here for a lifetime. That's why it is so important to leverage technology and AI now in order to see improvements in customer satisfaction and restaurant efficiency in the long term.

See the article here:
Why Artificial Intelligence Should Be on the Menu this Season - FSR magazine

Machine learning with less than one example – TechTalks

Less-than-one-shot learning enables machine learning algorithms to classify N labels with less than N training examples.

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

If I told you to imagine something between a horse and a bird, say, a flying horse, would you need to see a concrete example? Such a creature does not exist, but nothing prevents us from using our imagination to create one: the Pegasus.

The human mind has all kinds of mechanisms to create new concepts by combining abstract and concrete knowledge it has of the real world. We can imagine existing things that we might have never seen (a horse with a long neck: a giraffe), as well as things that do not exist in real life (a winged serpent that breathes fire: a dragon). This cognitive flexibility allows us to learn new things with few, and sometimes no, new examples.

In contrast, machine learning and deep learning, the current leading fields of artificial intelligence, are known to require many examples to learn new tasks, even when they are related to things they already know.

Overcoming this challenge has led to a host of research work and innovation in machine learning. And although we are still far from creating artificial intelligence that can replicate the brain's capacity for understanding, the progress in the field is remarkable.

For instance, transfer learning is a technique that enables developers to fine-tune an artificial neural network for a new task without the need for many training examples. Few-shot and one-shot learning enable a machine learning model trained on one task to perform a related task with a single or very few new examples. For instance, if you have an image classifier trained to detect volleyballs and soccer balls, you can use one-shot learning to add basketball to the list of classes it can detect.

A new technique dubbed less-than-one-shot learning (or LO-shot learning), recently developed by AI scientists at the University of Waterloo, takes one-shot learning to the next level. The idea behind LO-shot learning is that to train a machine learning model to detect N classes, you need fewer than N samples, less than one per class. The technique, introduced in a paper published on the arXiv preprint server, is still in its early stages but shows promise and can be useful in various scenarios where there is not enough data or there are too many classes.

The LO-shot learning technique proposed by the researchers applies to the k-nearest neighbors (k-NN) machine learning algorithm. k-NN can be used for both classification (determining the category of an input) and regression (predicting the outcome of an input) tasks. But for the sake of this discussion, we'll stick to classification.

As the name implies, k-NN classifies input data by comparing it to its k nearest neighbors (k is an adjustable parameter). Say you want to create a k-NN machine learning model that classifies hand-written digits. First you provide it with a set of labeled images of digits. Then, when you provide the model with a new, unlabeled image, it will determine its class by looking at its nearest neighbors.

For instance, if you set k to 5, the machine learning model will find the five most similar digit photos for each new input. If, say, three of them belong to the class 7, it will classify the image as the digit seven.
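That majority-vote step can be sketched in a few lines. This is a minimal illustration, not code from the paper; the 2-D points and labels are hypothetical stand-ins for digit-image features:

```python
import numpy as np
from collections import Counter

def knn_classify(x, X_train, y_train, k=5):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every sample
    nearest = np.argsort(dists)[:k]               # indices of the k closest
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]             # most frequent class wins

# Toy data: two clusters standing in for images of "3" and "7".
X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y_train = ["3", "3", "3", "7", "7", "7"]

print(knn_classify(np.array([5, 5.5]), X_train, y_train, k=5))  # → 7
```

With k=5 the five nearest neighbors of the query include all three "7" samples, so the vote resolves to "7" even though two "3" samples also make the cut.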

k-NN is an instance-based machine learning algorithm. As you provide it with more labeled examples of each class, its accuracy improves, but its performance degrades, because each new sample adds new comparison operations.

In their LO-shot learning paper, the researchers showed that you can achieve accurate results with k-NN while providing fewer examples than there are classes. "We propose 'less than one'-shot learning (LO-shot learning), a setting where a model must learn N new classes given only M < N examples, less than one example per class," the AI researchers write. "At first glance, this appears to be an impossible task, but we both theoretically and empirically demonstrate feasibility."

The classic k-NN algorithm provides hard labels, which means for every input, it provides exactly one class to which it belongs. Soft labels, on the other hand, provide the probability that an input belongs to each of the output classes (e.g., there's a 20% chance it's a 2, a 70% chance it's a 5, and a 10% chance it's a 3).
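The difference is easy to see in code. A short illustrative sketch (the probabilities are the hypothetical ones from the example above):

```python
# A hard label commits to exactly one class.
hard_label = "5"

# A soft label is a probability distribution over all classes.
soft_label = {"2": 0.20, "5": 0.70, "3": 0.10}

# The distribution must sum to 1, and the hard label can always be
# recovered from a soft one by taking the most probable class.
assert abs(sum(soft_label.values()) - 1.0) < 1e-9
recovered = max(soft_label, key=soft_label.get)
print(recovered)  # → 5
```

The reverse is not true: a hard label throws away the 20% and 10% mass, and it is exactly that extra information that LO-shot learning exploits.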

In their work, the AI researchers at the University of Waterloo explored whether they could use soft labels to generalize the capabilities of the k-NN algorithm. The proposition of LO-shot learning is that soft label prototypes should allow the machine learning model to classify N classes with less than N labeled instances.

The technique builds on previous work the researchers had done on soft labels and data distillation. "Dataset distillation is a process for producing small synthetic datasets that train models to the same accuracy as training them on the full training set," Ilia Sucholutsky, co-author of the paper, told TechTalks. "Before soft labels, dataset distillation was able to represent datasets like MNIST using as few as one example per class. I realized that adding soft labels meant I could actually represent MNIST using less than one example per class."

MNIST is a database of images of handwritten digits often used in training and testing machine learning models. Sucholutsky and his colleague Matthias Schonlau managed to achieve above-90 percent accuracy on MNIST with just five synthetic examples on the convolutional neural network LeNet.

"That result really surprised me, and it's what got me thinking more broadly about this LO-shot learning setting," Sucholutsky said.

Basically, LO-shot uses soft labels to create new classes by partitioning the space between existing classes.

In the paper's example, there are two instances to tune the machine learning model (shown with black dots). A classic k-NN algorithm would split the space between the two dots between the two classes. But the soft-label prototype k-NN (SLaPkNN) algorithm, as the LO-shot learning model is called, creates a new space between the two classes (the green area), which represents a new label (think horse with wings). Here we have achieved N classes with N-1 samples.
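The idea can be sketched with two prototypes carrying soft labels over three classes. This is a simplified illustration of the SLaPkNN principle under assumed label values and a simple inverse-distance weighting, not the paper's exact formulation:

```python
import numpy as np

# Two prototypes in a 1-D feature space (hypothetical positions).
prototypes = np.array([[0.0], [1.0]])

# Each prototype's soft label assigns some probability to a middle
# class (index 1) that has NO dedicated sample of its own.
soft_labels = np.array([
    [0.6, 0.4, 0.0],   # mostly class 0, partly class 1
    [0.0, 0.4, 0.6],   # mostly class 2, partly class 1
])

def slapknn_predict(x, prototypes, soft_labels):
    """Sum soft labels weighted by inverse distance; pick the top class."""
    d = np.linalg.norm(prototypes - x, axis=1)
    w = 1.0 / (d + 1e-9)          # closer prototypes weigh more
    scores = w @ soft_labels      # aggregate class probabilities
    return int(np.argmax(scores))

print(slapknn_predict(np.array([0.0]), prototypes, soft_labels))  # → 0
print(slapknn_predict(np.array([0.5]), prototypes, soft_labels))  # → 1
print(slapknn_predict(np.array([1.0]), prototypes, soft_labels))  # → 2
```

Near either prototype its dominant class wins, but at the midpoint the two prototypes' shared probability mass for class 1 adds up and overtakes both: three classes from two samples.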

In the paper, the researchers show that LO-shot learning can be scaled up to detect 3N-2 classes using N labels and even beyond.

In their experiments, Sucholutsky and Schonlau found that with the right configurations for the soft labels, LO-shot machine learning can provide reliable results even when you have noisy data.

"I think LO-shot learning can be made to work from other sources of information as well, similar to how many zero-shot learning methods do, but soft labels are the most straightforward approach," Sucholutsky said, adding that there are already several methods that can find the right soft labels for LO-shot machine learning.

While the paper displays the power of LO-shot learning with the k-NN classifier, Sucholutsky says the technique applies to other machine learning algorithms as well. "The analysis in the paper focuses specifically on k-NN just because it's easier to analyze, but it should work for any classification model that can make use of soft labels," Sucholutsky said. The researchers will soon release a more comprehensive paper that shows the application of LO-shot learning to deep learning models.

"For instance-based algorithms like k-NN, the efficiency improvement of LO-shot learning is quite large, especially for datasets with a large number of classes," Sucholutsky said. "More broadly, LO-shot learning is useful in any kind of setting where a classification algorithm is applied to a dataset with a large number of classes, especially if there are few, or no, examples available for some classes. Basically, most settings where zero-shot learning or few-shot learning are useful, LO-shot learning can also be useful."

For instance, a computer vision system that must identify thousands of objects from images and video frames can benefit from this machine learning technique, especially if there are no examples available for some of the objects. Another application would be tasks that naturally have soft-label information, like natural language processing systems that perform sentiment analysis (e.g., a sentence can be both sad and angry simultaneously).

In their paper, the researchers describe less-than-one-shot learning as a viable new direction in machine learning research.

We believe that creating a soft-label prototype generation algorithm that specifically optimizes prototypes for LO-shot learning is an important next step in exploring this area, they write.

"Soft labels have been explored in several settings before. What's new here is the extreme setting in which we explore them," Sucholutsky said. "I think it just wasn't a directly obvious idea that there is another regime hiding between one-shot and zero-shot learning."

Continue reading here:
Machine learning with less than one example - TechTalks

Inside the Army’s futuristic test of its battlefield artificial intelligence in the desert – C4ISRNet

YUMA PROVING GROUND, Ariz. After weeks of work in the oppressive Arizona desert heat, the U.S. Army carried out a series of live fire engagements Sept. 23 at Yuma Proving Ground to show how artificial intelligence systems can work together to automatically detect threats, deliver targeting data and recommend weapons responses at blazing speeds.

Set in the year 2035, the engagements were the culmination of Project Convergence 2020, the first in a series of annual demonstrations utilizing next generation AI, network and software capabilities to show how the Army wants to fight in the future.

The Army was able to use a chain of artificial intelligence, software platforms and autonomous systems to take sensor data from all domains, transform it into targeting information, and select the best weapon system to respond to any given threat in just seconds.

Army officials claimed that these AI and autonomous capabilities have shortened the sensor-to-shooter timeline, the time it takes from when sensor data is collected to when a weapon system is ordered to engage, from 20 minutes to 20 seconds, depending on the quality of the network and the number of hops between where the data is collected and its destination.

"We use artificial intelligence and machine learning in several ways out here," Brigadier General Ross Coffman, director of the Army Futures Command's Next Generation Combat Vehicle Cross-Functional Team, told visiting media.

"We used artificial intelligence to autonomously conduct ground reconnaissance, employ sensors and then passed that information back. We used artificial intelligence and aided target recognition and machine learning to train algorithms on identification of various types of enemy forces. So, it was prevalent throughout the last six weeks."

Promethean Fire


The first exercise illustrated how the Army stacked together AI capabilities to automate the sensor-to-shooter pipeline. In that example, the Army used space-based sensors operating in low Earth orbit to take images of the battleground. Those images were downlinked to a TITAN ground station surrogate located at Joint Base Lewis-McChord in Washington, where they were processed and fused by a new system called Prometheus.

Currently under development, Prometheus is an AI system that takes the sensor data ingested by TITAN, fuses it, and identifies targets. The Army received its first Prometheus capability in 2019, although its targeting accuracy is still improving, according to one Army official at Project Convergence. In some engagements, operators were able to send in a drone to confirm potential threats identified by Prometheus.

From there, the targeting data was delivered to a Tactical Assault Kit, a software program that gives operators an overhead view of the battlefield populated with both blue and red forces. As new threats are identified by Prometheus or other systems, that data is automatically entered into the program to show users their location. Specific images and live feeds can be pulled up in the environment as needed.

All of that takes place in just seconds.

Once the Army has its target, it needs to determine the best response. Enter the real star of the show: the FIRES Synchronization to Optimize Responses in Multi-Domain Operations, or FIRESTORM.

What is FIRESTORM? "Simply put, it's a computer brain that recommends the best shooter, updates the common operating picture with the current enemy situation and friendly situation, and missions the effectors that we want to eradicate the enemy on the battlefield," said Coffman.

Army leaders were effusive in praising FIRESTORM throughout Project Convergence. The AI system works within the Tactical Assault Kit. Once new threats are entered into the program, FIRESTORM processes the terrain, available weapons, proximity, number of other threats and more to determine the best firing system to respond to that given threat. Operators can assess and follow through with the system's recommendations with just a few clicks of the mouse, sending orders to soldiers or weapons systems within seconds of identifying a threat.

Just as important, FIRESTORM provides critical target deconfliction, ensuring that multiple weapons systems aren't redundantly firing on the same threat. Right now, that sort of deconfliction would have to take place over a phone call between operators. FIRESTORM speeds up that process and eliminates any potential misunderstandings.

In that first engagement, FIRESTORM recommended the use of an Extended-Range Cannon Artillery. Operators approved the algorithm's choice, and promptly the cannon fired a projectile at the target located 40 kilometers away. The process from identifying the target to sending those orders happened faster than it took the projectile to reach the target.

Perhaps most surprising is how quickly FIRESTORM was integrated into Project Convergence.

"This computer program has been worked on in New Jersey for a couple years. It's not a program of record. This is something that they brought to my attention in July of last year, but it needed a little bit of work. So we put effort, we put scientists and we put some money against it," said Coffman. "The way we used it is, as enemy targets were identified on the battlefield, FIRESTORM quickly paired those targets with the best shooter in position to put effects on it. This is happening faster than any human could execute. It is absolutely an amazing technology."

Dead Center

Prometheus and FIRESTORM weren't the only AI capabilities on display at Project Convergence.

In other scenarios, an MQ-1C Gray Eagle drone was able to identify and target a threat using the on-board Dead Center payload. With Dead Center, the Gray Eagle was able to process the sensor data it was collecting, identifying a threat on its own without having to send the raw data back to a command post for processing and target identification. The drone was also equipped with the Maven Smart System and Algorithmic Inference Platform, a product created by Project Maven, a major Department of Defense effort to use AI for processing full motion video.

According to one Army officer, the capabilities of the Maven Smart System and Dead Center overlap, but placing both on the modified Gray Eagle at Project Convergence helped them to see how they compared.

With all of the AI engagements, the Army ensured there was a human in the loop to provide oversight of the algorithms' recommendations. When asked how the Army was implementing the Department of Defense's principles of ethical AI use adopted earlier this year, Coffman pointed to the human barrier between AI systems and lethal decisions.

"So obviously the technology exists to remove the human, right, the technology exists, but the United States Army, an ethics-based organization, that's not going to remove a human from the loop to make decisions of life or death on the battlefield, right? We understand that," explained Coffman. "The artificial intelligence identified geo-located enemy targets. A human then said, 'Yes, we want to shoot at that target.'"

Originally posted here:
Inside the Army's futuristic test of its battlefield artificial intelligence in the desert - C4ISRNet

Artificial intelligence: threats and opportunities | News – EU News

The increasing reliance on AI systems also poses potential risks.

Underuse of AI is considered a major threat: missed opportunities for the EU could mean poor implementation of major programmes, such as the EU Green Deal, loss of competitive advantage to other parts of the world, economic stagnation and poorer possibilities for people. Underuse could derive from public and business mistrust in AI, poor infrastructure, lack of initiative, low investment or, since AI's machine learning is dependent on data, from fragmented digital markets.

Overuse can also be problematic: investing in AI applications that prove not to be useful or applying AI to tasks for which it is not suited, for example using it to explain complex societal issues.

An important challenge is to determine who is responsible for damage caused by an AI-operated device or service. In an accident involving a self-driving car, should the damage be covered by the owner, the car manufacturer or the programmer?

If the producer were absolutely free of accountability, there might be no incentive to provide a good product or service, and it could damage people's trust in the technology; but regulations could also be too strict and stifle innovation.

The results that AI produces depend on how it is designed and what data it uses. Both design and data can be intentionally or unintentionally biased. For example, some important aspects of an issue might not be programmed into the algorithm, or might be programmed to reflect and replicate structural biases. In addition, the use of numbers to represent complex social reality could make the AI seem factual and precise when it isn't. This is sometimes referred to as mathwashing.

If not done properly, AI could lead to decisions influenced by data on ethnicity, sex or age when hiring or firing, offering loans, or even in criminal proceedings.

AI could severely affect the right to privacy and data protection. It can, for example, be used in face recognition equipment or for online tracking and profiling of individuals. In addition, AI enables merging pieces of information a person has given into new data, which can lead to results the person would not expect.

It can also present a threat to democracy; AI has already been blamed for creating online echo chambers based on a person's previous online behaviour, displaying only content a person would like, instead of creating an environment for pluralistic, equally accessible and inclusive public debate. It can even be used to create extremely realistic fake video, audio and images, known as deepfakes, which can present financial risks, harm reputation, and challenge decision making. All of this could lead to separation and polarisation in the public sphere and manipulate elections.

AI could also play a role in harming freedom of assembly and protest as it could track and profile individuals linked to certain beliefs or actions.

Use of AI in the workplace is expected to result in the elimination of a large number of jobs. Though AI is also expected to create new and better jobs, education and training will have a crucial role in preventing long-term unemployment and ensuring a skilled workforce.

Read more:
Artificial intelligence: threats and opportunities | News - EU News

The Army just conducted a massive test of its battlefield artificial intelligence in the desert – DefenseNews.com

YUMA PROVING GROUND, Ariz. After weeks of work in the oppressive Arizona desert heat, the U.S. Army carried out a series of live fire engagements Sept. 23 at Yuma Proving Ground to show how artificial intelligence systems can work together to automatically detect threats, deliver targeting data and recommend weapons responses at blazing speeds.

Set in the year 2035, the engagements were the culmination of Project Convergence 2020, the first in a series of annual demonstrations utilizing next generation AI, network and software capabilities to show how the Army wants to fight in the future.

The Army was able to use a chain of artificial intelligence, software platforms and autonomous systems to take sensor data from all domains, transform it into targeting information, and select the best weapon system to respond to any given threat in just seconds.

Army officials claimed that these AI and autonomous capabilities have shorted the sensor to shooter timeline the time it takes from when sensor data is collected to when a weapon system is ordered to engaged from 20 minutes to 20 seconds, depending on the quality of the network and the number of hops between where its collected and its destination.

We use artificial intelligence and machine learning in several ways out here, Brigadier General Ross Coffman, director of the Army Futures Commands Next Generation Combat Vehicle Cross-Functional Team, told visiting media.

We used artificial intelligence to autonomously conduct ground reconnaissance, employ sensors and then passed that information back. We used artificial intelligence and aided target recognition and machine learning to train algorithms on identification of various types of enemy forces. So, it was prevalent throughout the last six weeks.

Sign up for our Training & Sim Report Get the latest news in training and simulation technologies

Subscribe

Enter a valid email address (please select a country) United States United Kingdom Afghanistan Albania Algeria American Samoa Andorra Angola Anguilla Antarctica Antigua and Barbuda Argentina Armenia Aruba Australia Austria Azerbaijan Bahamas Bahrain Bangladesh Barbados Belarus Belgium Belize Benin Bermuda Bhutan Bolivia Bosnia and Herzegovina Botswana Bouvet Island Brazil British Indian Ocean Territory Brunei Darussalam Bulgaria Burkina Faso Burundi Cambodia Cameroon Canada Cape Verde Cayman Islands Central African Republic Chad Chile China Christmas Island Cocos (Keeling) Islands Colombia Comoros Congo Congo, The Democratic Republic of The Cook Islands Costa Rica Cote D'ivoire Croatia Cuba Cyprus Czech Republic Denmark Djibouti Dominica Dominican Republic Ecuador Egypt El Salvador Equatorial Guinea Eritrea Estonia Ethiopia Falkland Islands (Malvinas) Faroe Islands Fiji Finland France French Guiana French Polynesia French Southern Territories Gabon Gambia Georgia Germany Ghana Gibraltar Greece Greenland Grenada Guadeloupe Guam Guatemala Guinea Guinea-bissau Guyana Haiti Heard Island and Mcdonald Islands Holy See (Vatican City State) Honduras Hong Kong Hungary Iceland India Indonesia Iran, Islamic Republic of Iraq Ireland Israel Italy Jamaica Japan Jordan Kazakhstan Kenya Kiribati Korea, Democratic People's Republic of Korea, Republic of Kuwait Kyrgyzstan Lao People's Democratic Republic Latvia Lebanon Lesotho Liberia Libyan Arab Jamahiriya Liechtenstein Lithuania Luxembourg Macao Macedonia, The Former Yugoslav Republic of Madagascar Malawi Malaysia Maldives Mali Malta Marshall Islands Martinique Mauritania Mauritius Mayotte Mexico Micronesia, Federated States of Moldova, Republic of Monaco Mongolia Montserrat Morocco Mozambique Myanmar Namibia Nauru Nepal Netherlands Netherlands Antilles New Caledonia New Zealand Nicaragua Niger Nigeria Niue Norfolk Island Northern Mariana Islands Norway Oman Pakistan Palau Palestinian Territory, Occupied Panama Papua New Guinea 
Paraguay Peru Philippines Pitcairn Poland Portugal Puerto Rico Qatar Reunion Romania Russian Federation Rwanda Saint Helena Saint Kitts and Nevis Saint Lucia Saint Pierre and Miquelon Saint Vincent and The Grenadines Samoa San Marino Sao Tome and Principe Saudi Arabia Senegal Serbia and Montenegro Seychelles Sierra Leone Singapore Slovakia Slovenia Solomon Islands Somalia South Africa South Georgia and The South Sandwich Islands Spain Sri Lanka Sudan Suriname Svalbard and Jan Mayen Swaziland Sweden Switzerland Syrian Arab Republic Taiwan, Province of China Tajikistan Tanzania, United Republic of Thailand Timor-leste Togo Tokelau Tonga Trinidad and Tobago Tunisia Turkey Turkmenistan Turks and Caicos Islands Tuvalu Uganda Ukraine United Arab Emirates United Kingdom United States United States Minor Outlying Islands Uruguay Uzbekistan Vanuatu Venezuela Viet Nam Virgin Islands, British Virgin Islands, U.S. Wallis and Futuna Western Sahara Yemen Zambia Zimbabwe

Thanks for signing up!

By giving us your email, you are opting in to the Early Bird Brief.

The first exercise featured is informative of how the Army stacked together AI capabilities to automate the sensor to shooter pipeline. In that example, the Army used space-based sensors operating in low Earth orbit to take images of the battleground. Those images were downlinked to a TITAN ground station surrogate located at Joint Base Lewis McCord in Washington, where they were processed and fused by a new system called Prometheus.

Currently under development, Prometheus is an AI system that takes the sensor data ingested by TITAN, fuses it, and identifies targets. The Army received its first Prometheus capability in 2019, although its targeting accuracy is still improving, according to one Army official at Project Convergence. In some engagements, operators were able to send in a drone to confirm potential threats identified by Prometheus.

From there, the targeting data was delivered to the Tactical Assault Kit, a software program that gives operators an overhead view of the battlefield populated with both blue and red forces. As new threats are identified by Prometheus or other systems, that data is automatically entered into the program to show users their locations. Specific images and live feeds can be pulled up in the environment as needed.

All of that takes place in just seconds.
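The pipeline described above, from downlinked imagery through fusion to an automatically updated common operating picture, can be sketched in miniature. Everything here (the `Detection` fields, the toy fusion step, the class names) is a hypothetical illustration for the shape of the data flow, not the Army's actual software:

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    """A hypothetical fused detection, as a Prometheus-like system might emit."""
    target_id: str
    lat: float
    lon: float
    confidence: float

@dataclass
class CommonOperatingPicture:
    """Stand-in for the Tactical Assault Kit's shared battlefield view."""
    threats: dict = field(default_factory=dict)

    def update(self, detections):
        # New threats are entered automatically so every operator sees them.
        for d in detections:
            self.threats[d.target_id] = d

def fuse_imagery(images):
    """Toy fusion step: pretend each downlinked image yields one detection."""
    return [Detection(f"tgt-{i}", img["lat"], img["lon"], img["score"])
            for i, img in enumerate(images)]

# Simulated downlinked satellite imagery
images = [{"lat": 47.1, "lon": -122.5, "score": 0.91},
          {"lat": 47.2, "lon": -122.4, "score": 0.64}]

cop = CommonOperatingPicture()
cop.update(fuse_imagery(images))
print(sorted(cop.threats))  # → ['tgt-0', 'tgt-1']
```

The real systems fuse many sensor feeds and run the detection step with machine learning models; the sketch keeps only the structure of ingest, fuse, and publish.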

Once the Army has its target, it needs to determine the best response. Enter the real star of the show: the FIRES Synchronization to Optimize Responses in Multi-Domain Operations, or FIRESTORM.

What is FIRESTORM? "Simply put, it's a computer brain that recommends the best shooter, updates the common operating picture with the current enemy situation and friendly situation, and missions the effectors that we want to eradicate the enemy on the battlefield," said Coffman.

Army leaders were effusive in praising FIRESTORM throughout Project Convergence. The AI system works within the Tactical Assault Kit. Once new threats are entered into the program, FIRESTORM processes the terrain, available weapons, proximity, the number of other threats and more to determine the best firing system to respond to a given threat. Operators can assess and follow through with the system's recommendations with just a few clicks of the mouse, sending orders to soldiers or weapons systems within seconds of identifying a threat.

Just as important, FIRESTORM provides critical target deconfliction, ensuring that multiple weapons systems aren't redundantly firing on the same threat. Right now, that sort of deconfliction would have to take place over a phone call between operators. FIRESTORM speeds up that process and eliminates any potential misunderstandings.
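The pairing-plus-deconfliction behavior described above can be approximated with a simple greedy assignment: score the candidate shooters for each threat, and never assign the same shooter twice. The scoring criteria (range and priority) and all names below are illustrative assumptions, not FIRESTORM's actual logic:

```python
def recommend_shooters(threats, shooters):
    """Greedy sketch of weapon-target pairing with deconfliction:
    each threat gets at most one shooter, and no shooter is reused."""
    assignments = {}
    used = set()
    # Handle the highest-priority threats first.
    for threat in sorted(threats, key=lambda t: -t["priority"]):
        candidates = [s for s in shooters
                      if s["name"] not in used
                      and s["range_km"] >= threat["distance_km"]]
        if candidates:
            # Prefer the shortest-ranged weapon that can still reach the
            # target, leaving longer-ranged systems free for distant threats.
            best = min(candidates, key=lambda s: s["range_km"])
            assignments[threat["id"]] = best["name"]
            used.add(best["name"])
    return assignments

threats = [{"id": "T1", "distance_km": 40, "priority": 2},
           {"id": "T2", "distance_km": 12, "priority": 1}]
shooters = [{"name": "ERCA", "range_km": 70},
            {"name": "mortar", "range_km": 15}]

print(recommend_shooters(threats, shooters))
# → {'T1': 'ERCA', 'T2': 'mortar'}
```

A real system would also weigh terrain, munition effects, and readiness; the sketch shows only the shape of the problem, which is a constrained matching of threats to effectors.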

In that first engagement, FIRESTORM recommended the use of an Extended-Range Cannon Artillery. Operators approved the algorithm's choice, and the cannon promptly fired a projectile at the target located 40 kilometers away. The process from identifying the target to sending those orders happened faster than it took the projectile to reach the target.

Perhaps most surprising is how quickly FIRESTORM was integrated into Project Convergence.

"This computer program has been worked on in New Jersey for a couple of years. It's not a program of record. This is something that they brought to my attention in July of last year, but it needed a little bit of work. So we put effort, we put scientists and we put some money against it," said Coffman. "The way we used it is, as enemy targets were identified on the battlefield, FIRESTORM quickly paired those targets with the best shooter in position to put effects on it. This is happening faster than any human could execute. It is absolutely an amazing technology."

Prometheus and FIRESTORM weren't the only AI capabilities on display at Project Convergence.

In other scenarios, an MQ-1C Gray Eagle drone was able to identify and target a threat using the onboard Dead Center payload. With Dead Center, the Gray Eagle was able to process the sensor data it was collecting, identifying a threat on its own without having to send the raw data back to a command post for processing and target identification. The drone was also equipped with the Maven Smart System and Algorithmic Inference Platform, a product created by Project Maven, a major Department of Defense effort to use AI for processing full-motion video.

According to one Army officer, the capabilities of the Maven Smart System and Dead Center overlap, but placing both on the modified Gray Eagle at Project Convergence helped them to see how they compared.

With all of the AI engagements, the Army ensured there was a human in the loop to provide oversight of the algorithms' recommendations. When asked how the Army was implementing the Department of Defense's principles of ethical AI use adopted earlier this year, Coffman pointed to the human barrier between AI systems and lethal decisions.

"So obviously the technology exists to remove the human, right, the technology exists, but the United States Army, an ethics-based organization, is not going to remove a human from the loop to make decisions of life or death on the battlefield, right? We understand that," explained Coffman. "The artificial intelligence identified geo-located enemy targets. A human then said, 'Yes, we want to shoot at that target.'"

The rest is here:
The Army just conducted a massive test of its battlefield artificial intelligence in the desert - DefenseNews.com

Regina Barzilay wins $1M Association for the Advancement of Artificial Intelligence Squirrel AI award – MIT News

For more than 100 years, Nobel Prizes have been given out annually to recognize breakthrough achievements in chemistry, literature, medicine, peace, and physics. As these disciplines continue to shape society, newer fields like artificial intelligence (AI) and robotics have also begun to profoundly reshape the world.

In recognition of this, the world's largest AI society, the Association for the Advancement of Artificial Intelligence (AAAI), announced today the winner of its new Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity, a $1 million award given to honor individuals whose work in the field has had a transformative impact on society.

The recipient, Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science at MIT and a member of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), is being recognized for her work developing machine learning models to develop antibiotics and other drugs, and to detect and diagnose breast cancer at early stages.

In February, AAAI will officially present Barzilay with the award, which comes with an associated prize of $1 million provided by the online education company Squirrel AI.

"Only world-renowned recognitions, such as the Association for Computing Machinery's A.M. Turing Award and the Nobel Prize, carry monetary rewards at the million-dollar level," says AAAI awards committee chair Yolanda Gil. "This award aims to be unique in recognizing the positive impact of artificial intelligence for humanity."

Barzilay has conducted research on a range of topics in computer science, from explainable machine learning to deciphering dead languages. Since surviving breast cancer in 2014, she has increasingly focused her efforts on health care. She created algorithms for early breast cancer diagnosis and risk assessment that have been tested at multiple hospitals around the globe, including in Sweden, Taiwan, and at Boston's Massachusetts General Hospital. She is now working with breast cancer organizations such as Institute Protea in Brazil to make her diagnostic tools available to underprivileged populations around the world. (She realized from doing this work that, had a system like hers existed at the time, her doctors could actually have detected her cancer two or three years earlier.)

In parallel, she has been working on developing machine learning models for drug discovery: with collaborators she's created models for selecting molecule candidates for therapeutics that have been able to speed up drug development, and last year she helped discover a new antibiotic called halicin that was shown to kill many species of antibiotic-resistant, disease-causing bacteria, including Acinetobacter baumannii and Clostridium difficile (C. diff).

"Through my own life experience, I came to realize that we can create technology that can alleviate human suffering and change our understanding of diseases," says Barzilay, who is also a member of the Koch Institute for Integrative Cancer Research. "I feel lucky to have found collaborators who share my passion and who have helped me realize this vision."

Barzilay also serves as a member of MIT's Institute for Medical Engineering and Science, and as faculty co-lead for MIT's Abdul Latif Jameel Clinic for Machine Learning in Health. One of the Jameel Clinic's most recent efforts is AI Cures, a cross-institutional initiative focused on developing affordable Covid-19 antivirals.

"Regina has made truly-changing breakthroughs in imaging breast cancer and predicting the medicinal activity of novel chemicals," says MIT professor of biology Phillip Sharp, a Nobel laureate who has served as director of both the McGovern Institute for Brain Research and the MIT Center for Cancer Research, predecessor to the Koch Institute. "I am honored to have as a colleague someone who is such a pioneer in using deeply creative machine learning methods to transform the fields of health care and biological science."

Barzilay joined the MIT faculty in 2003 after earning her undergraduate degree at Ben-Gurion University of the Negev in Israel and her PhD at Columbia University. She is also the recipient of a MacArthur "genius grant," the National Science Foundation Career Award, a Microsoft Faculty Fellowship, multiple best-paper awards in her field, and MIT's Jamieson Award for excellence in teaching.

"We believe AI advances will benefit a great many fields, from health care and education to smart cities and the environment," says Derek Li, founder and chairman of Squirrel AI. "We believe that Dr. Barzilay and other future awardees will inspire the AI community to continue to contribute to and advance AI's impact on the world."

AAAI's Gil says the organization was very excited to partner with Squirrel AI for this new award to recognize the positive impacts of artificial intelligence to protect, enhance, and improve human life in meaningful ways. With more than 300 elected fellows and 6,000 members from 50 countries across the globe, AAAI is the world's largest scientific society devoted to artificial intelligence. Its officers have included many AI pioneers, including Allen Newell and John McCarthy. AAAI confers several influential AI awards, including the Feigenbaum Prize, the Newell Award (jointly with ACM), and the Engelmore Award.

"Regina has been a trailblazer in the field of health care AI by asking the important questions about how we can use machine learning to treat and diagnose diseases," says Daniela Rus, director of CSAIL and the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science. "She has been both a brilliant researcher and a devoted educator, and all of us at CSAIL are so inspired by her work and proud to have her as a colleague."

See the rest here:
Regina Barzilay wins $1M Association for the Advancement of Artificial Intelligence Squirrel AI award - MIT News

Artificial Intelligence in the C-Store – CSPDailyNews.com

CHICAGO -- Artificial intelligence (AI) is another important component of modern loyalty, says Sastry Penumarthy, co-founder and vice president of strategy for Punchh, a loyalty firm based in San Mateo, Calif. Humans do not have the time or ability to sift through waves of data to know when and how to communicate with and incentivize specific customers, and AI can make that process more seamless and more personalized.

"AI will basically let me predict the customers that will be of the highest value to me over the next three months, let me look at all the offers and let me find the right offer for each of these customers," says Penumarthy. He says Punchh is launching an AI-driven program like that with select customers over the next few months.

AI also helps companies identify and craft offers for customers who are visiting or purchasing less, as opposed to the highest-value customers. It can also identify the best time of day or day of the week to send an offer or a message to a loyalty member. "So, for example, if you are in the habit of checking email at lunchtime, it's useful to know that information. But I shouldn't ask you what you prefer; I learn from your behavior," he says.
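Learning a member's preferred contact time from behavior rather than asking, as Penumarthy describes, can be as simple as tallying when that member has opened past messages. This is a minimal sketch over assumed data, not Punchh's implementation:

```python
from collections import Counter
from datetime import datetime

def best_send_hour(open_events):
    """Pick the hour of day at which this member most often opens
    messages, learned from observed open timestamps."""
    hours = Counter(ts.hour for ts in open_events)
    return hours.most_common(1)[0][0]

# Hypothetical open timestamps for one loyalty member: mostly lunchtime.
opens = [datetime(2020, 9, day, hour) for day, hour in
         [(1, 12), (2, 12), (3, 8), (4, 12), (5, 19)]]

print(best_send_hour(opens))  # → 12
```

A production system would layer a predictive model over many such behavioral signals (visit frequency, basket value, channel preference); the tally captures only the core idea of inferring preference from observed behavior.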


But the most important reason to lean on AI, says Penumarthy, is its ability to connect consumer data collected through the loyalty program to other systems, such as marketing.

Penumarthy says Punchh is working on a way to extend and automate relationships between retailers, consumers and consumer packaged goods (CPG) brands.

Today, when retailers partner with CPG brands for loyalty promotions, it can take two to three months for the retailer to understand the effectiveness of the program and report back to the CPG company. With AI, both the CPG company and the retailer have a better idea of how much money and time to invest into the program.

Penumarthy says loyalty used to be much more conventional, but times have changed. People no longer use the same landline phone numbers they used to sign up for these programs years ago. Many customers no longer carry newspaper coupon clippings when shopping at the grocery store. "I want the offers to be right there on the phone when I'm placing the order. And they can't do that unless it's personalized," says Penumarthy.

Jones of Casey's, which has partnered with Punchh, says understanding customer shopping patterns using data mining and AI is important, and that such tools will be part of Casey's offer creation and targeting capabilities in the future.



View post:
Artificial Intelligence in the C-Store - CSPDailyNews.com