Industry Voices: AI doesn’t have to replace doctors to produce better health outcomes – FierceHealthcare

Americans encounter some form of artificial intelligence and machine learning technology in nearly every aspect of daily life: We accept Netflix's recommendations on what movie to stream next, enjoy Spotify's curated playlists and take a detour when Waze tells us we can shave eight minutes off our commute.

And it turns out that we're fairly comfortable with this new normal: A survey released last year by Innovative Technology Solutions found that, on a scale of 1 to 10, Americans give their GPS systems an 8.1 trust and satisfaction score, followed closely by a 7.5 for TV and movie streaming services.

But when the stakes are higher, we're not so trusting. Asked whether they would trust an AI doctor to diagnose or treat a medical issue, respondents scored it just a 5.4.

Overall skepticism about medical AI and ML is nothing new. In 2012, we were told that IBM's AI-powered Watson was being trained to recommend treatments for cancer patients. There were claims that the advanced technology could make medicine personalized and tailored to millions of people living with cancer. But in 2018, reports surfaced indicating that the research and technology had fallen short of expectations, leaving users to question the accuracy of Watson's predictive analytics.

RELATED: Investors poured $4B into healthcare AI startups in 2019

Patients have been reluctant to trust medical AI and ML out of fear that the technology would not offer a unique or personalized recommendation based on individual needs. A 2019 piece in Harvard Business Review referenced a survey in which 200 business students were asked to take a free health assessment: 40% of students signed up for the assessment when told their doctor would perform the diagnosis, while only 26% signed up when told a computer would perform the diagnosis.

These concerns are not without basis. Many of the AI and ML approaches used in healthcare today, chosen for simplicity and ease of implementation, strive for performance at the population level by fitting to the characteristics most common among patients. They aim to do well in the general case, failing to serve large groups of patients and individuals with unique health needs. However, this limitation of how AI and ML are being applied is not a limitation of the technology.

If anything, what makes AI and ML exceptional, if done right, is the ability to process huge sets of data comprising a diversity of patients, providers, diseases and outcomes, and to model the fine-grained trends that could have a lasting impact on a patient's diagnosis or treatment options. This ability to use data "in the large" for representative populations and to obtain inferences "in the small" for individual-level decision support is the promise of AI and ML. The whole process might sound impersonal or cookie-cutter, but the reality is that advancements in precision medicine and delivery will make care decisions more data-driven and thus more exact.

Consider a patient choosing a specialist. It's anything but data-driven: they'll search for a provider in-network, or maybe one that is conveniently located, without understanding the potential health outcomes of their choice. The issue is that patients lack the data and information they need to make these informed choices.

RELATED: The unexpected ways AI is impacting the delivery of care, including for COVID-19

That's where machine intelligence comes into play: an AI/ML model that can accurately predict the right treatment, at the right time, by the right provider for a patient could drastically reduce the rate of hospitalizations and emergency room visits.

As an example, research published last month in AJMC looked at claims data from 2 million Medicare beneficiaries between 2017 and 2019 to evaluate the utility of ML in the management of severe respiratory infections in community and post-acute settings. The researchers found that machine intelligence for precision navigation could be used to mitigate infection rates in the post-acute care setting.

Specifically, at-risk individuals who received care at skilled nursing facilities (SNFs) that the technology predicted would be the best choice for them had a relative reduction of 37% for emergent care and 36% for inpatient hospitalizations due to respiratory infections compared to those who received care at non-recommended SNFs.

This advanced technology can comb through and analyze an individual's treatment needs and medical history so that the most accurate recommendations can be made based on that individual's personalized needs and the doctors or facilities available to them. In turn, matching a patient to the optimal provider can drastically improve health outcomes while also lowering the cost of care.

We now have the technology to use machine intelligence to optimize some of the most important decisions in healthcare. The data show results we can trust.

Zeeshan Syed is the CEO and Zahoor Elahi is the COO of Health at Scale.

Read the original post:
Industry Voices: AI doesn't have to replace doctors to produce better health outcomes - FierceHealthcare

The Future of Military Applications of Artificial Intelligence: A Role for Confidence-Building Measures? – Foreign Policy Research Institute

Access the Orbis Fall 2020 issue here

As militaries around the world seek to gain a strategic edge over their adversaries by integrating artificial intelligence (AI) innovations into their arsenals, how can members of the international community effectively reduce the unforeseen risks of this technological competition? We argue that pursuing confidence-building measures (CBMs), a class of information-sharing and transparency-enhancing arrangements that states began using in the Cold War to enhance strategic stability, could offer one model for managing AI-related risk today. Analyzing the conditions that led to early CBMs suggests, however, that such measures are unlikely to succeed today without being adapted to current conditions. This article uses historical analogies to illustrate how, in the absence of combat experience involving novel military technology, it is difficult for states to be certain how these innovations change the implicit rules of warfare. Pursuing international dialogue, in ways that borrow from the Cold War CBM toolkit, may help speed the learning process about the implications of military applications of AI and reduce the risk that states' uncertainty about changes in military technology undermines international security and stability.

Access the article here

Read the original:
The Future of Military Applications of Artificial Intelligence: A Role for Confidence-Building Measures? - Foreign Policy Research Institute

Patent application strategies in the field of artificial intelligence based on examination standards – Lexology

I. Introduction

Artificial intelligence (AI) refers to human-like intelligence implemented by means of ordinary computer programs. With the rapid development of AI technology and the growing recognition of its commercial value, patent applications related to AI have become a hot field: the number of applications keeps rising, and the range of application fields keeps expanding.

This article attempts to provide some patent application strategies in the field of artificial intelligence based on the latest examination standards in China, and to summarize the similarities and differences between the examination standards in this field in China, Japan, Korea, the US and Europe, for reference by patent applicants and patent attorneys.

II. Main laws involved and coping strategies

In China, because a patent application in the field of artificial intelligence involves a computer program, the primary examination focus is whether the application is an eligible object of patent protection; the other examination focus is inventiveness, as provided in Article 22, Paragraph 3 of the Chinese Patent Law.

Figure 1 shows the general examination process of a patent application in the field of artificial intelligence in China.

A patent application in the field of artificial intelligence may be drafted as a product claim or a method claim, and the product claim may be drafted as an eligible subject such as a system, a device, or a storage medium.

Table 1 Forms of drafting of claims

The following description focuses on the latest examination standards of China and coping strategies regarding whether a patent application in the field of artificial intelligence is an eligible object of patent protection and whether it conforms to the provisions on inventiveness.

1. Examination standards and coping strategies regarding an eligible object protected by a patent

1.1 The latest examination standards on eligible object issues

It is provided in Article 25, Paragraph 1, Item (2) of the Chinese Patent Law that no patent right shall be granted for rules and methods for mental activities.

It is provided in the newly amended Guidelines for Examination that if a claim contains a technical feature in addition to an algorithm feature or a commercial rule and method feature, the claim as a whole is not a rule and method of intellectual activity, and the possibility of its being granted a patent right shall not be excluded under Article 25, Paragraph 1, Item (2) of the Patent Law.

Moreover, it is provided in Rule 22, Paragraph 2 of the Implementing Regulations of the Chinese Patent Law that "invention" as mentioned in the Patent Law means any new technical solution relating to a product, a process or an improvement thereof.

Correspondingly, it is provided in the newly amended Guidelines for Examination that if the steps of an algorithm in a claim are closely related to the technical problem to be solved, for example, if the data processed by the algorithm have definite technical meanings in the technical field, and if execution of the algorithm directly reflects a process of solving a technical problem by using natural laws and produces a technical effect, then in general the solution defined in the claim is a technical solution as provided in Article 2, Paragraph 2 of the Patent Law.

1.2 Application strategy for eligible object issues

Patent applications in the field of artificial intelligence can basically be divided into two types according to their scope of application: basic type and applied type. A basic type patent application is one whose algorithm may be widely used in multiple particular fields; an applied type patent application is one whose algorithm is mainly combined with, and applied in, a particular field.

Taking two aspects into account, namely the scope of patent protection and conformity to examination requirements, ways of drafting the two types of patent applications are proposed below for reference.

Table 2 Ways of drafting two types of patent applications

In addition, owing to the development of Internet and big data technology, artificial intelligence is increasingly used in the commercial and financial fields. In applications for this type of patent, attention should be paid to combining the business rules, algorithm features and technical features in the description.

Moreover, based on the stage of technological improvement, a patent application in the field of artificial intelligence may address one of two stages: a training stage (learning stage) and an application stage. The corresponding ways of drafting follow.

Table 3 Eligible subjects in the two stages

2. Examination standard and coping strategy regarding inventiveness

2.1 Latest examination standards regarding inventiveness

It is provided in the newly amended Guidelines for Examination that when inventiveness is examined for an invention patent application containing both technical features and algorithm features, or business rule and method features, the algorithm features or business rule and method features shall be considered together with the technical features as a whole when they functionally support the technical features and interact with them.

2.2 Application strategy for examination on inventiveness

Based on the above examination standards, when a patent application in the field of artificial intelligence is drafted, attention should be paid to combining the algorithm features and technical features in describing the technical solution. Moreover, in describing the technical problem and technical effect, emphasis should be placed on how the algorithm features and technical features combine to jointly solve the technical problem and produce the corresponding technical effect.

Furthermore, some artificial intelligence patent applications do not involve improvement of a basic algorithm; their improvement over existing technologies may lie mainly in the application of an algorithm, such as a neural network, to a specific field, while the neural network itself is changed little. For this type of application, inventiveness may be considered mainly from two aspects: first, whether the technical fields are similar; and second, the difficulty of applying the neural network to the technical field of the application and whether a technical effect different from that in the original technical field is produced.

III. Comparison of examination standards of China, Japan, Korea, US and Europe

1. Comparison of examination standards of an eligible object protected by a patent

A comparison of the examination standards for an eligible object of patent protection in China, Japan, Korea, the US and Europe is as follows.

Table 4 Examination of an eligible object protected by a patent in China, Japan, Korea, US and Europe

2. Comparison of examination standards of inventiveness

A comparison of the examination standards for inventiveness in China, Japan, Korea, the US and Europe is as follows.

Table 5 Examination of inventiveness in China, Japan, Korea, US and Europe

IV. Summary

Patent applications in the field of artificial intelligence are patent applications involving computer programs and must meet the universal requirements for such applications. Owing to the special nature of artificial intelligence technology, the China National Intellectual Property Administration (CNIPA) has formulated new, dedicated examination regulations for this field. Drafting patent applications, and responding to examination opinions, on the basis of the latest examination standards helps applicants obtain patent rights for the relevant technologies in China.

In addition, understanding the examination standards for patent applications in the field of artificial intelligence in the world's major patent countries and regions, namely China, Japan, Korea, the US and Europe, helps applicants formulate global application strategies and lay out their patents reasonably.

Go here to read the rest:
Patent application strategies in the field of artificial intelligence based on examination standards - Lexology

Will artificial intelligence have a conscience? – TechTalks

Does artificial intelligence require moral values? We spoke to Patricia Churchland, neurophilosopher and author of Conscience: The Origins of Moral Intuition

This article is part of the philosophy of artificial intelligence, a series of posts that explore the ethical, moral, and social implications of AI today and in the future

Can artificial intelligence learn the moral values of human societies? Can an AI system make decisions in situations where it must weigh and balance between damage and benefits to different people or groups of people? Can AI develop a sense of right and wrong? In short, will artificial intelligence have a conscience?

This question might sound irrelevant when considering today's AI systems, which are only capable of accomplishing very narrow tasks. But as science continues to break new ground, artificial intelligence is gradually finding its way into broader domains. We're already seeing AI algorithms applied to areas where the boundaries of good and bad decisions are not clearly defined, such as criminal justice and job application processing.

In the future, we expect AI to care for the elderly, teach our children, and perform many other tasks that require moral human judgement. And then, the question of conscience and conscientiousness in AI will become even more critical.

With these questions in mind, I went in search of a book (or books) that explained how humans develop conscience and gave an idea of whether what we know about the brain provides a roadmap for conscientious AI.

A friend suggested Conscience: The Origins of Moral Intuition by Dr. Patricia Churchland, neuroscientist, philosopher, and professor emerita at the University of California, San Diego. Dr. Churchland's book, and a conversation I had with her after reading it, taught me a lot about the extent and limits of brain science. Conscience shows us how far we've come in understanding the relation between the brain's physical structure and workings and the moral sense in humans. But it also shows us how much further we must go to truly understand how humans make moral decisions.

It is a very accessible read for anyone who is interested in exploring the biological background of human conscience and reflecting on the intersection of AI and conscience.

Here's a very quick rundown of what Conscience tells us about the development of moral intuition in the human brain. With the mind being the main blueprint for AI, better knowledge of conscience can tell us a lot about what it would take for AI to learn the moral norms of human societies.

"Conscience is an individual's judgment about what is normally right or wrong, typically, but not always, reflecting some standard of a group to which the individual feels attached," Churchland writes in her book.

But how did humans develop the ability to understand and adopt these rights and wrongs? To answer that question, Dr. Churchland takes us back in time, to when our first warm-blooded ancestors made their appearance.

Birds and mammals are endotherms: their bodies have mechanisms to preserve their heat. In contrast, in reptiles, fish, and insects, cold-blooded organisms, the body adapts to the temperature of the environment.

The great benefit of endothermy is the capability to gather food at night and to survive colder climates. The tradeoff: endothermic bodies need a lot more food to survive. This requirement led to a series of evolutionary steps in the brains of warm-blooded creatures that made them smarter. Most notable among them is the development of the cortex in the mammalian brain.

The cortex can integrate diverse signals and pull out abstract representation of events and things that are relevant to survival and reproduction. The cortex learns, integrates, revises, recalls, and keeps on learning.

The cortex allows mammals to be much more flexible to changes in weather and landscape, as opposed to insects and fish, who are very dependent on stability in their environmental conditions.

But again, learning capabilities come with a tradeoff: mammals are born helpless and vulnerable. Unlike snakes, turtles, and insects, which hit the ground running and are fully functional when they break their eggshells, mammals need time to learn and develop their survival skills.

And this is why they depend on each other for survival.

The brains of all living beings have a reward and punishment system that makes sure they do things that support their survival and the survival of their genes. The brains of mammals repurposed this function to adapt for sociality.

"In the evolution of the mammalian brain, feelings of pleasure and pain supporting self-survival were supplemented and repurposed to motivate affiliative behavior," Churchland writes. "Self-love extended into a related but new sphere: other-love."

The main beneficiaries of this change are the offspring. Evolution has triggered changes in the circuitry of mammalian brains to reward care for babies. Mothers, and in some species both parents, go to great lengths to protect and feed their offspring, often at a great disadvantage to themselves.

In Conscience, Churchland describes experiments showing how biochemical reactions in the brains of different mammals reward social behavior, including care for offspring.

"Mammalian sociality is qualitatively different from that seen in other social animals that lack a cortex, such as bees, termites, and fish," Churchland writes. "It is more flexible, less reflexive, and more sensitive to contingencies in the environment and thus sensitive to evidence. It is sensitive to long-term as well as short-term considerations. The social brain of mammals enables them to navigate the social world, knowing what others intend or expect."

The human brain has the largest and most complex cortex among mammals. The brain of Homo sapiens, our species, is three times as large as that of chimpanzees, with whom we shared a common ancestor 5-8 million years ago.

The larger brain naturally makes us much smarter, but it also has higher energy requirements. So how did we come to pay the calorie bill? "Learning to cook food over fire was quite likely the crucial behavioral change that allowed hominin brains to expand well beyond chimpanzee brains, and to expand rather quickly in evolutionary time," Churchland writes.

With the body's energy needs supplied, hominins eventually became able to do more complex things, including developing richer social behaviors and structures.

So the complex behavior we see in our species today, including the adherence to moral norms and rules, started off as a struggle for survival and the need to meet energy constraints.

"Energy constraints might not be stylish and philosophical, but they are as real as rain," Churchland writes in Conscience.

Our genetic evolution favored social behavior. Moral norms emerged as practical solutions to our needs. And we humans, like every other living being, are subject to the laws of evolution, which Churchland describes as "a blind process that, without any goal, fiddles around with the structure already in place." The structure of our brain is the result of countless experiments and adjustments.

"Between them, the circuitry supporting sociality and self-care, and the circuitry for internalizing social norms, create what we call conscience," Churchland writes. "In this sense your conscience is a brain construct, whereby your instincts for caring, for self and others, are channeled into specific behaviors through development, imitation, and learning."

This is a very sensitive and complicated topic, and despite all the advances in brain science, many of the mysteries of the human mind and behavior remain unsolved.

"The dominant role of energy requirements in the ancient origin of human morality does not mean that decency and honesty must be cheapened. Nor does it mean that they are not real. These virtues remain entirely admirable and worthy to us social humans, regardless of their humble origins. They are an essential part of what makes us the humans we are," Churchland writes.

In Conscience, Churchland discusses many other topics, including the role of reinforcement learning in the development of social behavior and the human cortex's far-reaching capacity to learn by experience, reflect on counterfactual situations, develop models of the world, draw analogies from similar patterns, and much more.

Basically, we use the same reward system that allowed our ancestors to survive, and draw on the complexity of our layered cortex to make very complicated decisions in social settings.

"Moral norms emerge in the context of social tension, and they are anchored by the biological substrate. Learning social practices relies on the brain's system of positive and negative reward, but also on the brain's capacity for problem solving," Churchland writes.

After reading Conscience, I had many questions about the role of conscience in AI. Would conscience be an inevitable byproduct of human-level AI? If energy and physical constraints pushed us to develop social norms and conscientious behavior, would there be a similar requirement for AI? Do physical experience and sensory input from the world play a crucial role in the development of intelligence?

Fortunately, I had the chance to discuss these topics with Dr. Churchland after reading Conscience.

As Dr. Churchland's book (and other research on biological neural networks) makes evident, physical experience and constraints play an important role in the development of intelligence, and by extension conscience, in humans and animals.

But today, when we speak of artificial intelligence, we mostly talk about software architectures such as artificial neural networks. Today's AI is mostly disembodied lines of code that run on computers and servers and process data obtained by other means. Will physical experience and constraints be a requirement for the development of truly intelligent AI that can also appreciate and adhere to the moral rules and norms of human society?

"It's hard to know how flexible behavior can be when the anatomy of the machine is very different from the anatomy of the brain," Dr. Churchland said in our conversation. "In the case of biological systems, the reward system, the system for reinforcement learning, is absolutely crucial. Feelings of positive and negative reward are essential for organisms to learn about the environment. That may not be true in the case of artificial neural networks. We just don't know."

She also pointed out that we still don't know how brains think. "In the event that we were to understand that, we might not need to replicate absolutely every feature of the biological brain in the artificial brain in order to get some of the same behavior," she added.

Churchland noted that while the AI community initially dismissed neural networks, they eventually turned out to be quite effective once their computational requirements were met. And while current neural networks have limited intelligence in comparison to the human brain, we might be in for surprises in the future.

"One of the things we do know at this stage is that mammals with a cortex, a reward system, and subcortical structures can learn things and generalize without a huge amount of data," she said. "At the moment, an artificial neural network might be very good at classifying faces but hopeless at classifying mammals. That could just be a numbers problem."

"If you're an engineer and you're trying to get some effect, try all kinds of things. Maybe you do have to have something like emotions, and maybe you can build that into your artificial neural network."

One of my takeaways from Conscience was that while humans generally align themselves with the social norms of their society, they also challenge those norms at times. And the unique physical structure of each human brain, the genes we inherit from our parents, and the experiences we acquire through our lives make for the subtle differences that allow us to come up with new norms and ideas, and sometimes to defy what was previously established as rule and law.

But one of the much-touted features of AI is its uniform reproducibility. When you create an AI algorithm, you can replicate it countless times and deploy it in as many devices and machines as you want, and they will all be identical down to the last parameter values of their neural networks. Now the question is: when all AIs are equal, will they remain static in their social behavior and lack the subtle differences that drive the dynamics of social and behavioral progress in human societies?

"Until we have a much richer understanding of how biological brains work, it's really hard to answer that question," Churchland said. "We know that in order to get a complicated result out of a neural network, the network doesn't have to have wet stuff; it doesn't have to have mitochondria and ribosomes and proteins and membranes. How much else does it not have to have? We don't know."

"Without data, you're just another person with an opinion, and I have no data that would tell me that you've got to mimic certain specific circuitry in the reinforcement learning system in order to have an intelligent network."

Engineers will try and see what works.

We have yet to learn much about human conscience, and even more about if and how it would apply to highly intelligent machines. "We do not know precisely what the brain does as it learns to balance in a headstand. But over time, we get the hang of it," Churchland writes in Conscience. "To an even greater degree, we do not know what the brain does as it learns to find balance in a socially complicated world."

But as we continue to observe and learn the secrets of the brain, hopefully we will be better equipped to create AI that serves the good of all humanity.

Link:
Will artificial intelligence have a conscience? - TechTalks

Tackling the artificial intelligence IP conundrum – TechHQ

Artificial intelligence has become a general-purpose technology. Not confined to futuristic applications such as self-driving vehicles, it powers the apps we use daily, from navigation with Google Maps to check deposits from our mobile banking app. It even manages the spam filters in our inbox.

These are all powerful, albeit functional, roles. What's perhaps more exciting is AI's growing potential in sourcing and producing new creations and ideas, from writing news articles to discovering new drugs, in some cases far quicker than teams of human scientists.

With every new iteration in software design, computing power, and the ability to leverage large data sets, AI's potential as an initiator of ideas and concepts grows, and this raises questions around its rights to intellectual property (IP).

Dr. Ryan Abbott, Professor of Law and Health Sciences at the University of Surrey, focuses his work on the meeting of law and technology, in particular the regulation of AI. While Abbott doesn't believe AI should be entitled to its own IP, he believes the time is right to discuss the ability of people to own IP generated autonomously by AI, or risk losing out on the technology's full potential.

"Right now, we have a system where AI and human activity are treated very differently," Abbott told TechHQ.

Drug discovery is a tangible example of how AI contributes to society. Technology is making the discovery of new drugs faster, cheaper, and more successful. It's been used this way for decades, helping to identify new drug targets, validate drug candidates, and design trials in ways that can potentially shorten drug development timeframes, bringing treatments to market faster. But the critical nature of patent protection in life sciences, and drug development in particular, is holding back these advances.

That's because, when it comes to AI-generated content and ideas, AI tends to be seen by experts and lawmakers as a tool, not the source of the creation or discovery. In the same way that a paintbrush doesn't get the credit for an oil painting and CAD software isn't credited for the designs of an architect, AI is perceived as a vehicle to an end product. The trouble is, current laws are not consistent and clear-cut. In the UK, where a work lacks a traditional human author, the producer of the work is deemed the author. In the US, the inventor is the person who conceives the idea. In either case, the human may not know what the AI system will produce or discover.

While patent rules in life sciences highlight the legal constraints on AI in research and development, these same challenges affect everything from the development of components for cars to spacecraft. The problem will become increasingly apparent as AI continues to improve, and people do not.

The consensus among legal experts is that it's not clear whether AI could carry out the understood rights and obligations of an IP owner. IP rights are restricted to natural persons and legal entities such as businesses. The European Union reportedly abandoned plans to introduce a third type of entity, an "electronic personality," in 2018 after pressure from 150 experts on AI, robotics, IP, and ethics.

Speaking to Raconteur previously, Julie Barrett-Major, consulting attorney at AA Thornton and member of the Chartered Institute of Patent Attorneys' International Liaison Committee, explained: "With patent ownership come certain obligations and responsibilities, or at least opportunities to exercise these. For example, to enforce the rights awarded, the owner can sue for infringement, or at least indicate a willingness to do so to maintain exclusivity."

"[...] the patent must be renewed at regular intervals, and there are other actions that need to be taken to ensure the rewards are not diluted, such as updating the government registers of patents with details of changes in ownership, informing of licensees and so forth."

Abbott argues that, ultimately, the limitations of current IP frameworks may force organizations to continue to use people where a machine might be more efficient.

Last year, Siemens was unable to file for multiple patents on inventions it believed to be patentable because it could not identify a human inventor; the engineers involved stated that the machine did the inventive work. Abbott himself is carrying out a legal test case, filing patents for two inventions made autonomously by AI. Both have been rejected by the US, UK, German, and European patent offices on the basis that they failed to disclose a human inventor. The rejections are under appeal, but the idea is to help raise dialogue on the issue.

"Most of the time today AI is just working as a tool and helping augment people, but machines are getting increasingly autonomous and sophisticated, and increasingly doing the sorts of things that used to make someone an inventor," Abbott said.

The current status quo means that the law can get in the way of AI development in certain areas, but not others. That means AI's benefits are not evenly spread across industries. While patents are important to drug development, for example, they are less important when it comes to making software. This imbalance could lead to the emergence of shady IP practices in certain sectors when it comes to using AI. The workaround, says Abbott, is people simply not disclosing AI's role in creating something valuable, whether that's an article, video, or song. "Someone can just list themselves as the author and no one is going to question that."

The issue of patents and intellectual property in the fields of academic research might not, for many of us, seem worth our consideration. But the broader legal concept Abbott looks to highlight, that we should question current standards of AI accountability and ownership, affects how AI is being used around us.

"Across all areas of the law, we are seeing the phenomenon of artificial intelligence stepping into the shoes of people and doing the sorts of things that people used to do," said Abbott.

Ultimately, for AI to be used to its full potential, there must be open discussion, public consultation, and debate on the current law surrounding AI. That's now happening. The issue has received recent attention from the World Intellectual Property Organization (WIPO), while the UK Intellectual Property Office has just announced a public request for comments on whether the IP system is fit for purpose in light of AI. The US has just completed a similar consultation.

"These efforts are a solid start to getting a diverse range of input from stakeholders," said Abbott. "In time, legislators should get involved."

Originally posted here:
Tackling the artificial intelligence IP conundrum - TechHQ

Why Artificial Intelligence Should Be on the Menu this Season – FSR magazine

The perfect blend of AI collaboration needs workers to focus on the tasks where they excel.

Faced with the business impacts of one of the largest health crises to date, restaurants of all sizes are at a pivotal moment where every decision, short term and long term, counts. For their businesses to survive, restaurant owners have had to act fast by rethinking operations and introducing pandemic-related initiatives.

Watching the world's largest chains all the way down to the local mom-and-pops become innovators in such extreme times has shown the industry's tenacity and survival instinct, even when the odds are stacked against them. None of these initiatives would be possible without technology as the driving factor.

Why AI is on the Menu This Season

A recent Dragontail Systems survey found that 70 percent of respondents would be more comfortable with delivery if they were able to monitor their order's preparation from start to finish. Consumers want to be at the forefront of their meal's creation: they don't want to cook it, but they do want to know it was prepared in a safe environment and delivered hot and fresh to their door.

Aside from AI's back-end role in preparation-time estimation and driver scheduling, the technology is now being used in cameras, for example, which share real-time images with consumers so that they can be sure their orders are handled with care. Amid the pandemic, this means making sure that gloves and masks are used during the preparation process and that workspaces are properly sanitized.

It is clear that AI is already radically altering how work gets done in and out of the kitchen. Fearmongers often tout AI's ability to automate processes and make better decisions faster than humans, but restaurants that deploy it mainly to displace employees will see only short-term productivity gains.

The perfect blend of AI collaboration needs workers to focus on the tasks where they excel, like customer service, so that the human element of the experience is never lost, only augmented.

AI on the Back-End

Ask any store or shift manager how they feel about workforce scheduling, and almost none will say it's their favorite part of the job. It's a Catch-22: even when it's done, it's never perfect. However, when AI is in charge, everything looks different.

Parameters such as roles in the restaurant, peak days and hours, special events such as a presidential debate, overtime, seniority, skills, days off and more can be easily tracked. Managers not only save time by handing off this daunting task but also allow the best decisions to be made for optimal restaurant efficiency.

Another aspect is order prioritization: by nature, most kitchens and restaurants prepare meals FIFO (first-in-first-out). With AI that enhances kitchen prioritization, cooks are instead informed when to cook an order, ensuring that there are actually drivers available to deliver it to the customer in a timely manner.

Delivery management then allows drivers to make more deliveries per hour just by following the system's decisions, which improve and optimize the dispatching functionality.
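
To make the idea concrete, here is a minimal toy sketch in Python of the cook-when-a-driver-is-free logic described above. All order and driver data, names, and numbers are hypothetical, invented for illustration; this is not any vendor's actual scheduling system:

```python
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    cook_minutes: int
    placed_at: int  # minutes since opening

def plan_fire_times(orders, driver_free_at, now=0):
    """Toy scheduler: delay firing each order so it finishes just as a driver
    becomes free, instead of cooking strictly first-in-first-out."""
    plan = []
    for order, free_at in zip(sorted(orders, key=lambda o: o.placed_at),
                              sorted(driver_free_at)):
        # Start as late as possible while still being ready when the driver
        # returns, but never before the order exists or before right now.
        start = max(order.placed_at, now, free_at - order.cook_minutes)
        plan.append((order.order_id, start, start + order.cook_minutes))
    return plan

orders = [Order("A17", cook_minutes=12, placed_at=0),
          Order("A18", cook_minutes=8, placed_at=2)]
print(plan_fire_times(orders, driver_free_at=[15, 20]))
# A17 fires at minute 3 and is ready at minute 15, when the first driver
# returns, so it goes out hot instead of waiting under a heat lamp.
```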

The Birth of the Pandemic Intelligent Kitchen/Store

With the pandemic, our awareness of sanitation and cleanliness rose dramatically, and the demand for solutions rose with it. AI cameras give customers exactly that: a real-time, never-before-seen view inside the kitchen to monitor how their order is being prepped, managed, and delivered.

AI also comes in handy for shifting from dine-in toward more take-out and drive-thru. When a customer makes an order online and picks it up in their car, an AI camera can detect the car's plate number, along with the customer's location (phone GPS), as the car enters the drive-thru area, so a runner from the restaurant can provide faster service.

In addition, the new concept of contactless menus, where the whole menu is online after a quick scan of a QR code, is another element gaining popularity during the pandemic. The benefits go beyond minimizing contact with physical menus: when a restaurant implements a smart online menu, it can collect data and offer personalized suggestions based on customers' favorite foods, food and drink combos, weather-based food recommendations, personalized upselling and cross-selling, and more, all powered by AI.

Restaurants Can No Longer Afford an Aversion to Technology

Challenges associated with technology, including implementation and a long roadmap, are fading away: most technology providers now offer plug-and-play products or services, and most work on a SaaS model. This means there's no commitment, they are easy to use, and they integrate seamlessly with the POS.

Restaurants don't have to make a big investment to reap the benefits technology brings: taking little steps that slowly improve restaurant operations and customer experience can still lead to increased growth and higher profit margins, especially during the pandemic when money is tight.

Technology enhances the experience, giving consumers a reason to keep ordering from their favorite places at a time when the stakes have never been so high and the competition has never been as fierce. The pandemic is far from over, but the changes we are seeing will be here for a lifetime. That's why it is so important to leverage technology and AI now in order to see improvements in customer satisfaction and restaurant efficiency in the long term.

See the article here:
Why Artificial Intelligence Should Be on the Menu this Season - FSR magazine

Machine learning with less than one example – TechTalks

Less-than-one-shot learning enables machine learning algorithms to classify N labels with less than N training examples.

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

If I told you to imagine something between a horse and a bird, say, a flying horse, would you need to see a concrete example? Such a creature does not exist, but nothing prevents us from using our imagination to create one: the Pegasus.

The human mind has all kinds of mechanisms to create new concepts by combining abstract and concrete knowledge it has of the real world. We can imagine existing things that we might never have seen (a horse with a long neck: a giraffe), as well as things that do not exist in real life (a winged serpent that breathes fire: a dragon). This cognitive flexibility allows us to learn new things with few, and sometimes no, new examples.

In contrast, machine learning and deep learning, the current leading fields of artificial intelligence, are known to require many examples to learn new tasks, even when they are related to things they already know.

Overcoming this challenge has led to a host of research work and innovation in machine learning. And although we are still far from creating artificial intelligence that can replicate the brains capacity for understanding, the progress in the field is remarkable.

For instance, transfer learning is a technique that enables developers to fine-tune an artificial neural network for a new task without the need for many training examples. Few-shot and one-shot learning enable a machine learning model trained on one task to perform a related task with only one or a few new examples. For instance, if you have an image classifier trained to detect volleyballs and soccer balls, you can use one-shot learning to add basketballs to the list of classes it can detect.

A new technique dubbed "less-than-one-shot learning" (or LO-shot learning), recently developed by AI scientists at the University of Waterloo, takes one-shot learning to the next level. The idea behind LO-shot learning is that to train a machine learning model to detect M classes, you need less than one sample per class. The technique, introduced in a paper published on the arXiv preprint server, is still in its early stages but shows promise and can be useful in various scenarios where there is not enough data or there are too many classes.

The LO-shot learning technique proposed by the researchers applies to the k-nearest neighbors (k-NN) machine learning algorithm. k-NN can be used for both classification (determining the category of an input) and regression (predicting the outcome of an input) tasks. But for the sake of this discussion, we'll stick to classification.

As the name implies, k-NN classifies input data by comparing it to its k nearest neighbors (k is an adjustable parameter). Say you want to create a k-NN machine learning model that classifies hand-written digits. First you provide it with a set of labeled images of digits. Then, when you provide the model with a new, unlabeled image, it will determine its class by looking at its nearest neighbors.

For instance, if you set k to 5, the machine learning model will find the five most similar digit photos for each new input. If, say, three of them belong to the class "7", it will classify the image as the digit seven.
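
To make the voting procedure concrete, here is a minimal from-scratch sketch in Python. The helper function and toy data are invented for illustration and are not taken from the paper:

```python
import numpy as np
from collections import Counter

def knn_classify(train_X, train_y, query, k=5):
    """Classify `query` by majority vote among its k nearest training points."""
    # Euclidean distance from the query to every labeled example.
    dists = np.linalg.norm(train_X - query, axis=1)
    # Indices of the k closest examples.
    nearest = np.argsort(dists)[:k]
    # Majority vote: if three of five neighbors are labeled 7, predict 7.
    return Counter(train_y[nearest]).most_common(1)[0][0]

# Toy usage with 2D feature vectors standing in for digit images.
train_X = np.array([[0.1, 0.2], [0.2, 0.1], [0.3, 0.3], [0.9, 0.8], [1.0, 1.0]])
train_y = np.array([7, 7, 7, 1, 1])
print(knn_classify(train_X, train_y, np.array([0.2, 0.2])))  # prints 7
```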

k-NN is an instance-based machine learning algorithm. As you provide it with more labeled examples of each class, its accuracy improves but its performance degrades, because each new sample adds new comparison operations.

In their LO-shot learning paper, the researchers showed that you can achieve accurate results with k-NN while providing fewer examples than there are classes. "We propose less than one-shot learning (LO-shot learning), a setting where a model must learn N new classes given only M < N examples, less than one example per class," the AI researchers write. "At first glance, this appears to be an impossible task, but we both theoretically and empirically demonstrate feasibility."

The classic k-NN algorithm provides "hard" labels, which means that for every input it returns exactly one class to which it belongs. "Soft" labels, on the other hand, provide the probability that an input belongs to each of the output classes (e.g., there's a 20% chance it's a 2, a 70% chance it's a 5, and a 10% chance it's a 3).
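
In code, the difference is that a soft classifier returns a distribution rather than a single class. The distance-weighted blending below is a generic illustration chosen for clarity, not necessarily the paper's exact formulation:

```python
import numpy as np

def soft_knn_predict(train_X, train_soft_labels, query, k=3):
    """Return a probability distribution over classes rather than a hard label.

    Each row of `train_soft_labels` is itself a distribution, so a training
    point can carry a label like [0.2, 0.7, 0.1] instead of a single class.
    """
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    # Closer neighbors get a larger say (inverse-distance weighting).
    weights = 1.0 / (dists[nearest] + 1e-9)
    blended = (weights[:, None] * train_soft_labels[nearest]).sum(axis=0)
    return blended / blended.sum()  # normalize back to a distribution

# Toy usage: three training points, three classes.
X = np.array([[0.0], [1.0], [2.0]])
Y = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.2, 0.7, 0.1]])
print(soft_knn_predict(X, Y, np.array([0.9])))  # roughly [0.11, 0.88, 0.01]
```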

In their work, the AI researchers at the University of Waterloo explored whether they could use soft labels to generalize the capabilities of the k-NN algorithm. The proposition of LO-shot learning is that soft-label prototypes should allow the machine learning model to classify N classes with fewer than N labeled instances.

The technique builds on previous work the researchers had done on soft labels and dataset distillation. "Dataset distillation is a process for producing small synthetic datasets that train models to the same accuracy as training them on the full training set," Ilia Sucholutsky, co-author of the paper, told TechTalks. "Before soft labels, dataset distillation was able to represent datasets like MNIST using as few as one example per class. I realized that adding soft labels meant I could actually represent MNIST using less than one example per class."

MNIST is a database of images of handwritten digits often used in training and testing machine learning models. Sucholutsky and his colleague Matthias Schonlau managed to achieve above 90 percent accuracy on MNIST with just five synthetic examples on the convolutional neural network LeNet.

"That result really surprised me, and it's what got me thinking more broadly about this LO-shot learning setting," Sucholutsky said.

Basically, LO-shot uses soft labels to create new classes by partitioning the space between existing classes.

In the example above, there are two instances to tune the machine learning model (shown with black dots). A classic k-NN algorithm would split the space between the two dots between the two classes. But the soft-label prototype k-NN (SLaPkNN) algorithm, as the LO-shot learning model is called, creates a new space between the two classes (the green area), which represents a new label (think horse with wings). Here we have achieved N classes with N-1 samples.
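
Here is a toy numerical reconstruction of that picture, with made-up prototype positions and soft labels, and inverse-distance weighting chosen for illustration (the paper's exact SLaPkNN formulation may differ in detail). Two soft-labeled prototypes yield three predicted classes:

```python
import numpy as np

# Two prototypes on a line; three classes A=0, B=1, C=2.
prototypes = np.array([[0.0], [1.0]])
soft_labels = np.array([
    [0.6, 0.0, 0.4],  # mostly class A, with some probability mass on C
    [0.0, 0.6, 0.4],  # mostly class B, with some probability mass on C
])

def slapknn_predict(query, k=2):
    """Sum the distance-weighted soft labels of the k nearest prototypes."""
    d = np.linalg.norm(prototypes - query, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)
    return (w[:, None] * soft_labels[idx]).sum(axis=0).argmax()

for x in [0.05, 0.30, 0.50, 0.70, 0.95]:
    print(x, "ABC"[slapknn_predict(np.array([x]))])
# Prints A at 0.05 and 0.30, C at 0.50, B at 0.70 and 0.95: class C wins in
# the middle region even though no prototype is labeled C, giving three
# classes from two labeled points.
```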

In the paper, the researchers show that LO-shot learning can be scaled up to detect 3N-2 classes using N labels and even beyond.

In their experiments, Sucholutsky and Schonlau found that with the right configurations for the soft labels, LO-shot machine learning can provide reliable results even when you have noisy data.

"I think LO-shot learning can be made to work from other sources of information as well, similar to how many zero-shot learning methods do, but soft labels are the most straightforward approach," Sucholutsky said, adding that there are already several methods that can find the right soft labels for LO-shot machine learning.

While the paper displays the power of LO-shot learning with the k-NN classifier, Sucholutsky says the technique applies to other machine learning algorithms as well. "The analysis in the paper focuses specifically on k-NN just because it's easier to analyze, but it should work for any classification model that can make use of soft labels," Sucholutsky said. The researchers will soon release a more comprehensive paper that shows the application of LO-shot learning to deep learning models.

"For instance-based algorithms like k-NN, the efficiency improvement of LO-shot learning is quite large, especially for datasets with a large number of classes," Sucholutsky said. "More broadly, LO-shot learning is useful in any kind of setting where a classification algorithm is applied to a dataset with a large number of classes, especially if there are few, or no, examples available for some classes. Basically, most settings where zero-shot learning or few-shot learning are useful, LO-shot learning can also be useful."

For instance, a computer vision system that must identify thousands of objects from images and video frames can benefit from this machine learning technique, especially if there are no examples available for some of the objects. Another application would be tasks that naturally have soft-label information, such as natural language processing systems that perform sentiment analysis (e.g., a sentence can be both sad and angry simultaneously).

In their paper, the researchers describe less-than-one-shot learning as "a viable new direction in machine learning research."

"We believe that creating a soft-label prototype generation algorithm that specifically optimizes prototypes for LO-shot learning is an important next step in exploring this area," they write.

"Soft labels have been explored in several settings before. What's new here is the extreme setting in which we explore them," Sucholutsky said. "I think it just wasn't a directly obvious idea that there is another regime hiding between one-shot and zero-shot learning."

Continue reading here:
Machine learning with less than one example - TechTalks

Court Rules Edward Snowden Must Pay More Than $5 Million From Memoir And Speeches – NPR

A federal court is ordering ex-National Security Agency contractor Edward Snowden, seen here in November, to pay more than $5 million in profits and royalties from his 2019 memoir and speeches. (Armando Franca/AP)

A federal court has ruled that former intelligence contractor Edward Snowden must pay more than $5 million in book royalties and speaking fees derived from his 2019 memoir, the Justice Department said Thursday.

The U.S. District Court for the Eastern District of Virginia entered its final judgment and injunction on Tuesday, siding with the U.S. in a lawsuit dating to the publication of Snowden's 2019 book, Permanent Record.

In its lawsuit, the Justice Department argued that by not submitting the book for a pre-publication review, Snowden had violated nondisclosure agreements he signed while working for the National Security Agency and CIA.

"Edward Snowden violated his legal obligations to the United States, and therefore, his unlawful financial gains must be relinquished to the government," Deputy U.S. Attorney General Jeffrey A. Rosen said in a Justice Department statement.

The statement went on to say that the ruling imposes a constructive trust for current and future earnings from the book and 56 speeches.

Snowden has resided in Russia since 2013 after being granted asylum there from federal charges stemming from his leak of classified information revealing U.S. surveillance programs. Snowden had worked for the CIA from 2006 to 2009 and as a contractor for the NSA at various times between 2005 and 2013.

The Justice Department's suit was filed the same day as the book's release in September 2019. In addition to Snowden, the suit also named his publisher, Macmillan, and requested the court freeze assets related to the memoir. The department also requested that royalties and profits from the book be put in a trust for the U.S. government.

Shortly after the lawsuit was filed, the same court responsible for Tuesday's finding ruled Snowden breached his obligations to the intelligence agencies, though at the time, the court held off on judgment over the scope of remedies due to the government.

This week's decision is separate from the criminal charges Snowden faces, including espionage and theft of government property.

Read the original post:
Court Rules Edward Snowden Must Pay More Than $5 Million From Memoir And Speeches - NPR

Venezuela To Start Using Cryptocurrency in Global Trade in Efforts To Fend off US Sanctions | Emerging Markets – Bitcoin News

Venezuela president Nicolas Maduro says the country is to start using cryptocurrency in both domestic and global trade, as part of efforts to neutralize crippling U.S. economic sanctions.

Speaking in the country's parliament on Sept. 29, Maduro revealed that the move will give "new strength to the use of petro and other cryptocurrencies, national and global, in domestic and foreign trade."

The country has already been trying to use its national crypto, the petro, for this purpose but without much success.

Maduro was presenting an anti-sanctions law aimed at spurring economic and social development, both of which have been paralyzed by U.S. sanctions. The blockade has also throttled Venezuela's trade relations with much of the world, where the U.S. dollar still dominates.

Now, the oil-rich South American country has set its sights on virtual currency. Venezuela, the world's sixth-largest oil producer, is hoping to leverage cryptocurrencies to compensate for the squeeze in petrodollars arising from the economic sanctions. Bloomberg quoted Maduro as saying:

"The finance minister and Venezuela's central bank have new instruments which we will activate very soon so that everyone can do banking transactions, as well as national and international payments through the central bank's accounts. Venezuela is working within the cryptocurrency world."

Excoriated by the West, the leftist Venezuelan leader thundered: "Donald Trump and his sanctions are blocking Venezuela from carrying out transactions in any of the world's banks. There's other formulas to pay, and it's what we're using, because our payment system works perfectly in China and Russia."

According to the Bloomberg report, the central bank of Venezuela is formally testing whether it can hold crypto in its reserves. The immediate targets include bitcoin (BTC) and ethereum (ETH).

Both assets have been requested by state-run Petroleos de Venezuela SA. The oil company wants to send BTC and ETH to the central bank and then have it pay the firm's suppliers with the coins, says the report.

Venezuela's deepening economic crisis has led to massive adoption of cryptocurrency, with more than $8 million worth of bitcoin traded peer-to-peer each week, Coin Dance data shows. The government recently signed a new tax agreement that enabled it to start collecting taxes and fees in the petro.

Read more from the original source:
Venezuela To Start Using Cryptocurrency in Global Trade in Efforts To Fend off US Sanctions | Emerging Markets - Bitcoin News

Blockchain Regulation Is Making Headlines, And That Is Great For Cryptocurrency Development – Forbes

In order to achieve widespread usage as an alternative to fiat options, blockchain and cryptoassets need to be classified and treated as currencies; the recent update from the Office of the Comptroller of the Currency (OCC) is a great move in that direction.

Blockchain and cryptocurrencies have been part of the financial and economic conversation ever since bitcoin burst into the mainstream in 2017. That said, in order to become an integrated part of the financial system, and ultimately to serve as the basis for an alternative financial system, the blockchain and crypto space needs to work with some of the very regulators it was designed to disrupt.

From the very beginning, one of the primary obstacles to wider crypto utilization as a medium of exchange and business transactions has been the price volatility that, correctly or not, is associated with this asset class. Stablecoins were designed and developed to address this issue, but even with price volatility addressed, one fundamental issue remained: the lack of regulatory guidance.

This all changed with the update from the OCC.

Breaking down this information and updated guidance, there are a few key considerations and facts that should be factored into every conversation.

Rules matter. It is almost impossible to overstate just how important and relevant this updated guidance is for the broader blockchain and cryptoasset space. The idea of blockchain and cryptocurrency was to serve as an alternative to existing fiat currencies, but without consistent and understandable guidelines this will remain an idea rather than a reality. Putting rules and structure in place will help encourage wider adoption of cryptocurrencies and make doing so simpler for individuals and entrepreneurs.

Establishing frameworks and consistently enforceable rules might not have been the original motivation for blockchain or crypto entrepreneurs, but having these rules in place is essential for the continued development and maturation of the space.

Not all crypto is the same. This might sound redundant, but it is important to remember that not every cryptocurrency or blockchain is the same. Specifically, and especially pertinent to this conversation, the recent OCC guidance and update applies only to stablecoins that are backed on a 1:1 basis by existing fiat currencies.

What are stablecoins? Stablecoins are cryptocurrencies that are pegged, tethered, or otherwise supported by some sort of external asset. In the context of actually being used as a medium of exchange, stablecoins seem to represent the most viable path forward. That said, in order for these cryptocurrencies to operate as advertised, there needs to be equivalency with current fiat options; the OCC guidance points in that direction.

Collaboration is key. Much has been written about how blockchain and cryptocurrencies will disintermediate the existing financial system, but that is only a partial perspective on the situation. This recent update from the OCC seems to indicate that, on an increasing basis, cryptocurrencies are becoming part of the incumbent financial system. Working with incumbents might not have been the original idea or goal of cryptocurrency organizations, but it does seem that doing so will be critical to the success of the space.

The notification and update from the OCC might have flown under the collective radar of many accounting and financial professionals, simply because so much news hits the business landscape on a nearly continuous basis. The OCC is not the only regulator that matters to the blockchain and crypto sector, but being among the first to clarify existing guidance will hopefully encourage other agencies to do the same.

Regulatory guidance and clarifications might not be as scintillating as the latest social media posting or political controversy, but updates such as this are arguably even more important for the blockchain and crypto space. This guidance might have been issued by the OCC, and it might seem to pertain only to some stablecoins, but it is an important first step toward what will hopefully be a much improved regulatory landscape.

Rules and guidelines play a large role in the success or failure of any idea, and crypto is no exception; it looks like the rule makers are starting to realize the true potential of this sector.

Go here to see the original:
Blockchain Regulation Is Making Headlines, And That Is Great For Cryptocurrency Development - Forbes