Artificial intelligence and its potential to change healthcare – Chief Healthcare Executive

A panel of physicians and leaders in the field expressed enthusiasm for AI's possible benefits for patients. They also said solutions must be designed with health equity in mind.

Many have hailed the potential of artificial intelligence to transform healthcare.

Michael Howell, Google's chief clinical officer and deputy chief health officer, says, "It's hard to imagine a technology that is more hyped than AI."

Even so, Stephen Parodi, executive vice president of The Permanente Federation, says, "Widespread AI use in healthcare is still in its infancy."

Still, many are projecting significant growth in the prevalence of AI in medicine in the near future.

During a one-hour forum hosted by The Permanente Federation on Monday, healthcare leaders, all physicians, assessed the possibilities of AI, the keys to success, and expectations for its future uses.

Even in a forum where leaders talked about potential challenges, including designing technology with patients in mind and the urgent need to focus on equity, the participants spoke with enthusiasm, even excitement, about the growing role of artificial intelligence in medicine.

"It's appropriate to bring some healthy skepticism and ask questions about the potential of artificial intelligence in healthcare," Howell said.

However, Howell also said he expected that "AI will do things we didn't think were possible."

Earlier interventions

Edward Lee, executive vice president and chief information officer of The Permanente Federation, talked about how AI is being used across the Kaiser Permanente system.

At Kaiser Permanente, researchers have used AI to examine retinal images of patients with diabetes, to help determine whether those patients are more likely to lose their vision, Lee said.

In addition, Kaiser Permanente is using AI-powered models to analyze which patients in hospitals may be at higher risk of deteriorating or could require intensive care. "This gives us a chance to intervene before patients get sicker," Lee said.
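
To make that concrete, here is a minimal sketch of the general technique behind such deterioration alerts: a logistic model scores routinely collected vitals and flags the care team above a threshold. The features, weights, and threshold below are illustrative assumptions, not Kaiser Permanente's actual model.

```python
# Illustrative sketch of an inpatient deterioration risk score: a logistic
# model over routine vitals. Features, weights, and the alert threshold are
# hypothetical, not Kaiser Permanente's actual model.
import math

WEIGHTS = {"heart_rate": 0.03, "resp_rate": 0.12,
           "systolic_bp": -0.02, "spo2": -0.15}
BIAS = 10.0
ALERT_THRESHOLD = 0.7  # flag the care team above this probability

def deterioration_risk(vitals: dict) -> float:
    """Return a probability-like score that the patient deteriorates."""
    z = BIAS + sum(WEIGHTS[k] * vitals[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

patient = {"heart_rate": 118, "resp_rate": 26, "systolic_bp": 92, "spo2": 90}
risk = deterioration_risk(patient)
if risk >= ALERT_THRESHOLD:
    print(f"Alert care team: risk = {risk:.2f}")  # ~0.79 for this patient
```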

Hundreds of patients have likely been saved, he said, and that's a conservative estimate.

The system is using AI to analyze emails to make sure they are getting to the right member of the care team. "This helps our patients get timely responses to their health concerns," Lee said.

John Halamka, president of Mayo Clinic Platform, said he expected that within the next six quarters, artificial intelligence is going to be brought into the workflow of electronic health records.

The Mayo Clinic has been increasingly using AI in research. Mayo Clinic researchers have been studying the use of artificial intelligence to identify pregnant patients who may be at risk for complications, as well as patients who could have greater likelihood of suffering a stroke.

When asked when AI would gain greater prevalence, Halamka cited the author William Gibson, who once said, "The future is already here, it's just not evenly distributed."

"I believe the perfect storm for innovation requires technology that's good enough, policy that's enabling and cultural change that creates a sense of urgency," Halamka said.

Patients have greater expectations of healthcare, and that will help expand the use of AI in medicine, panelists said. "The cultural demands of our patients will drive us forward," Halamka added.

Google Health is using artificial intelligence to bring better technology to care teams, and also in reaching out to consumers when they're searching for health information online, steering them to relevant and accurate results and away from misinformation, Howell said. The tech giant is also using AI in a community context, he said, such as making better projections of flood threats.

Vivian Lee, president of health platforms at Verily, a sister company of Google, talked about the use of AI algorithms to identify patients at higher risk of hypertension, substance use, or a longer hospital stay. She said the goal is getting that information to the clinicians to make that data more actionable.

Artificial intelligence also presents opportunities to engage patients in different ways, and that goes beyond just personalized medicine, Vivian Lee said. With AI, she said, the question becomes, "How do we move to precision health and precision engagement?"

"I really believe the advances we are making now will enable us to do personalized care at scale," Vivian Lee said.

During the forum, participants, including the audience, weighed in on where AI would have the most potential to improve healthcare. Most said it would be the use of artificial intelligence to predict potential health risks.

"I think the thing about risk prediction is it can affect not only individual patients, it can affect entire populations, entire communities," Edward Lee said. "We can positively contribute to the health of many, many patients."

Focusing on health equity

Even as the panelists touted AI's promise, they also said health systems aiming to use artificial intelligence must focus on closing healthcare disparities.

"There is deep evidence that care that isn't equitable just isn't high quality," Howell said.

"Everyone should have the opportunity to receive the full benefits of AI. We should work systematically to make sure that happens," he said.

Researchers are using artificial intelligence to predict risks in patients, but as Howell noted, the problem is some data is missing when it comes to patients from underrepresented communities. In a sense, disparities can be baked into the data being analyzed.

Vivian Lee shared similar concerns. "We need to be attentive to bias and health equity," she said.

Fatima Paruk, chief health officer and senior vice president of Salesforce, said AI could be either an enabler or a barrier. But she said, "It leaves me thinking we can deliver more equitable care."

"The technology of AI in and of itself is only so useful," Edward Lee said.

"Combining with expertise is when you can really make a difference in the lives of the patients," he said.

The panel's members said they were hopeful in part because much of the research in AI, and many of the new artificial intelligence tools, are being developed by those in the healthcare industry.

Paruk touted AI's potential, combined with remote patient monitoring, in helping older patients potentially live at home longer. Health systems could eventually use data to get a sense of when those older patients may need more assistance.

That would also be a boon to many in the sandwich generation, who are caring for both their children and aging parents. "There's a huge amount of potential there," she said.

While panel members noted that similar predictions were once made about electronic medical records reducing demands on physicians, Paruk and others said AI could reduce burnout among clinicians.

But ultimately, the panel members expressed the most enthusiasm for how artificial intelligence could transform patient care.

"I'm incredibly hopeful for the future," Paruk said.


Chipotle Is Testing More Artificial Intelligence Solutions To Improve Operations – Forbes

Chipotle's Chippy, an autonomous kitchen assistant that integrates culinary traditions with artificial intelligence to make tortilla chips, is moving into the next phase of testing and will be integrated in a restaurant next month.

During Chipotle's Q2 earnings call in late July, executives made it clear the system needed to refine some of its operational processes as dine-in business returns while off-premise business remains elevated.

In doing so, Chief Restaurant Officer Scott Boatwright touted the company's Project Square One, a game plan focused on employee training to execute orders more efficiently. Today, the company announced it's also getting more technology involved.

Chipotle is testing two technologies specifically to streamline operations and reduce friction: a kitchen management system and an advanced location-based platform.

In eight Southern California restaurants, Chipotle is testing PreciTaste's kitchen management system, which provides demand-based cooking and ingredient preparation forecasts by leveraging artificial intelligence and machine learning. According to Chipotle, the system monitors ingredient levels in real time and notifies employees how much to prep and cook, and when to start cooking. The system was created not only to optimize throughput but also to minimize food waste.
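
The article does not detail PreciTaste's internals, but the behavior it describes, forecasting near-term demand and telling the crew how much to prep, can be sketched in a few lines. All names and quantities below are hypothetical.

```python
# Illustrative sketch of demand-based prep notifications: forecast near-term
# demand from recent sales and compare it with current ingredient levels.
# Names and numbers are hypothetical, not PreciTaste's actual system.
RECENT_SALES = {            # portions sold in each of the last 4 half-hours
    "chicken": [22, 25, 28, 31],
    "guacamole": [12, 11, 14, 13],
}
ON_HAND = {"chicken": 18, "guacamole": 30}   # portions currently prepped

def prep_recommendations(horizon_periods: int = 2) -> dict:
    """Recommend how many portions of each ingredient to prep now."""
    recs = {}
    for item, sales in RECENT_SALES.items():
        per_period = sum(sales) / len(sales)        # simple moving average
        needed = per_period * horizon_periods       # demand over the horizon
        shortfall = max(0, round(needed - ON_HAND[item]))
        if shortfall:
            recs[item] = shortfall                  # notify crew to prep now
    return recs

print(prep_recommendations())   # e.g. {'chicken': 35}
```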

"The new kitchen management system has alleviated manual tasks for our crew and given restaurant managers the tools they need to make informed in-the-moment decisions, ultimately enabling them to focus on an exceptional culinary and an outstanding guest experience," Chief Technology Officer Curt Garner said in a statement.

This isn't Chipotle's first foray into AI. Earlier this year, Chipotle announced a test with Miso Robotics to bring its artificial intelligence-driven Chippy into its Cultivate [innovation] Center to replicate the chain's signature tortilla chips. That test is now expanding, with Chippy making its first restaurant debut next month in a Fountain Valley, California, location.

From there, the company will gauge employee and guest feedback before developing a broader rollout plan.

During a recent interview, Garner said the company is looking at everything from the internet of things to machine learning to run its restaurants more efficiently and enable crew members to focus on other tasks.

"When you see us leaning into this space, it will be a question of are there better tools to help our crews versus removing a task? Those are the kind of things we're looking at," Garner said.

The company is also currently testing Radius Networks' Flybuy, a contextual restaurant program designed to identify Chipotle app users' intent upon arrival, at 73 Cleveland-area restaurants. The location-based technology utilizes real-time data to let customers know their orders are ready, to remind them to scan the Chipotle Rewards QR code at checkout and more. It even alerts customers if they're in the wrong pick-up location.
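
Flybuy's implementation is not public here, but arrival detection of this kind typically reduces to a geofence check on the app's location updates. A minimal sketch, with hypothetical coordinates, radius, and notification messages:

```python
# Illustrative sketch of location-based arrival detection: check whether an
# app user's phone is inside the pickup geofence. Coordinates, radius, and
# messages are hypothetical, not Flybuy's actual logic.
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance between two points, in metres."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

PICKUP = (41.4993, -81.6944)   # hypothetical Cleveland-area pickup point
GEOFENCE_M = 75                # arrival radius in metres

def on_location_update(user_lat, user_lon, order_ready: bool):
    d = distance_m(user_lat, user_lon, *PICKUP)
    if d <= GEOFENCE_M and order_ready:
        return "Your order is ready. Scan your Rewards QR code at checkout."
    if d <= GEOFENCE_M:
        return "You've arrived. We're finishing up your order."
    return None  # still far away; no notification yet

print(on_location_update(41.4994, -81.6945, order_ready=True))
```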

The program has yielded positive results so far, according to Chipotle, including improved in-store rewards engagement and delivery efficiencies.

"Empowering our restaurants with advanced technologies is critical for operational excellence and better positions our teams for our ambitious growth plans," Boatwright said in a statement.

Notably, Chipotle isn't the only chain exploring AI technology to improve operations. White Castle has been testing Miso Robotics' Flippy in the back of the house for about two years, for instance, while Jamba has partnered with autonomous food platform Blendid to automate smoothies. Several restaurant chains, including Applebee's, IHOP and Tropical Smoothie Cafe, leverage Flybuy.

In fact, a new survey from Capterra found that 76% of restaurants are currently using automation in three or more areas of operation, while 96% are using some type of automation tool in the back of the house. As such, the cooking robotics space is expected to grow by over 16% a year through 2028, reaching an estimated worth of $322 million.

That said, Chipotle's scale, company-owned model and zero-debt balance sheet add a bit more intrigue to this trend. Chipotle has some latitude to pilot new solutions without franchisee investment or pushback, and any proven return on investment will likely provide a strong case for adoption across an industry still very much struggling with labor shortages.

Further, all of these technologies enhance throughput, a major focus for Chipotle to drive more sales. During the company's Q2 call, for example, CEO Brian Niccol noted that order fulfillment was in the low 30s on a per-15-minute basis nearly 10 years ago, a pace that adds a full percentage point to comp sales on the day.

"On a 15-minute basis, that's what we're going after," he said during the earnings call.

Chipotle's announcements today come on the heels of the company's Cultivate Next venture fund launch, created to identify strategically aligned companies for early-stage investments. As part of this $50 million fund, Chipotle has already invested in Hyphen, a foodservice platform that automates kitchen operations, and Meati Foods, a company that provides plant-based proteins.

Chipotle is also leveraging a new scheduling tool, has invested in autonomous delivery company Nuro, and is testing radio-frequency identification to trace and track ingredients in its restaurants.

In a recent statement, Garner said the company is exploring investments in innovations that will "enhance employee and guest experience and quite possibly revolutionize the restaurant industry."

"Investing in forward-thinking ventures that are looking to drive meaningful change at scale will help accelerate Chipotle's aggressive growth plans," he said.

Chipotle currently has about 3,000 locations, with plans to grow to about 7,000 in the coming years.


Are We Heading Towards a Post-Responsibility Era? Artificial Intelligence and the Future of Morality | Practical Ethics – Practical Ethics

By Maximilian Kiener. First published on the Public Ethics Blog

AI, Today and Tomorrow

77% of our electronic devices already use artificial intelligence (AI). By 2025, the global market for AI is estimated to grow to 60 billion US dollars. By 2030, AI may even boost global GDP by 15.7 trillion US dollars. And, at some point thereafter, AI may come to be the last human invention, provided it optimises itself and takes over research and innovation, leading to what some have termed an "intelligence explosion". In the grand scheme of things, as Google CEO Sundar Pichai thinks, AI will then have a greater impact on humanity than electricity and fire did.

Some of these latter statements will remain controversial. Yet, it is also clear that AI increasingly outperforms humans in many areas that no machine has ever entered before, including driving cars, diagnosing illnesses, selecting job applicants, and more. Moreover, AI also promises great advantages, such as making transportation safer, optimising health care, and assisting scientific breakthroughs, to mention only a few.

There is, however, a lingering concern. Even the best AI is not perfect, and when things go wrong, e.g. when an autonomous car hits a pedestrian, when Amazon's Alexa manipulates a child, or when an algorithm discriminates against certain ethnic groups, we may face a "responsibility gap": a situation in which no one is responsible for the harm caused by AI. Responsibility gaps may arise because current AI systems themselves cannot be morally responsible for what they do, and the humans involved may no longer satisfy key conditions of moral responsibility, such as the following three.

AI & The Conditions of Responsibility

First, many scholars argue that a key condition of responsibility is control: one can be responsible for something only if one had meaningful control over it. Yet, AI systems afford very little control to humans. Once in use, AI systems can operate at a speed and level of complexity that make it impossible for humans to intervene. Admittedly, people may be able to decide whether to apply AI in the first place, but once this decision has been made, and justifiably so, there is not much control left. The mere decision to risk a bad outcome, if it is itself justified and not negligent or reckless, may not be sufficient for genuine moral responsibility. Another reason for the lack of control is the increasing autonomy of AI. Autonomy here means the ability of AI systems not only to execute tasks independently of immediate human control, but also (via machine learning) to shape the principles and algorithms that govern the operation of these systems; such autonomy significantly disconnects AI from human control and oversight. Lastly, there is also the so-called "problem of many hands": a vast number of people are involved in the development and use of AI, and each of them has, at most, only a very marginal degree of control. Hence, insofar as control is required for responsibility, responsibility for the outcome of AI may be lacking.

Second, scholars have argued that responsibility has an epistemic condition: one can be responsible for something only if one could have reasonably foreseen or known what would happen as a result of one's conduct. But again, AI makes it very difficult to meet this condition. The best AI systems tend to be those that are extremely opaque. We may understand what goes into an AI system as its input data, and also what comes out as either a recommendation or action, but often we cannot understand what happens in between. For instance, deep neural networks can base a single decision on over 20 million parameters, e.g. the image recognition model Inception v3 developed by Google, which makes it impossible for humans to examine the decision-making process. In addition, AI systems' ways of processing information and making decisions are becoming increasingly different from human reasoning, so that even scrutinising all the steps of a system's internal working processes wouldn't necessarily lead to an explanation that seems sensible to a human mind. Finally, AI systems are learning systems and constantly change their algorithms in response to their environment, so that their code is in constant flux, leading to some sort of technological panta rhei. For these reasons, we often cannot understand what an AI will do, why it will do it, and what may happen as a further consequence. And insofar as the epistemic condition of responsibility requires the foreseeability of harm to some degree of specificity, rather than only in very general terms (e.g. that autonomous cars sometimes hit people), meeting the epistemic condition presents a steep challenge too.
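
The Inception v3 figure is straightforward to check. A minimal sketch using the Keras distribution of the model (this assumes the tensorflow package is installed; pretrained weights are not needed just to count parameters):

```python
# A minimal check of the parameter count cited above, using the Keras
# distribution of Inception v3 (requires the tensorflow package).
from tensorflow.keras.applications import InceptionV3

# Build the architecture without downloading pretrained weights.
model = InceptionV3(weights=None)

# Prints roughly 23.9 million parameters, consistent with the
# "over 20 million" figure in the text.
print(f"Inception v3 parameters: {model.count_params():,}")
```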

Third, some theorists argue that one is responsible for something when it reflects one's quality of will, which could be either one's character, one's judgment, or one's regard for others. On this view, control and foresight may not be strictly necessary, but even then, the use of AI poses problems. When an autonomous car hits a pedestrian, for instance, it may well be that the accident does not reflect the will of any human involved. We can imagine a case in which there is no negligence but just bad luck, so that the accident would not reflect poorly on anyone's character, judgment, or regard for others.

Thus, various approaches to responsibility suggest that no one may be morally responsible for the harm caused by AI. But even if this is correct, a further important question remains: why should we care about a responsibility gap in the first place? What would be so bad about a future without, or with significantly diminished, human responsibility?

AI & The Point of Responsibility

To address this question, we need to distinguish between at least two central ideas about responsibility. The first explains responsibility in terms of liability to praise or blame.[1] On some of these views, being responsible for some harm means deserving blame for it. Thus, a responsibility gap would mean that no one could be blamed for the harm caused by AI. But would this be so bad? Of course, people may have the desire to blame and punish someone in the aftermath of harm. In addition, scholars argue that blaming practices can be valuable for us, e.g. by helping us to defend and maintain shared values.[2] Yet, the question remains as to whether, in the various contexts of AI, people's desire to blame really ought to be satisfied, rather than overcome, and also what value blaming practices ultimately hold in these different contexts. Depending on our answers to these questions, we may conclude that a gap of responsibility in terms of blameworthiness may not be so disadvantageous in some areas, but blame may still be of value in others.

The second idea identifies responsibility with answerability, where an answerable person is one who can rightly be asked to provide an explanation of their conduct.[3] Being answerable for something does not imply any liability to blame or praise. It is at most an obligation to explain one's conduct to (certain) others. Blame would be determined by the quality of one's answer, e.g. by whether one has a justification or excuse for causing harm. This approach to responsibility features the idea of an actual or hypothetical conversation, based on mutual respect and equality, where the exchanged answers are something that we owe each other as fellow moral agents, citizens, or friends. Here, the question of a responsibility gap arises in a different way and concerns the loss of a moral conversation. Depending on our view on this matter, we may conclude that losing responsibility as answerability could indeed be a serious concern for our moral and social relations, at least in those contexts where moral conversations are important. But in any case, the value and role of answerability may be quite different from the value and role of blame, and thus addressing the challenge of responsibility gaps requires a nuanced approach too.

[1] Cf. Pereboom, D. Free Will, Agency, and Meaning in Life. Oxford University Press, 2014.

[2] Cf. Franklin, C. "Valuing Blame." In Coates, J., Tognazzini, N. (Eds.), Blame: Its Nature and Norms, 207-223. Oxford University Press, 2012.

[3] Smith, A. (2015). Responsibility as Answerability. Inquiry 58(2): 99-126.


Faroe Islands’ National Gallery Becomes the First to Launch Artificial Intelligence Exhibit – Skift Travel News

The National Gallery of the Faroe Islands launched an exhibition this week containing 40 images of the archipelago developed by artificial intelligence program Midjourney, becoming the first national gallery to feature a fully produced show created by artificial intelligence.

The exhibit, which runs from September 29 through October 30, reveals how prominent artists such as Vincent Van Gogh, Claude Monet and Pablo Picasso might have depicted the landscape of the remote archipelago in the North Atlantic. Visitors to the national gallery will also have the opportunity to create their own images of the Faroe Islands using Midjourney.

"When I first heard of (artificial intelligence) and Midjourney and how it is possible to create new pictures just like individual artists might have done, it immediately intrigued me," said Karina Lykke Grand, director of the National Gallery of the Faroe Islands.

"It was fascinating to see how, by giving prompts, the system can get an idea of how an artist such as Van Gogh or Picasso might have painted the Faroe Islands."


A.I. is solving traffic problems to get you where you're going safely – Fortune

"I haven't met anyone that really loves traffic," says Karina Ricks of the Federal Transit Administration.

Except, possibly, professionals like her who are tasked with reducing it.

Ricks has made her career out of caring about traffic patterns. Before her current role as the associate administrator for research, innovation, and demonstration at the FTA, she was the director of mobility and infrastructure for the City of Pittsburgh in Pennsylvania. She has spent countless hours thinking about cars, public transit, roads, and pedestrians, and how to make it all flow more smoothly.

"When you're in the peak times for travel, when the system is so full, it only takes a small disruption to cause really big problems," Ricks says. "The work is to quickly flag those disruptions and rapidly retool the system to operate around them."

What Ricks aims to optimize affects anyone moving from point A to point B, especially in cities. She explained that congestion is the number one problem when it comes to traffic, and a common occurrence in metropolitan areas. Add to that the number of variables at any given time, including human operators of vehicles and geography, and it results in a mind-boggling puzzle to even attempt to solve.

"If there were an easy way to reduce traffic, it would have been actioned in the past 50 years," she said. Instead, she, government organizations, and startups in the space, such as Lyt, are all looking at the immense amount of traffic data available, from traffic sensors to ride share data and even bike and scooter data from smartphones, and using it to inform decisions on how to get people to work, home, and the grocery store safely and quickly.

That solution involves artificial intelligence and machine learning.

"There are tasks that humans just aren't good at that machinery is, and that's recognizing patterns," explains Tim Menard, founder and chief executive officer of Lyt, a software technology platform providing mobility solutions for cities. "A.I. is a great technology to use, because you're looking at all parts of the system. You can start feeding it different information, and you can put that into a system that can make operational changes."

Menard started Lyt after studying intelligent transportation systems for more than 13 years. His company uses vehicle data to solve traffic problems, especially when it comes to the efficiency of public transit options. For Menard, the end goal is to make more cities equitable by making public transit reliable, predictable, and faster.

Both Ricks and Menard believe that the way to reduce traffic is to get more people onto public transportation, such as buses, subways, and light rail systems. Public transportation is the safest surface transportation mode, with fewer injuries and fatalities. Its also a speedier way to move a larger number of people.

Ricks explained that most congestion is caused by low-occupancy vehicles, i.e. single-occupant cars. Those drivers are human; some drive faster, some slower; some change lanes often, others stop abruptly when a traffic light flashes yellow before red. Because humans behave so differently, there is a level of unpredictability in the traffic system. Much of her work aims to make mass transit more enticing for commuters.

"You're reducing the rate of crashes that might occur when you're reducing the number of vehicles that are there," Ricks added.

With that in mind, Menard started looking at the Internet of Things for his cloud platform, pulling data from smartphones, automotive sensors, public transportation logs, and delivery vehicles to understand traffic patterns at various times of the day as well as during special one-off events, such as a sports game at a local stadium. He said that the first hurdle was to operate from a place of known information rather than guessing; in the past, he explained, it took a human looking at a video screen for hours and hours to even begin to make an estimate on next steps.

He launched in San Jose, Calif., where for the past three years he has collaborated with the city to optimize bus routes, cutting travel times by 20% and thereby reducing fuel consumption by 14% and emissions at intersections by 12%. Using a predictive estimated time of arrival at each traffic light, his platform reduced the travel time between bus stops by optimizing bus lanes and traffic lights to ensure buses could move as effectively as possible without disrupting other traffic. He now works in other northern California cities, including additional Bay Area towns and Sacramento, as well as in the Pacific Northwest: Seattle and Portland, Ore.
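
Lyt has not published its algorithm, but the predictive approach described, using an estimated time of arrival to adjust traffic lights, can be illustrated with a simple rule: if a bus is predicted to arrive just after the green phase would end, extend the green within a capped window. The names and thresholds below are illustrative assumptions, not Lyt's actual logic.

```python
# Illustrative sketch of predictive transit signal priority: if a bus is
# predicted to reach the intersection shortly after the light would turn
# red, extend the green phase so it can pass without stopping.
# All names and thresholds are hypothetical, not Lyt's actual system.
from dataclasses import dataclass

@dataclass
class SignalState:
    seconds_until_red: float      # time left in the current green phase
    max_green_extension: float    # cap so cross traffic is not starved

def green_extension(predicted_eta: float, signal: SignalState) -> float:
    """Return how many seconds to extend the green phase (0 = no change)."""
    # Bus arrives comfortably within the current green: do nothing.
    if predicted_eta <= signal.seconds_until_red:
        return 0.0
    # Bus arrives just after the light would turn red: extend if allowed.
    shortfall = predicted_eta - signal.seconds_until_red
    if shortfall <= signal.max_green_extension:
        return shortfall
    # Bus is too far away; let the signal cycle run normally.
    return 0.0

print(green_extension(predicted_eta=12.0,
                      signal=SignalState(seconds_until_red=8.0,
                                         max_green_extension=10.0)))
# -> 4.0 (extend the green by 4 seconds)
```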

Menard is also looking at bicycle and pedestrian traffic, something he says is of interest and priority to many transit authorities. He has worked to make bicycling safer by creating dedicated, curbed bike lanes with their own traffic signals synced with those of vehicle traffic to help avoid car-bicycle collisions. For pedestrians, Ricks explained that foot traffic uses sensors and adaptive controls to adjust settings in real time based on needs, a moment when the A.I. algorithm and real-time data intersect.

Another benefit of A.I. technology for traffic patterns involves first responders. Menard employed machine learning to analyze data from emergency vehicles like ambulances and fire trucks to improve speed. He noted that in many urban environments, congestion and traffic patterns prohibit first responders from promptly arriving on scene or to a hospital with a life-or-death situation. In Sacramento, Calif., he tackled this problem.

"It was literally night and day better in under 15 minutes," he said of taking a look at amassed data from all the relevant stakeholders in the city. There, he improved the slowest 10% of the emergency vehicles by more than 10 miles per hour, allowing them to arrive 70% faster on any response. Even the top-performing 10% of vehicles saw an improvement of 6 miles per hour.

For every single-occupant car that swaps to public transit, there is one less vehicle on the road causing congestion. Menard regularly reminds people that when they are sitting in their car, stuck in traffic, they are surrounded by many other people doing the exact same thing. If they traded to a shared vehicle, a high-occupancy mode of transit, they might speed along very quickly.

But it's always challenging to inspire commuters to change habits, so the new option needs to be compelling enough to motivate them to adjust the way they operate. "What you want in a transit system is to show up now [and] there's a bus ready to get you in a timely fashion," Ricks said. "We need to address traffic in order for transit to be that attractive alternative. There's quite a bit of work to still do."


Artificial Intelligence Theory Into Practice (and Into Controversy) | LBBOnline – Little Black Book – LBBonline

Art contests are usually controversial solely because the aesthetics of art are so subjective. However, the latest art controversy arose because of a cutting-edge technical question: is art created by artificial intelligence really "art" at all?

This story is interesting because, despite the sensationalist headlines that artificial intelligence is going to take over all our jobs, the fundamental truth, even in the area of creativity, is that humans are still needed to determine what image works and to fine-tune the end product (although AI will probably get better at this). The work of an artist or a creative person may change in its technique and technology, but creativity itself will never go away.

Already, people have been racing to redefine art in the context of digital technologies, now even more so within the context of tools like Midjourney and DALL-E 2. This is important because these AI engines are acquiring artistic styles from learning about existing art and then generating artwork that resembles the desired artist's style. You can now use these programs to create an ad featuring a yellow unicorn on the beach in the style of Jackson Pollock or a portrait of a brand spokesperson as if it were rendered by Rembrandt.
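
As a concrete illustration, here is how the unicorn example might be expressed against one such engine, using OpenAI's image generation API. The model choice, size, and client usage are assumptions for illustration; the article does not tie the example to any specific product or workflow.

```python
# Hedged sketch: generating a styled ad image via OpenAI's Images API
# (openai Python package, v1+). The model and size are assumptions, not
# the article's workflow; the prompt mirrors the example in the text.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-2",
    prompt="a yellow unicorn on the beach in the style of Jackson Pollock",
    n=1,
    size="1024x1024",
)

# The API returns a URL (or base64 data) for each generated image.
print(response.data[0].url)
```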

Are you feeling uncomfortable at the thought of AI being used this broadly? You are not alone. In a recent conversation, my colleague's first reaction to employing AI was that we shouldn't use it because our clients generally pay by rate card; if a copywriter or an art director were using AI to work faster or better, we would miss out on a higher fee.

There is also a potential danger to genuineness inherent in using AI technologies. These content generators can be, and have been, used to create and spread fake news or misinformation, relying on the fact that it can be virtually impossible to distinguish real images from fakes. There is growing scrutiny in the public's eye when sensational or controversial images or videos are presented, with everyone looking for evidence that a deepfake, filter, or Photoshop trick has been used. While brands such as Hulu, State Farm, and ESPN have used these techniques for eye-catching ads, we should always be mindful of the dangers and limits of these technologies. And as we learn how to mitigate these risks, we will also be forming new ways of generating and interacting with creative content.

We shouldn't be afraid of the change and challenges presented by artificial intelligence. Yes, the advent of AI in the creative space means we probably need to change the way we approach many issues, from the concept of genuineness to the way we are compensated by our clients for work that can be performed faster and more efficiently. However, limiting the way we embrace technology because of our reliance on an outdated business model is not a win-win in any situation. If we are going to see our business transformed by AI technologies, and it will happen, then from our client meetings all the way to the judging tent at the state fair, we need to understand that our human creativity is still front and centre, and is still the premium we offer our clients. Through AI, it will just gain a little help.

---


The Coming Menace of Artificial Intelligence And How We Can Respond As Artists – Fstoppers

The future of art has arrived. And it isn't pretty.

Like, I suspect, many people reading this article, I enjoy the late night comedy shows. When I was a kid, late night comedy was rather dry, filled with stale jokes designed to appeal to the broadest possible demographic. But as we've reached the age of peak T.V., so too have we reached the age of peak late night T.V. In our stratified media landscape, where we no longer turn in unison to the same broadcast and news outlets and the idea of a common knowledge base is quickly fading, late night comedians have also taken on the somewhat unfair mantle of often being a source for breaking news. I shouldn't have to watch John Oliver's monologue to get a deep dive into real issues affecting the world today. But, with the major networks taking a USA Today approach to most stories these days (only the minimum amount of ink required), late night comedy is often the only place devoting any amount of time to the pressing issues of the day.

This is not meant as an advertisement for late night comedy or a rant against modern news. Rather, it is meant to explain why I found myself watching a story on Last Week Tonight with John Oliver last week about a new service called Midjourney, which allows people to use artificial intelligence to create digital artwork by typing in a series of keywords. While the user themselves isn't doing any of the drawing or painting, the words they choose to enter result in the computer generating its best approximation of the user's intent. Predictably, the results range from sublime to awful. And the segment is played mostly for laughs. But the fact that something like this not only could exist, but already does exist, should raise an artist's hackles regardless of discipline.

Only a day or two after watching that segment, I saw a news story about an A.I. rapper that lost its record deal. Yes, you read that right. And I have so many questions. One, how did a rapper that literally doesn't exist get an actual record deal? Two, how did they teach the A.I. to rap? Three, who is behind the A.I. rapper? As it turns out, it's that last part that led the rapper's contract to be rescinded and leads to more pressing questions of why it was offered a contract in the first place. As it turns out, the digital gangster rapper, decked out in every stereotype of what certain people seem to think of African-Americans in general, was the brainchild of two non-African Americans. I'm not going to go into the entire history of blackface and minstrel shows in this essay. That is a topic for its own essay, book, and/or documentary series. But, let's just agree that a digital minstrel show is no better than a live one.

Of course, the echoes of history didn't stop there. The voice of the A.I. avatar who secured the record deal was actually that of a real rapper. But, much like many a shady music producer throughout the history of the industry, the producers had manipulated the situation to take the talent from the human artist, convert it into more easily manipulated ones and zeros, capitalize financially on the artist's work, then cut them out of the financial profits altogether.

As more facts about this A.I. rapper came to light and more outrage began to build, the record company canceled the contract and scrapped their plans. But why could they not have seen the problem with the plan upfront?

And, stepping away from any cultural appropriation issues for a moment, what implications does that have for the music industry going forward? We already live in a world of autotune, which can make almost any average shower singer into a songbird. As someone who is musically challenged, I appreciate the idea that a computer can make me sound like Marvin Gaye. But does that mean that I can actually sing? I'd say no. But what the story of the canceled rapper tells us is that we've rapidly reached a point where the human being might not be needed at all. Just plug in a handful of appropriate hashtags and voice samples and voila! You've got the next Whitney Houston. And, better yet, you don't have to pay her a cent.

The same thing is coming to the visual arts as well. I was watching Trevor Noah's show last night (yes, another late night comedian), and he had a story about an A.I. artist who won an art contest. The winning image was 90 percent created by artificial intelligence, according to the artist. He submitted the original keywords and did some finishing work in Photoshop. But the bulk of the heavy lifting was done by an algorithm. Without commenting at all on the quality of the result, what larger questions does that pose? Is the submitter really the artist behind the image at all if he only contributed 10% of the work? He did come up with the original set of keywords to generate the image. So, it's fair to say the image wouldn't exist without him. But all the intricate brushwork, lighting, and composition that can take a human being decades to perfect were all done by the computer. I don't know if his winning image would technically be termed a painting or a photograph. It's very photorealistic. But, even if it's technically a painting, the implications for photographers are obvious.

Why would a client pay you thousands of dollars in creative fees and licensing to take a picture of their products when they can have a computer generate an image for them? True, it probably won't be totally free. No doubt, a cottage industry will sprout up around A.I. images similar to creating VFX for film and television, and there will be new leaders in the emerging field demanding higher rates. That's how capitalism works. But does that mean that we are on a precipice of seeing our entire professions replaced by machines?

Let's take the example of a product photographer, for instance. Let's say Coca-Cola needs to make a new ad to promote the introduction of its new flavor of Coke. The ad is going to be a spinning soda can, with music and text. It is entirely possible that it would be far more cost effective for Coca-Cola to type the dimensions and hex values of its can into a computer and have it generate the photorealistic imagery of the spinning can than it would be to hire a full video crew. As mentioned earlier, they can even turn to an A.I. musical artist to generate a jingle for the spot that won't require them to pay royalties.

As someone whose work generally centers around human beings rather than products, I could easily delude myself into thinking the computer couldn't possibly do what I do. But that would clearly be presumptuous on my part, assuming that others share my same aversion to ones and zeros. I mean, honestly, even if you're a fan of Marvel movies, what are they really if not just 2.5-hour tributes to special effects? And, based on ticket sales, people don't seem to mind.

We've been debating ethical issues surrounding digitally manipulating human performances since back when Ridley Scott was forced to use a CGI version of actor Oliver Reed for parts of the film Gladiator following the actor's sudden death three weeks before the end of principal photography. In the ensuing 22 years, the technology has only gotten better, to the point where the majority of blockbuster movies these days contain large amounts of avatar stuntmen and women doing feats of strength that defy human ability. I won't even get started on the absolute mediocrity of the modern superhero film that seems to not only utilize A.I. avatars on screen, but apparently uses A.I. algorithms to write the scripts as well. Okay, I'm being a little snarky there. But, honestly, are you truly sure that Netflix doesn't have some kind of reverse algorithm that knows exactly what movies you are watching and directly feeds this information into another A.I. machine that is writing scripts designed to hit the broadest common denominator?

So, it's not even remotely out of the question that in a society which increasingly views art as mere content, sheer practicality will dictate that the majority of content we consume within 10 to 15 years, if not sooner, will be created almost entirely in a computer. The question is, as artists, what do we do about it?

The enterprising capitalist may look at that scenario and decide the only way to make money as an artist in the future is to become the puppet master behind the creation of A.I. art. If I were Benjamin Braddock's plastics-loving neighbor in 2022, it would probably be A.I. that I would be urging the hapless hero to get into. But as an artist who straddles both the analog and digital generations, the idea of giving up even an inkling of my artistic output to a machine just somehow seems wrong. That's not to say that it is wrong. We all have our own levels of comfort with how much of our art should be computer generated and how much should be done in camera, so to speak. I personally tend to reject computer generated art instinctively, like a body rejecting a blood transfusion. But that's entirely subjective, no doubt affected by my age and personal preferences. And, as CGI gets better and better, the line between computer generated art and the real world continues to blur. So my own red line will undoubtedly shift.

The other day, I was rewatching one of my favorite movies, Braveheart. The film is famous for its massive battle scenes with hordes of opposing armies going toe to toe across sweeping vistas. I didn't know it when I first saw it in the theater, but in order to have that many soldiers in the battle scenes, a large number of the soldiers were created digitally. This is commonplace now, but at the time was revolutionary. They did have a significant number of extras to play soldiers. But they essentially doubled and tripled those extras to fill out the scene. To me, this feels like a good use of digital technology. The filmmakers still accomplished the bulk of the work practically. They simply rounded out the hard work they had already done with these digital soldiers so that they wouldn't break the budget. I never noticed it in the film, and it didn't bother me.

Compare that with the modern superhero movie, where not only the actors, but the environment, and half of the props are created from whole cloth out of digital assets. The actors give performances against green screens (or the new, more immersive virtual LED walls), and everything around them is literally created by a computer. There are still, at this moment, real human beings serving as VFX artists to create those digital worlds. So, it's not the same as an A.I. generated environment. The human VFX artists are truly digital gods. But I do find myself emotionally disengaging from the majority of these modern action films simply because it's impossible for my mind to divorce from the fact that I'm watching ones and zeros turned into avatars rather than actual people.

People, as imperfect as we are, have value. In fact, it's our flaws that give us that little extra something that allows an audience to relate to a character and see a little something of themselves. Human beings have oddities that make them special. Audrey Hepburn complained about having an extra long neck, but I defy anyone to see her as anything less than beautiful. Humphrey Bogart had a lisp. Jimmy Stewart's awkward and stilted way of talking not only didn't hurt his career, but became one of his trademarks.

A few months ago, I was revisiting John Woo's 1997 action film Face/Off. There's a scene in the film where the two characters, played by Nicolas Cage and John Travolta, are in a speedboat race. At some point, there's a boat crash and the two characters are thrown from the boat, flying high through the air. As many times as I've seen the film, this particular time it became painfully clear to me that the people flying out of the boat in the wide shot were quite decidedly not Nicolas Cage and John Travolta. This makes sense. This is why the profession of stuntman exists. And it's not like, deep in my brain, I didn't already realize that there's no way the two stars would do that stunt themselves. But, in the context of the film, I had suspended disbelief long enough to just go with it. And, you know what? It totally worked.

Nowadays, those stuntmen would have been replaced by digital avatars. They would have been able to superimpose Cage and Travolta's faces onto those avatars and likely created the boats and the flailing bodies with VFX. But, somehow, I seriously doubt it would have felt as exciting. There's something about seeing a real person, even an imperfect one, in peril. One of the main reasons the new Top Gun: Maverick sequel works so well is because of Tom Cruise's dedication to practical stunts. Surely, the movie has its fair share of digital effects. But by keeping those to a minimum, you allow your audience a greater ability to connect to the story. Because what you are seeing is real. It's human. It's relatable.

So, as we continue to hurtle our way towards a world where A.I. is going to start taking away a larger and larger chunk of the creative jobs in which many of us make our living, I suspect that the most powerful tool we will have in our defense is our very imperfection. This sounds counterintuitive, but what's the one thing about us that a computer will never be able to replicate? Our humanity.

If your entire artistic voice is based on your ability to expose perfectly to your light meter, you may find yourself in trouble. Computers can already perform that trick. Just think about all the tasks your camera is already capable of doing for you. But your camera, no matter how many megapixels, can't replicate your artistic voice. Your artistic voice is the culmination of all the experiences that you've had in life, art-related or otherwise. Your artistic voice is the grand sum of your life's emotions. It is when you put yourself, all of yourself, into your work and art that you can create greatness.

There will no doubt be more and more pressure to turn over more and more of your artistic creation to technology as time goes on. And, no doubt, new technologies will come along that will genuinely afford you a chance to be a more effective storyteller and artist. But ask yourself why you wanted to be an artist in the first place. Was it just to get a result as quickly and as cheaply as possible? Was it just as a shortcut to fame and fortune? Or, did you become an artist because you had something to say? Did you look at the great ballad of life and feel that you had a right to contribute a verse?

There's no question that the machines are coming for us as artists. But trends and technologies can never replace the value of human intuition. Our creativity will sustain us. Our humanity will sustain us. And we, as artists, will survive. No matter what Skynet says.


New artificial intelligence lab at SMU’s AT&T Center for Virtualization to hunt bias in automated systems – EurekAlert

Intelligent Systems and Bias Examination Lab (ISaBEL) will pair industry and academic research to equalize impact in automated systems

DALLAS (SMU) – Quantifying and minimizing bias in artificial intelligence systems is the goal of a new lab established within SMU's AT&T Center for Virtualization. Pangiam, a global leader in artificial intelligence, is the first industry partner for the Intelligent Systems and Bias Examination Lab (ISaBEL).

ISaBEL's mission is to understand how artificial intelligence (AI) systems, such as facial recognition algorithms, perform on diverse populations of users. The lab will examine how existing bias can be mitigated in these systems using the latest research, standards, and other peer-reviewed scientific studies.

Algorithms provide instructions for computers to follow in performing certain tasks, and bias can be introduced through such things as incomplete data or reliance on flawed information. As a result, the automated decisions propelled through algorithms that support everything from airport security to judicial sentencing guidelines can inadvertently create disparate impact across certain groups.

ISaBEL will design and execute experiments using a variety of diverse datasets that will quantify AI system performance across demographic groups.
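
As an illustration of what quantifying performance across demographic groups can mean in practice, here is a minimal sketch that compares a face matcher's false match rate between two groups. The scores, threshold, and group labels are hypothetical, not ISaBEL's actual protocol.

```python
# Illustrative sketch of a bias-examination experiment: compare a face
# matcher's false match rate across demographic groups. All data here is
# hypothetical; a real study would use large, curated datasets.
from collections import defaultdict

# Each record: (similarity_score, same_person?, demographic_group)
trials = [
    (0.91, False, "group_a"), (0.42, False, "group_a"),
    (0.88, False, "group_b"), (0.95, False, "group_b"),
    (0.31, False, "group_a"), (0.57, False, "group_b"),
]
THRESHOLD = 0.80  # scores at or above this are declared a match

errors = defaultdict(int)
totals = defaultdict(int)
for score, same_person, group in trials:
    if not same_person:                # impostor comparisons only
        totals[group] += 1
        if score >= THRESHOLD:         # matcher wrongly says "same person"
            errors[group] += 1

for group in sorted(totals):
    fmr = errors[group] / totals[group]
    print(f"{group}: false match rate = {fmr:.2f}")
# Large gaps between groups would indicate demographic bias to mitigate.
```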

As the lab grows, ISaBEL will seek additional industry partners to submit their algorithms for certification. "SMU's AT&T Center for Virtualization is the perfect place to work on these issues with its focus on cross-disciplinary research, education and training, and community outreach," said center director Suku Nair.

Both artificial intelligence and computer vision, which enables computers to pull information from digital images and videos, are quickly evolving and becoming increasingly accessible and adopted.

"How to study and mitigate bias in AI systems is a fast-moving area, with pockets of researchers all over the world making important contributions," said John Howard, an AT&T Center research fellow and biometrics expert. "Labs like ISaBEL will help ensure these breakthroughs make their way into the products where they can do the most good and also educate the next generation of computer scientists about these important issues."

AI industry leaders know that end users must clearly understand bias measurement and the progress of bias mitigation to build trust among the general public and drive full market adoption.

"At Pangiam, we are fundamentally committed to driving the industry forward with impactful efforts such as this," said Pangiam Chief AI Officer and SMU alumnus Shaun Moore. "Bias mitigation has been a paramount focus for our team since 2018, and we set out to demonstrate publicly our effort toward parity of performance across countries and ethnicities. SMU is the perfect institution for this research."

ISaBEL is currently recruiting graduate and undergraduate students to participate in the lab's AI research. Please contact Suku Nair at attcenter@smu.edu if interested.

About SMU

SMU is the nationally ranked global research university in the dynamic city of Dallas. SMU's alumni, faculty and more than 12,000 students in eight degree-granting schools demonstrate an entrepreneurial spirit as they lead change in their professions, communities and the world.

About Pangiam

At Pangiam, we are innovators, problem solvers, technologists and experts. Founded by a team of global trade, travel, aviation and homeland security leaders with a passion for what's possible, Pangiam brings together visionaries from government, technology and commercial sectors to drive change. Through innovation, emerging technologies and the power of data, we solve the operational, facilitation and security challenges facing organizations today to get their world moving.



Artificial intelligence and augmented reality freshen up men’s fragrance – Packaging Europe

"Created by robots for humans" is a special edition body spray for young men which has been developed using augmented reality and artificial intelligence in a bid to produce the perfect scent and packaging.

Axe A.I. (LYNX A.I. in the UK) is the result of a specially designed AI applied to analyze 6,000 perfume ingredients, with 3.5 million potential combinations, with the goal of discovering the ideal fragrance. The result combines an aromatic floral blend of sage, artemisia, and mint, refreshed with marine, apple and citrus notes and finished with a woody, ambery, and moss background.

Not only did the brand use AI algorithms to help create the scent, but it is also doubling down on technology and using Augmented Reality (AR) to help market it.

Powered by Zappar's WebAR technology, all limited edition Lynx/Axe A.I. packs will feature a smartphone-scannable QR code that will launch a web page where British rapper Aitch will introduce the product and ask the user to spray an AR can of LYNX/Axe in the air to reveal a code allowing entry into a special competition. Six lucky winners will be invited to a memorable house party hosted by a hologram of Aitch himself.

Senior brand manager for Lynx at Unilever, Josh Plimmer, said: "Lynx has always been at the cutting edge of fragrance. The launch of Lynx A.I., which was created by crunching 46 terabytes of data, unlocks the code to smell iconic." This groundbreaking new scent was created in collaboration with Swiss firm Firmenich, which is an expert in fragrance and taste.

Caspar Thykier, CEO and co-founder at Zappar, added: "The Lynx AI concept and campaign clearly demonstrate how innovative brands are leveraging technology to create and market new products, getting ahead in the new connected pack revolution as an always-on platform and part of their owned-media strategy. AR and AI are driving more engaging product experiences and better connecting young customers with brands."

"It is daring, courageous and innovative to combine humans and technology in such an emotional field: the sense of smell," commented Firmenich's chief information officer, Eric Saracchi. "We are very excited to launch this game-changing fragrance with Lynx to power up guys' daily routines. Over 50 years of fragrance data and knowledge, along with over 46 terabytes of data, has led to the creation of Lynx A.I."

The special edition Lynx/Axe A.I. campaign will run through to March 2023.

This article was created in collaboration with AIPIA (the Active and Intelligent Packaging Industry Association). Packaging Europe and AIPIA are joining forces to bring news and commentary about the active and intelligent packaging landscape to a larger audience. To learn more about this partnership, click here.


Examining patent applications relating to artificial intelligence (AI) inventions: The Scenarios – GOV.UK

Introduction

1. This document contains a set of scenarios concerning inventions that involve artificial intelligence (AI) or machine learning (ML). It is designed to accompany the guidelines for examining patent applications relating to AI inventions. The guidelines are primarily concerned with the patentability of AI inventions in respect of the excluded matter provisions set out in section 1(2) of the Patents Act 1977.

2. The IPO considers that patents are available for AI inventions in all fields of technology. The scenarios in this document are intended to reflect and illustrate the wide range of diverse technical fields where AI inventions may be found.

3. Each scenario has a very brief description of how its AI invention works and an illustrative example of a patent claim. Each scenario includes a simplified assessment setting out our opinion on how each AI invention would likely be assessed in respect of section 1(2).

4. For the avoidance of doubt, we emphasise that this document is not a source of law. Our opinions on the patentability of the scenarios shall not be binding for any purpose under the Patents Act 1977.

5. The scenarios have been designed to focus on the question of excluded matter only. We have assumed that the claimed inventions are novel and non-obvious. We have also assumed that each scenario is sufficiently disclosed.

6. The assessments of excluded matter we give follow a simplified application of the four-step Aerotel approach, omitting detailed consideration of steps 1 and 2. Specifically:

a. At step 1, we have simply assumed each claim is sufficiently clear and that no issues of construction arise.

b. At step 2, we have simplified the assessment by stating what we consider to be the actual contribution.

c. At steps 3 and 4, we have simplified the analysis by focussing in the main on the "program for a computer" exclusion of section 1(2). Unless otherwise indicated, our non-binding opinions are limited to this exclusion.

7. Any comments or questions arising from these scenarios should be addressed to:

Phil Thorpe
Intellectual Property Office
Concept House
Cardiff Road
Newport
South Wales
NP10 8QQ

Telephone: 01633 813745. Email: Phil Thorpe

Nigel Hanley
Intellectual Property Office
Concept House
Cardiff Road
Newport
South Wales
NP10 8QQ

Telephone: 01633 814746. Email: Nigel Hanley

Background

This invention concerns a parking management system located in a car parking facility equipped with camera surveillance.

Images from the system's cameras are processed by a first neural network trained to detect a vehicle approaching an entrance of the facility. When the first neural network detects an approaching vehicle in an image, the image is passed to a second neural network, which implements an Automatic Number Plate Recognition (ANPR) system. The second neural network is trained to identify the number plate region in the image. A recognition module receives the identified number plate region and applies an optical character recognition algorithm to determine the registration number of the vehicle.

A number plate recognition system comprising:

an image capturing device positioned at an entrance to a parking facility;

a computing device for receiving images from the image capturing device and comprising:

a first neural network configured to detect a vehicle in a captured image;

a second neural network configured to receive an indication of the vehicle from the first neural network, detect the presence of a number plate in the image, and determine a region of interest in which the number plate is located; and

a recognition module configured to receive the region of interest and apply an optical character recognition process to the region of interest to determine characters of a registration number of the vehicle.

The contribution

Beyond a conventional surveillance system having a camera and a computer, the contribution made by the invention is:

a number plate recognition system using a first neural network to detect a vehicle in a captured image; a second neural network to detect the presence of a number plate in the image and to determine a region of interest in which the number plate is located; and a module to apply an optical character recognition process to the region of interest to determine the characters of the vehicle's registration number.

The contribution is a number plate recognition system which is not excluded by section 1(2). Although the number plate recognition system is computer implemented, it is more than a program for a computer as such because it is carrying out a technical process external to a computer. The number plate recognition system includes a combination of two neural networks and a recognition module that specifically perform image processing operations which are technical in nature (see Vicom). The system has a technical effect in the sense of signpost (i). It self-evidently solves a technical problem relating to the recognition of vehicle registration plates, so signpost (v) would also point to allowability.

The claimed invention is not excluded.
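
To make the claimed pipeline concrete, the two-stage arrangement might be sketched in PyTorch as below. The tiny network architectures, the image size, the 0.5 detection threshold, and the placeholder recognise_plate function are assumptions made for the sake of a short, runnable example; the scenario does not specify any of them.

import torch
import torch.nn as nn

class VehicleDetector(nn.Module):
    # First neural network: probability that a captured image contains a vehicle.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

class PlateLocaliser(nn.Module):
    # Second neural network: regresses a number plate bounding box (x, y, w, h),
    # expressed as fractions of the image size.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 4), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

def recognise_plate(region):
    # Recognition module: a real system would run OCR here (e.g. Tesseract).
    return "AB12 CDE"  # placeholder result

detector, localiser = VehicleDetector(), PlateLocaliser()
image = torch.rand(1, 3, 240, 320)  # one captured frame (batch, C, H, W)

if detector(image).item() > 0.5:                      # stage 1: vehicle present?
    x, y, w, h = localiser(image).squeeze().tolist()  # stage 2: plate region
    H, W = image.shape[2:]
    region = image[:, :, int(y * H):int((y + h) * H) + 1,
                   int(x * W):int((x + w) * W) + 1]
    print("Registration:", recognise_plate(region))

One design point worth noting: gating the localiser behind the cheaper detector, as the claim does, means the more expensive plate search only runs on frames that actually contain a vehicle.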

Gas supply systems are complex systems that are monitored by multiple sensors located in the supply system and in its operating environment. Typically, data from sensors can be combined and analysed by an operator to provide the operator with an overview of the operational state, both of the individual components within the system and of the system as a whole. This may assist the operator in identifying faults within the system and options for reconfiguring the system.

However, this approach requires specialised skills on the part of the operator and is prone to error, especially when data from large numbers of interconnected sensors must be considered. In particular, understanding the interdependency of changes made to the system is challenging.

This problem has been recognised by the inventor, who has developed an artificial intelligence system that receives and categorises sensor data relating to a gas supply system, identifies faults, and recommends system configuration changes to resolve them. In making recommendations, the AI system analyses the effect a configuration change may have on the system. The system may implement a recommended configuration change using an automatic operational controller.

A computer-implemented method of managing the operating state of a gas supply system using sensors within the gas supply system and in its operating environment, and characterised in that the method comprises an AI system:

receiving and analysing data from the sensors;

identifying fault conditions within the gas supply system based on the analysis;

and

reporting the fault conditions and generating a recommended solution to an automated operational controller of the gas supply system.

The contribution

The contribution is managing the state of a gas supply system using an AI system that identifies fault conditions in the gas supply system, based on sensor data relating to the operation of the gas supply system, and reports the fault conditions and recommended solutions to an automated operational controller.

The contribution does not fall solely within the computer program exclusion. The contribution made by the invention is a solution to a technical problem lying external to the computer on which the AI system runs, namely the monitoring of the operation of an external technical system (a gas supply system) for fault conditions. This is a technical contribution. Signposts (i) and (v) point to allowability.

The invention defined in the claim is not excluded under section 1(2).
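
As a rough, runnable illustration of the claimed data flow (sensor data in, fault class out, recommendation passed to the automated controller), consider the Python sketch below. The sensor list, fault classes, and recommendation table are invented for the example; the scenario does not name them.

import torch
import torch.nn as nn

# Hypothetical sensor layout and fault classes (not specified in the scenario).
SENSORS = ["inlet_pressure", "outlet_pressure", "flow_rate", "ambient_temp"]
FAULTS = ["none", "leak", "blockage"]

# A small classifier standing in for the (already trained) AI system.
classifier = nn.Sequential(nn.Linear(len(SENSORS), 16), nn.ReLU(),
                           nn.Linear(16, len(FAULTS)))

def recommend(fault):
    # Stand-in for the recommendation step; per the scenario, a real system
    # would analyse the effect of each configuration change before proposing it.
    return {"leak": "isolate section 4", "blockage": "open bypass valve"}[fault]

def controller_apply(action):
    # Stub for the automatic operational controller.
    print("Controller applying:", action)

reading = torch.rand(1, len(SENSORS))                # one vector of live sensor data
fault = FAULTS[classifier(reading).argmax().item()]  # identify the fault condition
if fault != "none":
    controller_apply(recommend(fault))               # report and act on the solution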

Analysing the motion of an object can be used to identify an activity. In some known examples, such as sporting events, analysing motion of an object can be useful for coaching. Alternatively, in gesture-based systems, a determined gesture can be used as a control mechanism or to issue an alarm. In one known example, a smoking cessation system generates an alarm in a wrist-worn device to deter the user from smoking.

Typically, these known systems operate by comparing real-time data to statistical models to determine motion, and so they are heavily reliant on the accuracy of those models. Consequently, such systems can be inaccurate.

The inventor has proposed a system that uses motion vectors derived from acceleration, velocity, and orientation in the X, Y and Z directions as an input to a neural network to classify the motion. The system functions by receiving motion data in real time from a device such as a sports watch or other motion sensors. The neural network processes the motion vector using a classification library to classify the motion as a particular movement.

A computer-implemented device for analysing motion comprising:

a controller having a data interface, a neural network, and a movement classification library;

sensors including a gyroscope, a magnetometer, and an accelerometer, wherein data from each sensor is output to the controller via the data interface;

characterised in that the controller is operable to:

determine a motion vector from the received data; and

provide the determined motion vector to the neural network, wherein the neural network is configured to classify the motion vector as one of the movements in the classification library.

The contribution

The contribution is a device that determines a motion vector from data captured by its sensors (gyroscope, magnetometer, accelerometer) and uses a neural network and classification library to classify the motion vector as a movement from the library.

The contribution is not solely a program for a computer since its task is to perform a process of classifying measured sensor data describing the physical motion of a computing device. This process is a technical process lying outside the computing device and is carried out by technical means. It concerns the classification of real-world sensor data as a determined movement. Signpost (i) would point to patentability.

The invention defined in the claim is not excluded under section 1(2).
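
A minimal sketch of the claimed device logic, assuming a nine-element motion vector (acceleration, velocity, and orientation in X, Y and Z) and an invented four-entry movement library; the crude velocity integration is purely illustrative:

import torch
import torch.nn as nn

MOVEMENTS = ["walking", "running", "swinging", "smoking_gesture"]  # assumed library

# Neural network that maps a motion vector to one of the library movements.
classifier = nn.Sequential(nn.Linear(9, 32), nn.ReLU(),
                           nn.Linear(32, len(MOVEMENTS)))

accel = torch.rand(3)        # accelerometer reading (X, Y, Z)
velocity = accel * 0.01      # naive single-step integration over dt = 10 ms
orientation = torch.rand(3)  # from gyroscope/magnetometer fusion
motion_vector = torch.cat([accel, velocity, orientation]).unsqueeze(0)

movement = MOVEMENTS[classifier(motion_vector).argmax().item()]
print("Classified movement:", movement)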

Cavitation in a pumping system is the formation of vapour bubbles in the inlet flow region of the pump. It can cause accelerated wear and mechanical damage to pump seals, bearings and other pump components, mechanical couplings, gear trains, and motor components.

A pump system has a measuring apparatus adapted to measure pump flow and pressure data associated with the pump system. A classifier system detects pump cavitation according to the flow and pressure data. The classifier system comprises a neural network which is trained using back propagation. The measuring devices comprise sensors (1,2) for measuring input pressure and output pressure associated with an inlet (3) and an outlet (4), respectively, of the pump system. The flow through the pump is also measured.

1. A method of training a neural network classifier system to detect cavitation in a pump system, the method including the steps of:

correlating each of a plurality of measured pump flow and pressure data pairs with one of a plurality of class values, thereby producing a training data set, wherein each of the plurality of class values is indicative of an extent of cavitation within the pump system and at least one of the plurality of class values is indicative of no cavitation in the pump system; and

training the neural network classifier system using the training data set and back propagation.

2. A method for detecting cavitation in a pump system comprising:

measuring pump flow and pressure data;

detecting pump cavitation according to said flow and pressure data;

wherein the detection step includes providing said flow and pressure data as inputs to a classifier system using a trained neural network, wherein the neural network provides a signal indicative of the existence and extent of cavitation in the pump system, and updating said signal (6) during operation of said pump system.

The contribution

Beginning with claim 1, the contribution it makes is a computer-implemented method of training (i.e. setting up) a neural network classifier so it can detect cavitation in a pump system, where the method uses back propagation with a set of training data comprising measurements of pump flow and pressure from the pump system that are each correlated with a value indicating a corresponding extent of pump cavitation in the system.

Although the contribution relies on a computer program, it is more than a computer program as such. The contribution relates to a process of using physical data to train a classifier for a specific technical purpose, namely the detection of cavitation in a pump system. This is technical in nature. There is a technical contribution in the sense of signpost (i).

Claim 2 also reveals a technical contribution since it relates to the use of a trained classifier for the specific technical purpose of detecting cavitation in the pump system. A technical process lying outside the computer in the sense of signpost (i) is performed.

The invention defined in claims 1 and 2 is not excluded under section 1(2).
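
Both claims can be illustrated in a few lines of PyTorch: claim 1 is the back-propagation training loop over labelled (flow, pressure) pairs, and claim 2 is the use of the trained classifier on live measurements. The network size, the number of class values, and the synthetic data are assumptions made for the example.

import torch
import torch.nn as nn

N_CLASSES = 4  # assumed: class 0 = no cavitation, classes 1-3 = increasing extent

net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, N_CLASSES))

# Claim 1: training data of measured (flow, pressure) pairs, each correlated
# with a class value indicating the extent of cavitation (synthetic here).
pairs = torch.rand(256, 2)
labels = torch.randint(0, N_CLASSES, (256,))

optimiser = torch.optim.SGD(net.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):  # training by back propagation
    optimiser.zero_grad()
    loss_fn(net(pairs), labels).backward()
    optimiser.step()

# Claim 2: during operation, feed live flow and pressure measurements to the
# trained classifier and keep updating the cavitation signal.
live = torch.rand(1, 2)
signal = net(live).argmax().item()
print("Cavitation class:", signal)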

Many cars are fitted with catalytic converters to reduce the amounts of gases such as NOx and CO in their exhaust fumes. A problem for such converters is that their operational efficiency changes with the ratio of fuel to air in the combustion chambers of the engine. The fuel-to-air ratio must therefore be maintained at a fixed value to keep the catalytic converter operating efficiently.

It is known to control the amount of fuel injected into an engine's combustion chamber using feed-forward control in relation to throttle position and additional feedback control in relation to an oxygen sensor (or air/fuel sensor) provided in the exhaust. Although this works well, it can be difficult to control the fuel-to-air ratio correctly when the engine is accelerating or decelerating.

The inventor has developed an injector control system that uses a trained neural network to determine the amount by which a given fuel injection amount should be adjusted during acceleration or deceleration to maintain the correct fuel-to-air ratio and thus maintain catalytic converter efficiency. The neural network receives data inputs relating to the operational state of the engine, such as engine speed (RPM), intake air pressure, throttle position, fuel injection amount, air intake temperature, engine coolant temperature, and data from an exhaust gas sensor. The neural network outputs a signal indicating a change to the fuel injection amount for controlling the engine.

A computer-implemented neural network for adjusting the amount of fuel injected into a cylinder of a combustion engine, the neural network comprising:

an input layer having:

an input for receiving the RPM of the engine;

an input for receiving intake air pressure of the engine;

an input to receive current throttle position;

an input to receive the present injected fuel amount;

an input to receive air intake temperature;

an input to receive water cooling temperature;

an input to receive exhaust gas sensor data;

at least one hidden layer, wherein the hidden layer is connected to the input layer;

an output layer connected to the at least one hidden layer; and wherein

the output layer has an output indicating an amount by which the fuel injection should be changed.

The contribution

The contribution is a neural network that outputs a control signal relating to an amount by which fuel injection should be changed based on inputs relating to the operational state of the engine, as defined in the claim.

The contribution is a solution to a technical problem lying outside a computer, i.e. maintaining correct fuel-to-air ratio in an engine, so it is more than a program for a computer as such. The neural network takes as its inputs data representing the operating state of the engine and outputs a control signal indicating the amount by which a fuel injection amount should change. The control signal is suitable for controlling a technical process that exists outside of the computer on which the neural network runs. This is a technical contribution. Signposts (i) and (v) apply.

The invention defined in the claim is not excluded under section 1(2).
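
The claimed network simply maps the seven named engine-state inputs through at least one hidden layer to a single output. A minimal sketch, assuming a 16-unit hidden layer and inputs normalised to comparable scales (the claim specifies neither):

import torch
import torch.nn as nn

# The seven engine-state inputs named in the claim, in a fixed order.
INPUTS = ["rpm", "intake_air_pressure", "throttle_position",
          "fuel_injection_amount", "air_intake_temperature",
          "water_cooling_temperature", "exhaust_gas_sensor"]

# Input layer -> one hidden layer -> output layer, per the claim.
net = nn.Sequential(nn.Linear(len(INPUTS), 16), nn.Tanh(), nn.Linear(16, 1))

state = torch.rand(1, len(INPUTS))  # normalised engine state vector
delta = net(state).item()           # amount by which fuel injection should change
print(f"Adjust fuel injection by {delta:+.4f} (normalised units)")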

It is useful to measure the percentage of blood leaving each ventricle of a heart when determining the health of the heart. This measurement can be estimated by a skilled operator of an ultrasound imaging system by imaging a heart and marking out and measuring the boundaries of the ventricles at either extreme of a heartbeat. However, the accuracy of the estimate depends upon the operator's skill and judgement.

The inventor has devised a method in which a trained neural network is used to provide a measurement of the percentage of blood ejected by a heart by analysing a series of images of the heart over a heartbeat. The neural network is trained using a supervised learning approach.

A computer-implemented method for determining a percentage of blood ejected from a given heart during a heartbeat, the method comprising:

training a neural network with heart imaging data sets, each set comprising imaging data of a ventricle over time and associated blood ejection percentages, the sets being associated with different hearts;

and using the trained neural network to:

receive a set of imaging data of a ventricle of the given heart;

output a percentage of blood ejection for the given heart.

The contribution

The contribution is a method of estimating a percentage of blood ejected from a heart by training a neural network with heart imaging data which has been labelled with blood ejection percentage, and then obtaining an estimate for the percentage of blood ejected from a given heart over a heartbeat by providing a set of images of that heart (over its heartbeat) to the trained neural network.

The contribution is more than a program for a computer as such because it relates to an improved measurement of the percentage of blood ejected from a heart during a heartbeat. This is a technical measurement of a physical system. This improved measurement is an example of a technical effect upon a process lying outside the computer that implements the invention, following signpost (i). This is a technical contribution.

The invention is not excluded under section 1(2).
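
The claimed method might look like the sketch below: a supervised training loop over image sets labelled with blood ejection percentages, followed by inference on a new heart. The frame count, image size, network architecture, and synthetic data are all assumptions made for illustration.

import torch
import torch.nn as nn

FRAMES, H, W = 16, 64, 64  # assumed: 16 frames of 64x64 ultrasound per heartbeat

class EFRegressor(nn.Module):
    # Maps one heartbeat's image sequence to an ejection percentage.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(FRAMES, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.features(x)) * 100  # scale to a percentage

net = EFRegressor()

# Supervised training pairs: imaging data from different hearts, each labelled
# with its measured ejection percentage (synthetic stand-ins here).
images = torch.rand(32, FRAMES, H, W)
targets = torch.rand(32, 1) * 100

optimiser = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(10):
    optimiser.zero_grad()
    nn.functional.mse_loss(net(images), targets).backward()
    optimiser.step()

print("Estimated ejection %:", net(torch.rand(1, FRAMES, H, W)).item())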

Traders on a trading exchange monitor the performance of various stocks and tradeable instruments to try to identify opportunities to make a beneficial trade. It requires specialist knowledge, understanding, and experience to recognise and identify patterns and trends in the market. This means traders often specialise in a narrow range of instruments, e.g. energy shares, financial derivatives, or commodities.

The inventor has recognised that this can result in beneficial trades being overlooked. A trader may miss a trading opportunity for instruments held as part of their position, or a chance to reduce a loss or increase a profit from a transaction. To assist the trader, the inventor has developed an AI that can identify patterns and correlations between share and instrument prices, identify trades based on recent performance and timing differences, and predict future behaviours. One advantage the AI offers is the ability to see connections that would otherwise be opaque and non-obvious.

The AI is coupled to an automatic brokerage platform to allow it to execute trades according to profit/loss limits provided by the trader.

A computer-implemented financial instrument trading system comprising an exchange market, a broker terminal, an AI assistant, and an automated brokerage system, characterised in that the AI assistant is configured to:

receive current and historical price data for tradeable financial instruments from the exchanges;

cross reference combinations of financial instruments to identify correlated groups of instruments;

See original here:
Examining patent applications relating to artificial intelligence (AI) inventions: The Scenarios - GOV.UK