5 Uses of Artificial Intelligence in the Contact Center – Customer Think

Artificial intelligence isn't just a science fiction concept anymore. You can find it everywhere, from helping medical teams analyze results to personalizing advertisements on social media. It offers plenty of benefits for contact center agents, too. Here are some of the best ways to use it in your contact center.

When most people think about AI and customer service, they think about chatbots. Many people will use a chatbot before calling through to a contact center. This means that in order to provide successful customer service, chatbots need to be able to handle common questions.


Luckily, "soft" AI has advanced enough that most common customer queries can be resolved automatically.

At a minimum, if you'd put it in your FAQ, your chatbot should be able to answer it. Good chatbots also respond to small-talk questions without lying about being human. Many customers will ask "are you a robot?", so having a prepared answer like "yes, but I'm pretty smart. How can I help?" will go a long way. You can also link chatbots into your contact center solutions, allowing potential customers to schedule a callback with a live agent.
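As a minimal sketch of the idea, a chatbot that answers FAQ-style questions and handles the "are you a robot?" question can start as simple keyword matching. All of the questions, answers, and keywords below are invented examples; production chatbots use trained NLP models rather than hand-written rules.

```python
# Minimal rule-based FAQ chatbot sketch (illustrative only; all
# question/answer pairs and keywords here are invented examples).
FAQ = {
    ("opening", "hours", "open"): "We're open 9am-5pm, Monday to Friday.",
    ("password", "reset"): "You can reset your password at Settings > Security.",
    ("refund", "return"): "Refunds are processed within 5 business days.",
}

SMALL_TALK = {
    ("robot", "human", "bot"): "Yes, I'm a bot, but I'm pretty smart. How can I help?",
    ("hello", "hi", "hey"): "Hello! What can I do for you today?",
}

def reply(message: str) -> str:
    words = set(message.lower().split())
    for table in (SMALL_TALK, FAQ):
        for keywords, answer in table.items():
            if words & set(keywords):
                return answer
    # Unrecognised query: hand off to a live agent instead of guessing.
    return "I'm not sure about that one. Would you like to schedule a callback with an agent?"

print(reply("are you a robot"))
print(reply("how do I reset my password"))
```

Anything not covered by the rules falls through to the callback offer, mirroring the hand-off to a live agent described above.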

By using AI to answer routine questions, you free up your contact center team to deal with more complex cases, leaving them with more time to provide better service.

Ideally, you want your contact agents to do what they're best at: it's customer service best practice to focus on the current customer rather than a myriad of other tasks. However, there's often a lot of additional work they need to do alongside taking calls and responding on social media.

By automating as many routine tasks as possible, you once again free up your agents' time to focus on the customer, and you might save your customers time, too.

One particularly good example of this is voice biometrics. Instead of walking callers through a lengthy ID confirmation process, AI can identify the user's voice and use it to validate the account. The customer can have their identity confirmed by the time they reach an agent, letting them jump straight into the problem.

It's not just calls that can benefit from automation: workflow automation is equally helpful. Currently, a lot of workflow automation requires manual set-up by the agent. This can be done without coding skills, since many programs are designed so that less technically inclined people can handle it themselves. Certain trigger events prompt certain behaviors. For example, hanging up the phone might open a blank record entry.
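A sketch of how such trigger events might be wired up follows. The event and handler names are invented; real contact center platforms expose their own trigger APIs.

```python
from collections import defaultdict

# Sketch of trigger-based workflow automation (event and handler names are
# invented; real contact-center platforms expose their own trigger APIs).
handlers = defaultdict(list)
log = []

def on(event_name):
    """Register a handler to run whenever the named event fires."""
    def register(fn):
        handlers[event_name].append(fn)
        return fn
    return register

def fire(event_name, **context):
    for fn in handlers[event_name]:
        fn(**context)

@on("call_ended")
def open_blank_record(caller_id, **_):
    log.append(f"opened blank record for {caller_id}")

@on("call_ended")
def start_wrap_up_timer(**_):
    log.append("started wrap-up timer")

# Hanging up the phone fires "call_ended", which triggers both handlers.
fire("call_ended", caller_id="C-1042")
print(log)
```

The point is that one trigger event can fan out to any number of routine follow-up actions without the agent doing anything.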

However, it's possible for AI to take over some of this. Imagine mentioning in a call that you'll email the customer, only to find your email already open for you, or their file opening up as soon as they say their name. It's important to note that the goal of AI in the contact center isn't to replace your agents: they have vital skills like empathy, emotional intelligence, and the ability to connect with callers. Rather, it's to enable them to focus on providing excellent customer service.

No one likes being sent to the wrong place. Customers often find themselves frustrated at being bounced between departments, while agents are left dealing with something outside their expertise. AI can help improve interaction routing: that is, directing customer contacts more efficiently, both via traditional automatic call distribution (ACD) and similar systems for digital channels.

One of the main players in the AI revolution is natural language processing (NLP). Used in your IVR or to analyze written messages, NLP lets customers say exactly what their issue is rather than relying on stock phrases or suggestions. From there, customers can be given an automated response (for instance, if they're asking a routine question like how to reset a password), directed to the correct department, or escalated to a higher-tier agent if the issue is too complex for the AI to assess accurately.
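A minimal, keyword-based sketch of that three-way routing decision is below. The department names and keywords are invented; a production system would use a trained NLP intent classifier rather than keyword lists.

```python
# Illustrative keyword-based router (invented departments and keywords;
# a real system would use a trained NLP intent classifier).
ROUTES = {
    "technical_support": {"crash", "error", "install", "bug"},
    "billing": {"invoice", "charge", "refund", "payment"},
    "sales": {"buy", "price", "quote", "upgrade"},
}
AUTOMATED = {"password", "reset", "hours", "address"}  # routine, self-serve topics

def route(message: str) -> str:
    words = set(message.lower().split())
    if words & AUTOMATED:
        return "automated_response"            # routine question: answer automatically
    scores = {dept: len(words & kws) for dept, kws in ROUTES.items()}
    best = max(scores, key=scores.get)
    if scores[best] == 0:
        return "escalate_to_agent"             # too ambiguous to classify
    return best                                # send to the best-matching department

print(route("I need to reset my password"))
print(route("my invoice shows a double charge"))
print(route("I have a complicated situation"))
```

Each message lands in exactly one of the three outcomes the paragraph describes: automated answer, department routing, or escalation.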


This is especially helpful for companies with multiple contact centers. It can ensure that your inbound sales team aren't getting tied up with queries for your technical support department, or that questions about recruitment don't get diverted to customer service.

One thing AI can do that humans struggle with is analyzing large data sets. What might seem a ridiculous undertaking for a person can be easy for a computer.

Let's say you provide online learning courses, and you're trying to work out your workforce scheduling for the next year. Previously, you've found yourself over- or understaffed as you've tried to work out the best balance for your contact center.

Even allowing for the usual estimates (a spike in calls when exam results come out, and again before term starts), tracking trends can be difficult. AI could monitor the past five years of call data, looking at call length, first-tier resolution rates, and other metrics, and then provide you with an accurate trend map. This would allow you to schedule your team effectively.
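As a toy illustration of this kind of trend analysis (all call volumes below are invented), simply averaging historical monthly volumes is enough to surface the seasonal spikes:

```python
from statistics import mean

# Toy seasonal-trend sketch: average historical monthly call volumes to
# spot predictable spikes (all numbers are invented for illustration).
history = [
    [900, 850, 950, 1000, 1100, 1300, 1200, 2100, 2600, 1100, 950, 900],   # year 1
    [950, 900, 980, 1050, 1150, 1350, 1250, 2200, 2700, 1150, 980, 940],   # year 2
    [980, 930, 1010, 1080, 1200, 1400, 1300, 2300, 2800, 1200, 1000, 960], # year 3
]

# Average volume per calendar month across all years.
monthly_avg = [mean(year[m] for year in history) for m in range(12)]
baseline = mean(monthly_avg)

# Flag months more than 25% above baseline: predictable demand spikes
# (here, the invented data peaks in months 8 and 9) that need extra staffing.
spikes = [m + 1 for m, v in enumerate(monthly_avg) if v > 1.25 * baseline]
print(spikes)
```

A real forecasting model would also weigh call length, resolution rates, and year-over-year growth, but the staffing signal comes out the same way: find when demand reliably exceeds the baseline.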


Other large data sets might include the content of the calls themselves. With NLP, AI picks out certain phrases and their frequency. Using this, you might note that a high percentage of complaint calls are about one specific product. This would allow you to resolve that issue more permanently. You can also track how well your agents are doing with set goals. For instance, if you want them to promote your website, you can see how often they do so.
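A minimal sketch of that phrase-frequency tracking (toy transcripts and a hypothetical product name):

```python
from collections import Counter

# Sketch of phrase-frequency analysis over call transcripts (toy
# transcripts and a hypothetical product, "widget pro"; real systems
# run NLP over transcribed calls at much larger scale).
transcripts = [
    "the widget pro keeps overheating after an hour",
    "my widget pro arrived with a cracked screen",
    "question about my invoice from last month",
    "the widget pro overheating issue again",
    "can you tell me more about the website offers",
]

# How often does each tracked phrase appear across calls?
phrases = ("widget pro", "website")
counts = Counter()
for t in transcripts:
    for phrase in phrases:
        if phrase in t:
            counts[phrase] += 1

print(counts)
```

Here, one product dominates the complaint calls, which is exactly the kind of signal that points to a fixable root cause; the same counting works for goal tracking, such as how often agents mention the website.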

Of course, it's not just post-contact monitoring that AI can do: it can also monitor interactions in real time. Live monitoring is particularly useful for training, allowing corrective actions to happen in the moment rather than in a round-up meeting later in the day. This feedback helps new agents develop quickly, without letting bad habits set in.

This live monitoring can benefit the contact center as a whole by allowing you to monitor agents' performance and quickly correct any mistakes. Rather than relying on regular reviews of call logs (even if AI makes those faster!) or communications via other channels to identify problems, issues can be noticed immediately and handled accordingly.

Not only does this help maintain quality, but it also means corrective actions are milder: you can let someone know the first time they make a mistake, rather than letting it build into a major problem they might not even have been aware of.

It can also benefit individual agents, giving them useful insights or pertinent information as needed. Rather than relying solely on a set script or rigid interaction guidelines, real-time monitoring allows your agents more freedom, while still giving them instant feedback in the form of suggested responses or tips based on past data. This means conversations can be more fluid, without losing the tried-and-tested nature of scripts.

All of the above are uses available today, but the field is still young, and there's a lot more potential. You shouldn't expect AI to replace your customer service agents, but rather to act as an assistant.

From AI that can accurately predict customer intent (meaning your agents can be one step ahead of their callers) to pre-warning agents about possible problems (imagine picking up a call and an AI warning you that the caller sounds angry!), there are plenty of innovations yet to come.


Artificial Intelligence for Medical Evacuation in Great-Power Conflict – War on the Rocks

It is 4:45 a.m. in southern Afghanistan on a hot September day. A roadside improvised explosive device has just gone off, followed by the call: "Medic!" Spc. Chazray Clark stepped right on the bomb, losing both of his feet and his left forearm. Clark's fellow soldiers immediately provided medical care, hoping he might survive. After all, the unit's forward operating base was only 1.5 miles away, and it had a trained medical evacuation (medevac) team waiting to respond to an event of this nature.

A 9-line medevac request was submitted just moments after the explosion, and Clark's commanding officer, Lt. Col. Mike Katona, had been assured that a medevac helicopter was en route to the secured pickup location. Unfortunately, that was not the case; the medevac team was still awaiting orders 34 minutes after the call for help was transmitted.

Although the casualty collection point was secure, the policy in place required an armed gunship to escort the medevac helicopter, and none were available. It wasn't until 5:24 a.m. that the medevac helicopter started to fly toward the pickup location, but it was too late. Clark arrived at Kandahar Air Field medical center at 5:49 a.m. and was pronounced dead just moments later.

No one knows if Clark would have survived his wounds had he received advanced surgical care earlier, but most people would agree that his chances of survival would have been much higher. What went wrong? Why wasn't an armed escort available during this dire time? Are the current medevac policies outdated? If so, can artificial intelligence improve upon current practices?

With limited resources available, the U.S. military ought to carefully plan how medevac assets will be utilized prior to and during large-scale combat operations. How should resources be positioned now to maximize medevac effectiveness and efficiency? How can ground and air ambulances be dynamically repositioned throughout the course of an operation based on evolving, anticipated locations and intensities for medevac demand (i.e., casualties)? Moreover, how should those decisions be informed by operational restrictions and (natural and enemy-induced) risks to the use of ground and aerial routes as well as evacuation procedures at the casualty collection points? Finally, whenever a medevac request is received, which of the available assets should be dispatched, considering the anticipated future demands of a given region?

The military medevac enterprise is complex. As a result, any automation of location and dispatching decision-making requires accurate data, valid analytical techniques, and the deliberate integration and ethical use of both. Artificial intelligence and, more specifically, machine-learning techniques combined with traditional analytic methods from the field of operations research provide valuable tools to automate and optimize medevac location and dispatching procedures.

The U.S. military utilizes both ground and aerial assets to perform medevac missions. Rotary-wing air ambulances (i.e., HH-60M helicopters) are typically reserved for the most critically sick and/or wounded, for whom speed of evacuation and flexibility for routing directly to highly capable medical treatment facilities are essential to maximizing survivability. Ground ambulances cannot travel as far or as fast as air ambulances, but this limitation is offset by their greater proliferation throughout the force.

Machine Learning to Predict Medevac Demand

More than 4,500 U.S. military medevac requests were transmitted between 2001 and 2014 for casualties occurring in Afghanistan. The location, threat level, and severity of casualty events resulting in requests for medevac influence the demand for medevac assets. Indeed, some regions are likely to have higher demand than others, requiring more medevac assets when combat operations commence. A machine-learning model (e.g., neural networks, support vector regression, and/or random forests) can accurately predict demand for each combat region by considering relevant information, such as current mission plans, projected enemy locations, and previous casualty event data.
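As a toy illustration of the prediction step (not the models or data from the studies cited here; all numbers are invented), even a simple least-squares trend fit per region conveys the idea of projecting regional demand from history:

```python
# Toy sketch of per-region demand forecasting: fit a least-squares trend
# to past medevac request counts and project the next period. All numbers
# are invented; a real model (neural network, SVR, random forest) would
# train on mission plans, enemy locations, and historical casualty data.
def fit_trend(y):
    """Ordinary least squares for y = a*t + b over t = 0..n-1."""
    n = len(y)
    t_mean = (n - 1) / 2
    y_mean = sum(y) / n
    num = sum((t - t_mean) * (yt - y_mean) for t, yt in enumerate(y))
    den = sum((t - t_mean) ** 2 for t in range(n))
    a = num / den
    return a, y_mean - a * t_mean

# Monthly medevac requests per combat region (hypothetical).
history = {
    "region_north": [12, 15, 14, 18, 21],
    "region_south": [30, 28, 26, 25, 22],
}

forecasts = {}
for region, counts in history.items():
    a, b = fit_trend(counts)
    forecasts[region] = a * len(counts) + b   # project one period ahead

print({region: round(v, 1) for region, v in forecasts.items()})
```

The invented data shows demand rising in one region while falling in another, which is precisely the shift that drives the repositioning questions discussed below.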

Effective machine-learning models require historical data that is representative of future events. Historical data for recent medevac operations can be obtained from significant activity reports from previous conflicts and the Medical Evacuation Proponency Division. For example, one study utilizes Operation Iraqi Freedom flight logs obtained from the Medical Evacuation Proponency Division to approximate the number of casualties at a given location to help identify the best allocation(s) of medical assets during steady-state combat operations. Open-source, unclassified data also exist (e.g., International Council on Security and Development, Defense Casualty Analysis System, and Data on Armed Conflict). Although historical data may not exist for every potential future operating environment, it can still be utilized to generalize casualty event characteristics. For example, one study models the spatial distribution of casualty cluster centers based on their proximity to main supply routes and/or rivers, where large populations are present. It utilizes Monte Carlo simulation to synthetically generate realistic data, which, in turn, can be leveraged by machine-learning practitioners to predict future demand.

Demand prediction via a machine-learning model is essential, but it is not enough to optimize medevac procedures. For example, consider a scenario wherein the majority of demand is projected to occur in two combat regions located on opposite sides of the area of operations. If there are not enough medevac resources to provide a timely response for all anticipated medevac demands in both of those regions, where should medevac assets be positioned? Alternatively, consider a scenario wherein one region needs the majority of medevac support at the beginning of an operation, but the anticipated demand shifts to another region (or multiple regions) later. Should assets be positioned to respond to demand from the first region even if it makes it impossible to reposition assets to respond to future demand from the other regions in a timely manner? How do these decisions impact combat operations in the long run?

Optimization Methods to Locate, Dynamically Relocate, and Dispatch Medevac Assets

How do current decisions impact future decisions? The decisions implemented throughout a combat operation are interdependent and should be made in conjunction with each other. More specifically, to create a feasible, realistic plan, it is necessary to make the initial medevac asset positioning decisions while considering the likely decisions to dynamically reposition assets over the duration of an operation. Moreover, every decision should account for total anticipated demand over all combat regions to ensure the limited resources are managed appropriately.

How many possible asset location options are there for a decision-maker to consider? As an example, suppose there are 20 dedicated ground and aerial medevac assets that need to be positioned across six different forward operating bases. Moreover, suppose decisions regarding the repositioning of these assets occur every day for a 14-day combat operation. For any day of the two-week combat operation, any of the 20 assets can be repositioned to one of six operating bases. Without taking into consideration distances, availability, demand constraints, or multiple asset types, the approximate number of options to consider is over 10,000! It is practically impossible for an individual (or even a team of people) to identify the optimal positioning policy without the benefit of insight provided by quantitative analyses.
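To get a feel for how the options multiply, here is a back-of-envelope count under the simplifications described above (each asset independently assignable to any base, re-decided daily; distances, availability, and capacity constraints ignored):

```python
# Back-of-envelope count of the positioning decision space under a naive
# simplification: each asset independently assigned to one of the bases,
# re-decided every day, with no operational constraints applied.
assets, bases, days = 20, 6, 14

configs_per_day = bases ** assets      # each asset picks one of 6 bases
total = configs_per_day ** days        # a fresh assignment every day

print(f"{configs_per_day:.2e} configurations per day")
print(f"~10^{len(str(total)) - 1} over the full operation")
```

Even this crude count comfortably exceeds the "over 10,000" figure, underscoring why no team of human planners can enumerate the positioning options by hand.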

Whereas a machine-learning model can predict when and where demand is likely to occur, it does not tell decision-makers where to position limited resources. To overcome this, operations research techniques (more specifically, the development and analysis of optimization models) can efficiently identify an optimal policy for dynamic asset location strategies for the area of operations over the entire planning horizon. The objectives of an optimization model define the quantitatively measured goals that decision-makers seek to maximize and/or minimize. For example, decision-makers may seek to maximize demand coverage, minimize response time, minimize the cost of repositioning assets, and/or maximize the safety and security of medevac personnel. The decisions correspond to when, where, and how many of each type of asset are to be positioned across the forward operating bases for the planned combat operation, as well as how assets are dispatched in response to medevac requests. Accurately informing an optimization model requires information about unit capabilities and dispositions, including the number, type, and initial positioning of medevac assets as well as the projected demand locations, threat levels, and injury severity levels. An optimization model also considers operational constraints to ensure a feasible solution is generated. These constraints include travel distances and times, fuel capacity, forward operating base capacity, and political considerations.

Medevac assets may need to be dynamically repositioned (i.e., relocated) across different staging facilities, especially as disposition and intensity of demand changes, despite the long-term and strategic nature of combat operations. For example, it may be necessary to reposition assets from forward operating bases near combat regions with lower projected demand to bases near regions with higher projected demand. Moreover, it is important to consider projected threat and severity levels when determining which type of assets to position. For example, it may be beneficial to position armed escorts closer to combat regions with higher projected threat levels. Similarly, air ambulances should be positioned closer to combat regions with higher projected severity levels (i.e., life-threatening events). Inappropriate positioning of assets may result in delayed response times, increased risks, and decreased casualty survivability rates. One way to determine the location of medevac assets is to develop an optimization model that simultaneously considers the following objectives: maximize demand coverage, minimize response time, and minimize the number of relocations subject to force projection, logistical, and resource constraints. Trade-off analysis can be performed by assigning different weights (i.e., importance levels) to each objective considered. Given an optimal layout of medevac assets, another important decision that should be considered is how air ambulances will be dispatched in response to requests for service.
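A deliberately tiny, hypothetical illustration of the weighted multi-objective idea follows: invented demand figures and weights, solved by brute force rather than a real optimization solver, trading projected coverage against the cost of relocations.

```python
from itertools import product

# Tiny brute-force sketch of weighted multi-objective positioning: place
# 2 air ambulances across 3 bases to cover projected regional demand
# while penalising relocations (all numbers and names are invented).
demand = {"base_a": 10, "base_b": 40, "base_c": 25}  # projected demand near each base
current = ("base_a", "base_a")                        # where the 2 assets sit today
w_coverage, w_moves = 1.0, 5.0                        # objective weights

def score(layout):
    covered = sum(demand[b] for b in set(layout))     # demand covered by occupied bases
    moves = sum(a != b for a, b in zip(current, layout))
    return w_coverage * covered - w_moves * moves

# Enumerate every possible placement of the 2 assets and keep the best.
best = max(product(demand, repeat=2), key=score)
print(sorted(best), score(best))
```

Changing the weights performs exactly the trade-off analysis described above: raising `w_moves` makes the solver prefer staying put, while raising `w_coverage` makes it chase the projected demand.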

The U.S. military currently utilizes a closest-available dispatching policy to respond to incoming requests for service, which, as the name suggests, tasks the closest-available medevac unit to rapidly evacuate battlefield casualties from point of injury to a nearby trauma facility. In small-scale and/or low-intensity conflicts, this policy may be optimal. Unfortunately, this is not always the case, especially in large-scale, high-intensity conflicts. For example, suppose a non-life-threatening medevac request is submitted and only one air ambulance is available. Moreover, assume high-intensity operations are ongoing and life-threatening medevac requests are expected to occur in the near future. Is it better to task the air ambulance to service the current, non-life-threatening request, or should the air ambulance be reserved for a life-threatening request that is both expected and likely to occur in the near future?

Many researchers have explored scenarios in which the closest-available dispatching policy can be greatly improved upon by leveraging operations research techniques such as Markov decision processes and approximate dynamic programming. Dispatching authorities should take into account a large number of uncertainties when deciding which medevac assets to utilize in response to requests for service. Utilizing approximate dynamic programming, military analysts can model large-scale, realistic scenarios and develop high-quality dispatching policies that account for inherent uncertainties and important system characteristics. For example, one study shows that dispatching policies based on approximate dynamic programming can improve upon the closest-available policy by over 30 percent with regard to a lifesaving performance metric based on response time, for a notional scenario in Syria.
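A toy expected-value calculation (all numbers invented) shows why closest-available dispatching can be suboptimal when future life-threatening demand is anticipated; the cited research uses Markov decision processes and approximate dynamic programming over far richer state spaces, but the intuition is the same:

```python
# Toy expected-value sketch of the dispatch-or-reserve question (invented
# numbers; real models use MDPs/ADP over much richer state spaces).
p_urgent = 0.6     # chance a life-threatening request arrives soon
v_routine = 1.0    # value of servicing the routine request promptly
v_urgent = 10.0    # value of servicing a life-threatening request promptly
v_late = 3.0       # value if the urgent case must wait for the asset to return

# Policy 1: closest-available -- send the only ambulance to the routine call,
# so any urgent request that arrives is served late.
dispatch_now = v_routine + p_urgent * v_late

# Policy 2: reserve the ambulance for the anticipated urgent request; if no
# urgent request materialises, it services the routine call after all.
reserve = p_urgent * v_urgent + (1 - p_urgent) * v_routine

print(f"dispatch now: {dispatch_now:.2f}, reserve: {reserve:.2f}")
```

With these (invented) values, reserving the asset wins; with a low enough `p_urgent`, closest-available wins, which is why the right policy depends on anticipated future demand rather than distance alone.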

Ethical Application Requires a Decision-Maker in the Loop

Optimization models may offer valuable insights and actionable policies, but what should decision-makers do when unexpected events occur (e.g., air ambulances become non-mission capable) or new information is obtained (e.g., an unmanned aerial vehicle captures enemy activity in a new location)? It is not enough to create and implement optimization models. Rather, it is necessary to create and deliver a readily understood dashboard that presents information and recommended decisions, the latter of which are informed by both machine learning and operations research techniques. To yield greater value, such a dashboard should allow its users (i.e., decision-makers) to conduct what-if analysis to test, visualize, and understand the results and consequences of different policies for different scenarios. Such a dashboard is not a be-all and end-all tool. Rather, it is a means for humans to effectively leverage information and analyses to make better decisions.

The future of decision-making involves both artificial intelligence and human judgment. Whereas humans lack the power and speed that artificial intelligence can provide for data processing tasks, artificial intelligence lacks the emotional intelligence needed when making tough and ethical decisions. For example, a machine-learning model may be able to diagnose complex combat operations and recommend decisions to improve medevac system performance, but the judgment of a human being is necessary to address intangible criteria that may elude quantification and input as data.

While the U.S. military medevac system has proven highly effective and efficient in recent operations in Afghanistan, Iraq, and Syria, future operating environments may be vastly different from where the United States has been fighting over the past 20 years. Artificial intelligence and operations research techniques can combine to create effective decision-making tools that, in conjunction with human judgment, improve the medevac enterprise for large-scale combat operations, ultimately saving more lives.

The Way Forward

The Air Force Institute of Technology is currently examining a variety of medevac scenarios with different problem features to determine both the viability and benefit of incorporating the aforementioned artificial intelligence and operations research techniques within active medevac operations. Once a viable approach is developed, the next step is to obtain buy-in from senior military leaders. With a parallel, macroscopic-level focus, the Joint Artificial Intelligence Center, the Department of Defense's Artificial Intelligence Center of Excellence, is currently seeking new artificial intelligence initiatives to demonstrate value and spur momentum to accelerate the adoption of artificial intelligence and create a force fit for this era.

Capt. Phillip R. Jenkins, PhD, is an assistant professor of operations research at the Air Force Institute of Technology. His academic research involves problems relating to military defense, such as the location, allocation, and dispatch of medical evacuation assets in a deployed environment. He is an active-duty Air Force officer with nearly eight years of experience as an operations research analyst.

Brian J. Lunday, PhD, is a professor of operations research at the Air Force Institute of Technology who researches optimal resource location and allocation modeling. He served for 24 years as an active-duty Army officer, both as an operations research analyst and a combat engineer.

Matthew J. Robbins, PhD, is an associate professor of operations research at the Air Force Institute of Technology. His academic research involves the development and application of computational stochastic optimization methods for defense-oriented problems. Robbins served for 20 years as an active-duty Air Force officer, holding a variety of intelligence and operations research analyst positions.

The views expressed in this article are those of the authors and do not reflect the official policy or position of the U.S. Air Force, the Department of Defense, or the U.S. government.

Image: Sgt. 1st Class Thomas Wheeler


Why Fashion Needs More Imagination When It Comes To Using Artificial Intelligence – Forbes

Virtual Fashion Show created using 3D digital design and AI machine learning algorithms

Until now, the use of artificial intelligence (AI) in the fashion industry has focused mostly on streamlining processes and increasing sales conversion. Areas that have traditionally taken precedence include finding efficiencies through automation, detecting product defects and counterfeit goods with image recognition, and increasing sales conversion through personalised styling. Creative uses of AI have been underexplored, but they pose a mammoth opportunity for an industry rapidly digitising its design and presentation methods during the pandemic, and most likely afterwards too. Why is creative AI so underutilised in fashion, and what are the nascent opportunities for designers and brands? Is the use of AI in fashion design and presentations inevitable?

Matthew Drinkwater, Head of the Fashion Innovation Agency at London College of Fashion, believes that "Initial uses of Artificial Intelligence have focused on quantifiable business needs, which has allowed for start-ups to offer a service to brands." He contends that "Creativity is much more difficult to quantify and therefore more likely to follow behind."

In a practical sense, perhaps an additional limitation has been the gulf between the skillsets of fashion designers and computer scientists. London College of Fashion seems to think so, having recently launched an eight-week AI course for 20 volunteer fashion students to learn Python, write code to gather fashion data, and then use it to develop creative fashion solutions and experiences. When asked about the potential of AI in fashion, Drinkwater said: "For me, it is in the unpredictability of an algorithm." He acknowledged the creative talent of designers but suggested that the collaboration between creatives and neural networks may be where the unexpected is delivered. It's here, he predicts, that an imperfect result could arise, one that challenges our perception of what fashion design or showcasing could or should be.

The AI course was developed by the Fashion Innovation Agency (FIA) in partnership with Dr Pinar Yanardag of MIT Media Lab. Working on the course was FIA's 3D designer, Costas Kazantzis, who also designed 3D environments for one of the course outputs: an AI-driven catwalk. He explained during a Zoom call that the students hadn't coded before and were from a wide range of courses, including pattern cutting (for garment construction) and fashion curation. Despite them being complete beginners learning Python, "when they understood the technical capabilities of AI they were able to thrive," he said.

The AI models used were generative adversarial networks (GANs), a type of machine learning in which two adversarial models are trained simultaneously: a generator ("the designer"), which learns to create images that look real, and a discriminator ("the design critic"), which learns to tell real images apart from fakes. During training, the generator becomes better at creating images that look real, while the discriminator becomes better at detecting fakes. Applied creatively, this allows computer-generated imagery and movement that look plausible (and, ideally, aesthetically pleasing) to the viewer.
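The generator-versus-discriminator loop can be sketched in miniature. Below is a toy, pure-Python illustration on one-dimensional numbers rather than images; every choice here is a drastic simplification for clarity, not the deep convolutional GANs used in fashion work.

```python
import math
import random

# Toy 1-D GAN sketch (a drastic simplification for illustration; real
# fashion GANs are deep networks trained on image data). Real samples
# come from N(4, 1); the generator maps N(0, 1) noise toward them.
random.seed(0)

a, b = 1.0, 0.0   # generator: g(z) = a*z + b
w, c = 0.0, 0.0   # discriminator: d(x) = sigmoid(w*x + c)
lr = 0.01

def sigmoid(x):
    x = max(min(x, 60.0), -60.0)   # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-x))

for step in range(5000):
    real = random.gauss(4, 1)
    z = random.gauss(0, 1)
    fake = a * z + b

    # Discriminator step: push d(real) -> 1 and d(fake) -> 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        grad = sigmoid(w * x + c) - label   # binary cross-entropy gradient
        w -= lr * grad * x
        c -= lr * grad

    # Generator step: push d(fake) -> 1, i.e., fool the discriminator.
    grad = sigmoid(w * fake + c) - 1.0
    a -= lr * grad * w * z   # chain rule through fake = a*z + b
    b -= lr * grad * w

# Generated samples should have drifted toward the real mean of 4.
samples = [a * random.gauss(0, 1) + b for _ in range(1000)]
print(round(sum(samples) / len(samples), 2))
```

The adversarial push-and-pull is the whole trick: the discriminator's feedback is the only training signal the generator ever sees.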

The students formed teams and devised proof-of-concept showcases of the uses of AI within the fashion industry, as well as being shown how and where to gather appropriate data to train their own algorithms. The course covered a range of AI applications, including training an AI model to classify items of clothing and predict fashion trends from social media, and style transfer to recognise imagery and create new designs. A pivotal output from the course was a virtual fashion show which was created from archive catwalk show footage but was placed in a new 3D environment with the models wearing new 3D-generated outfits. Drinkwater believes this is an example of how even those with limited experience in the field can collaborate to push boundaries.

Talking me through the workflow for the virtual show, Kazantzis explained that computer vision algorithms were used to estimate skeletal movement data from an archive fashion show video. This data was then turned into a 3D pose simulation using another algorithm and applied to a 3D avatar in Blender to replicate the model's movement in the original video.

CLO software was used to design and animate the garments for the avatar models, and style transfer (which uses image recognition via convolutional neural networks, or CNNs, to recognise patterns, textures and colours, then suggests designs and their placement on the garment) was used to develop the textiles and final garment surfaces. The 3D environment for the virtual show was created in the gaming engine Unity, which Kazantzis favours for its flexibility in design and diverse outputs, including VR and AR applications. He used particle systems to create atmospheric weather effects, including fog, and to create sea life, such as the jellyfish in the underwater environment. The show was brought together in Unity (once the animated garments and textures were imported), creating a final experience ready for export as a VR scene, as a website that can be navigated in 360 degrees, or as an AR experience in Sketchfab, for example. It's here that the power of AI to develop creative products, environment design and immersive content simultaneously seems most potent.

Kazantzis worked alongside Greta Gandossi, a 2019 graduate of the MA Pattern and Garment Technology course at London College of Fashion (who also holds an architecture degree), and Tracy Bergstrom (who has a data science background). The trio formed a pipeline for extracting the movement from the archive footage, creating the 3D garments, and importing them into Unity. The students who created this virtual fashion show alongside them were Mary Thrift, Tirosh Yellin and Ashwini Deshpande.

The AI course commenced in March and the proof-of-concept virtual show was completed in June. This seems incredibly swift, and prompted me to ask Matthew Drinkwater whether this type of content creation is affordable and feasible for small and large brands alike. "Absolutely," he said, explaining that the project was created on a nominal budget. A caveat? "The more GPUs you throw into the mix the more impressive your results are likely to be." Additionally, he recognised that the skill sets required are varied, and that these factors would affect the timeframe. Despite this, he said: "I would fully expect to see many more examples of AI appearing on the catwalk in seasons to come."

This proof-of-concept virtual show launches today on the fifth day of London Fashion Week, which is operating in a decentralised manner across digital and physical platforms. Most brands are choosing to livestream a catwalk show happening behind closed doors, or to release a conceptual or catwalk-style video online at a specified showtime. Data from Launchmetrics has indicated that engagement generated by these digital show methods has been much lower than for physical fashion shows. Could AI-generated virtual fashion experiences shape the future of fashion shows? Echoing others in the industry, Drinkwater said: "It has long been evident that fashion weeks have needed to evolve to provide a much more varied and accessible experience." He went on to add: "One fact is undeniable, the increased blurring of our physical and digital lives is going to lead to fashion shows that are markedly different from the traditional runway of the past."

Landmark uses of creative AI include the computer-generated artwork Portrait of Edmond Belamy, which sold at Christie's in 2018 for $432,500 (almost 45 times higher than the estimate). The artwork was created by self-taught AI artist Robbie Barrat using a GAN model, working in partnership with the Paris-based arts collective Obvious. Barrat has also worked on an AI-generated Balenciaga runway show and trained a neural network on the past collections of fashion brand Acne Studios to generate designs for their AW20 men's collection. On the consumer and marketing side, there has been an expansion of deepfakes that place consumers into the content of the brands they covet. The RefaceAI app face-swaps the user into branded videos, and recently generated more than one million refaces and 400,000 shares in a day during a test collaboration with Gucci.

Mathilde Rougier's generative upcycled textile 'tiles'

On the experimental side, and seeking to address sustainability through the upcycling of waste, fashion design graduate Mathilde Rougier is using convolutional neural networks (CNNs) to design new textiles composed of interlocking offcut fabrics (akin to Lego), creating perpetually new fashion products from old ones. Her process is explained in detail in a recent Techstyler article and marks a new level of convergence between fashion design, AI and sustainability problem-solving.

Creative AI in fashion is in its infancy but is clearly gaining momentum. With the rapid adoption of 3D digital design in both fashion education and the industry and the ongoing restrictions in physical showcasing, the widespread creative use of AI appears to depend only on a critical mass of use cases to inspire industry adoption. If a group of students with no coding experience can develop this virtual show in just a few months on a nominal budget, the future of the fashion show looks refreshingly unpredictable.

More here:
Why Fashion Needs More Imagination When It Comes To Using Artificial Intelligence - Forbes

This is how AI could feed the world’s hungry while sustaining the planet – World Economic Forum

Artificial intelligence is transforming the world at a rapid and accelerating pace, offering huge potential but also posing social and economic challenges. Human beings are naturally fearful of machines; this is a constant. Technological advancements tend to outpace cultural shifts. It has taken the shock of a global pandemic to accelerate the uptake of many technologies that have been around for at least a decade.

Unsurprisingly, much of the public discussion on AI has focused on recent controversies around facial recognition, automated decision-making and exam algorithms. Job losses due to automation have further underscored the need for AI systems to become better regulated and more ethical.

But while the risks posed by AI have dominated the headlines, behind the scenes there is a quiet revolution underway, as a new crop of startups develops AI systems to tackle the greatest challenges facing humanity, from climate change to COVID-19.

A new cohort of purpose-driven innovators has entered the AI space, and AI for Good is emerging as one of the most powerful tools to achieve the SDGs and improve lives and livelihoods worldwide.

The agricultural sector employs over 25% of the world's working population and is responsible for sustaining 7.5 billion people. Despite decades of efforts by governments and industry, more than a quarter of those people (a staggering 1.9 billion) remain moderately or severely food insecure, and roughly 820 million do not get enough to eat on a daily basis. SDG 2, Zero Hunger, aims to end hunger, achieve food security and improved nutrition, and promote sustainable agriculture.

The challenge is daunting: with the global population projected to expand to almost 10 billion by 2050, experts estimate that feeding the planet will require farmers to grow 69% more calories than they did in 2006. At the same time, our existing agricultural systems are already wreaking havoc on the planet, contributing over 10% of global carbon emissions and using up to half of the world's habitable land, with devastating consequences for biodiversity, fresh water supplies, and natural ecosystems.

A more intelligent use of land could feed us all and save the planet.

Image: Our World in Data

How can we expand food supply to meet SDG 2 while continuing to make progress on SDGs 6 (Clean Water and Sanitation), 13 (Climate Action), and 15 (Life on Land)? Enter AI, which is beginning to transform virtually every aspect of the agriculture and food system.

Here are 4 ways AI is helping to feed the hungry while saving the planet:

1. Some agricultural AI startups are focused on the field, training powerful algorithms on vast new datasets to improve the efficiency and performance of traditional farms. Tel Aviv-based Prospera, for example, collects 50 million data points every day across 4,700 fields, analysing them with AI to identify pest and disease outbreaks and uncover new opportunities to increase yields, reduce pollution and eliminate waste.

2. Others are focused on building entirely new approaches to farming from the ground up, enabled by AI technology. Plenty and AeroFarms are pioneering vertical indoor farming, using computer vision and AI algorithms to optimize nutrient inputs and increase yields in real time. Root AI is also using computer vision, but combines it with advanced robots to detect when fruit is ripe and harvest it at its prime.

3. While AI-driven indoor farming is unlikely to feed the whole planet by 2050, the more of it we can do the better: the most advanced, AI-enabled operations are estimated to produce over 20 times more food per acre than traditional fields, using roughly 90% less water.

4. AI is not only being used to improve agricultural productivity, but to take on one of the most environmentally damaging parts of the food sector: industrial meat production. Chile-based NotCo and Brazil-based Fazenda Futuro have both developed AI tools that analyse vast amounts of plant data to identify the best approaches to replicating the taste and texture of meat using only plant-based materials. The market is clearly paying attention: over the last two years sales of refrigerated plant-based meat have grown 125%, and both companies have secured major investments from leading venture capitalists. With meat production accounting for almost 50% of agricultural emissions globally, and a great deal of local pollution, the growing shift to AI-powered plant-based meat alternatives is poised to yield enormous environmental benefits.

Governments too are realizing the value of AI in feeding their citizens and improving their agricultural productivity. To help realize these opportunities, The World Economic Forum recently launched a partnership with the Government of India and the State of Telangana to identify high-value use cases for AI in agriculture, develop innovative AI solutions, and drive their widespread adoption.

Speaking at the project's launch event, Telangana's IT and Industries Minister voiced the increasingly common view among government officials: "We feel that AI will offer immense possibilities for farmers, governments and all other stakeholders of the ecosystem."

Improving the food system is critically important to achieving several SDGs, but it is just one of many ways that AI is helping to usher in the more equitable, sustainable world that the SDGs envision.

No one technology is going to solve all of the world's problems, and achieving the SDGs will require ambitious government policies, corporate commitments and individual actions in addition to new technologies. We will need to use every tool at our disposal, and with AI becoming more powerful every day, we should encourage more innovators and entrepreneurs to focus on new ways to use this technology to address our biggest societal challenges.

This article was published on Forbes.

Read the original:
This is how AI could feed the world's hungry while sustaining the planet - World Economic Forum

How creative artificial intelligence (AI) and fashion meet – TechHQ

Artificial intelligence (AI) in fashion is no longer a secret: it has been widely used to help businesses streamline processes and increase sales. But the skill sets of fashion designers and computer scientists are miles apart, so it is only recently that the creative applications of AI in this industry have been explored.

"Initial uses of artificial intelligence have focused on quantifiable business needs, which has allowed for start-ups to offer a service to brands," Matthew Drinkwater, head of the Fashion Innovation Agency (FIA) at London College of Fashion (LCF), told Forbes. "Creativity is much more difficult to quantify and therefore more likely to follow behind."

Seeing the opportunity for AI to play a bigger role in the creative process, LCF has launched an AI course aiming to develop creative fashion solutions and experiences that challenge the current approaches to fashion design.

The 8-week AI and Fashion course, which has already seen 20 fashion students learn to write code in Python, was developed by the FIA in partnership with Dr. Pinar Yanardag, a former post-doc at MIT Media Lab and creator of the collaborative collective How to Generate (Almost) Anything.

Despite being complete beginners in Python, the students were said to thrive once they understood the technical capabilities of AI.

The students were asked to come up with proof-of-concept showcases and uses of AI within the fashion industry, before being shown how to gather appropriate data to train their algorithms to classify items of clothing and predict fashion trends from social media, as well as use style transfer to recognize imagery and create new designs.

By the end of the course, the students created a virtual fashion show using archive LCF show footage that was placed in different 3D environments with models wearing new 3D-generated and animated garments.

The AI models used were generative adversarial networks (GANs). GANs are said to have become the defining look of contemporary AI art and can be used to create photos of imaginary fashion models without the need to hire a model, photographer, or makeup artist.

The technology is a type of machine learning in which two adversarial models are trained simultaneously: a generator (the designer) and a discriminator (the design critic). The basic idea of GANs is that during training, the generator becomes better at creating images that look real, while the discriminator becomes better at detecting images that are not real. This enables the model to learn in an unsupervised manner.
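As a toy sketch of that adversarial loop (illustrative only: real GANs use deep neural networks on image data, whereas the single-parameter scalar models here are an assumption made for brevity), a generator can learn to mimic numbers drawn near 4.0:

```python
import math
import random

random.seed(0)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# Discriminator ("design critic"): logistic regression D(x) = sigmoid(a*x + c).
a, c = 0.0, 0.0
# Generator ("designer"): produces samples centred on one learnable parameter.
theta = 0.0
lr = 0.05

for _ in range(5000):
    real = random.gauss(4.0, 0.5)           # sample from the "true" distribution
    fake = theta + random.gauss(0.0, 0.5)   # sample from the generator

    # Discriminator step: raise D(real) and lower D(fake)
    # (gradient ascent on log D(real) + log(1 - D(fake))).
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    a += lr * ((1 - d_real) * real - d_fake * fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: move theta so its samples fool the discriminator
    # (gradient ascent on log D(fake)).
    d_fake = sigmoid(a * fake + c)
    theta += lr * (1 - d_fake) * a

print(f"generator mean after training: {theta:.2f}")  # should drift toward 4.0
```

Each side improves only because the other does: the discriminator sharpens its boundary, which gives the generator a gradient to follow, exactly the dynamic described above.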

Computer vision algorithms were used to detect skeletal movement from archive video footage, which was then turned into a 3D pose simulation. This allowed the team to animate purely digital avatars in Blender based on the movement details of the original footage.

CLO software was then used to design and animate the 3D garments, before style transfer was applied to add extra detail to the clothes. The virtual fashion show was finally realized in Unity, where the digital models were imported and the FIA's 3D designer created immersive, animated environments.

The proof-of-concept virtual show was launched at London Fashion Week, with most brands livestreaming catwalk shows behind closed doors, or releasing catwalk-style videos online, due to ongoing social distancing measures.

Engagement numbers for digital shows have been shown to be low in comparison to physical fashion shows; however, enthusiasm for AI-generated virtual fashion experiences continues to grow.

"It has long been evident that fashion weeks have needed to evolve to provide a much more varied and accessible experience," Drinkwater said. "One fact is undeniable: the increased blurring of our physical and digital lives is going to lead to fashion shows that are markedly different from the traditional runway of the past."

Notable uses of creative AI include the computer-generated artwork Portrait of Edmond Belamy, which sold for US$432,500 in 2018 and was made by self-taught AI artist Robbie Barrat using a GAN model. Since then, Barrat has gone on to work on an AI-generated Balenciaga runway show and has trained neural networks for fashion brand Acne Studios to generate designs for their AW20 men's collection.

The full potential of creative AI in fashion has yet to be discovered. However, with continuing restrictions making physical live events barely possible, the opportunities the technology presents are something brands would do well to capitalize on.

As creative AI continues to find valuable use cases within the fashion industry, the possibility of an exciting facet of fashion emerging is very real.

View post:
How creative artificial intelligence (AI) and fashion meet - TechHQ

Qualtrics Announces Delighted AI, a Machine Learning Engine to Automate Every Step of the Customer Feedback Process – PRNewswire

SALT LAKE CITY, SEATTLE, and PALO ALTO, Calif., Sept. 23, 2020 /PRNewswire/ -- Qualtrics, the leader in customer experience and creator of the experience management category, today announced Delighted AI, an artificial intelligence and machine learning engine built directly into Delighted's customer experience platform. Delighted, a Qualtrics company, developed its AI technology to intelligently automate every aspect of the customer feedback process, from scheduling to analysis and reporting, so that companies can focus on closing feedback loops faster than ever. Delighted AI is complementary to Qualtrics' existing Text iQ enterprise technology for CustomerXM, optimized for Delighted customers.

Today, the most successful customer experience programs are no longer measurement or metrics-based. Over the past few months, Net Promoter Scores have significantly declined in response to COVID-19, exposing customer experience gaps that companies have failed to address or identify. The companies who have emerged as customer experience leaders in the crisis have continuously listened to their customers, and more importantly, responded quickly to their preferences and expectations.

Delighted AI was created based on semantics and themes in the millions of customer feedback responses that Delighted and its customers have analyzed over several years to drive customer experience success.

"Delighted AI helped the right teams at our company understand customer feedback with more precision than ever before, which has been critical in the middle of a pandemic where we need to adapt and respond even more quickly to our customers' needs and expectations," said Roxana Turcanu, Growth Director for Adore Me, a New York-based e-commerce company. "We just recently launched a new try-at-home brand called Outlines, and we were able to do so with the help of Delighted AI by capturing and applying feedback early - this enabled us to pivot, at a rate we've never been able to do, towards what our customers actually wanted from our brand."

Benefits of Delighted AI include:

"Customer experience programs are rapidly evolving as companies have realized that relying on traditional metrics alone does not determine customer success. Instead, the customer experience leaders are winning based on gathering in-the-moment feedback that is immediately actionable and building a culture of continuous listening," said Caleb Elston, co-founder of Delighted. "We created Delighted AI to empower companies to spend less time configuring, implementing, and analyzing so they can focus on acting on insights faster than any other technology or human could before."

Acquired by Qualtrics in 2018, Delighted is one of the fastest and easiest ways to take action on customer feedback, which enables innovative brands and organizations of any size to quickly implement a customer experience program across every channel.

Learn more about Delighted AI here.

About Qualtrics
Qualtrics, the leader in customer experience and creator of the Experience Management (XM) category, is changing the way organizations manage and improve the four core experiences of business: customer, employee, product, and brand. Over 11,000 organizations around the world are using Qualtrics to listen, understand, and take action on experience data (X-data): the beliefs, emotions, and intentions that tell you why things are happening, and what to do about it. The Qualtrics XM Platform is a system of action that helps businesses attract customers who stay longer and buy more, engage employees who build a positive culture, develop breakthrough products people love, and build a brand people are passionate about. To learn more, please visit qualtrics.com.

Contact: [emailprotected]

SOURCE Qualtrics

http://www.qualtrics.com

See more here:
Qualtrics Announces Delighted AI, a Machine Learning Engine to Automate Every Step of the Customer Feedback Process - PRNewswire

Impact of Artificial Intelligence on the current education system – Latest Digital Transformation Trends | Cloud News – Wire19

Education can be defined as a process in which teachers and students give and receive systematic instruction, respectively. Learning can take place in either a formal or an informal setting; more commonly, students receive education in a formal setting such as a high school, college or university. Education is often considered a significant determinant of an individual's future success. Justifiably, there are various efforts to improve the current education systems in multiple countries worldwide.

Among the many methods employed by various countries to improve the education sector is the use of AI (artificial intelligence). AI systems are defined by the use of computers to accomplish tasks that previously required human intellect. AI utilizes algorithms that collect, classify, organize, and analyze information in order to draw conclusions from it, which is also called machine learning. As such, the use of machine learning has the potential to bring about several benefits for various industries, including the education system.

Traditional education systems are fast changing to adapt to the technological advancements of today's world. This is especially true with the widespread access to various educational sources of information online. The implementation of educational AI systems has the potential to help students develop their skills and acquire more knowledge in multiple subjects. Therefore, as artificial intelligence continues to evolve, the hope is that it can help fill the gaps in the education system.

The implementation of AI can improve the efficiency and personalization of learning tasks, as well as streamline administrative tasks. These are benefits enjoyed by students and teachers alike. The implementation of artificial intelligence also helps students get more time with their respective teachers, which is where unique human qualities are required to supplement what AI would struggle with.

AI has altered students' way of learning, as they no longer need to physically attend classes when they have access to learning material via the internet. As previously mentioned, AI allows educators to spend more time with students by taking over some administrative tasks. But AI has done far more for education than that. Below are a few more effects of artificial intelligence on the education industry:

Education should be accessible to everyone regardless of geographical location. Learning through artificial intelligence has long been considered a deciding factor in eliminating geographical boundaries through the facilitation of flexible learning environments globally.

The availability of smart content is a widely discussed topic, whereby AI systems can be utilized to offer quality content comparable to what students would otherwise source from professional writers online.

AI learning environments can adapt to a student's level of skill, mastery of coursework, and so on, thus identifying the challenges they face. Accordingly, they provide relevant materials and activities to boost the student's knowledge base in a specific subject.

You have probably noticed that most streaming services offer a list of shows you are likely to enjoy, which is an excellent example of AI personalizing recommendations to your favorite genres. Various other such systems can be used in education to cater to the different needs of various students.
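As a toy illustration of this kind of adaptivity (a sketch, not any specific product: the function and its thresholds are assumptions made for the example), a system might pick the next exercise level from a student's recent success rate:

```python
# Toy adaptive-learning policy: choose the next exercise level
# from the student's recent results at their current level.
def next_difficulty(history, num_levels=3):
    """history: list of (level, correct) pairs, most recent last."""
    if not history:
        return 0  # a new student starts at the easiest level
    level = history[-1][0]
    # Success rate over the last three attempts at the current level.
    recent = [ok for lvl, ok in history if lvl == level][-3:]
    rate = sum(recent) / len(recent)
    if rate >= 0.8 and level < num_levels - 1:
        return level + 1  # mastered: step up
    if rate <= 0.4 and level > 0:
        return level - 1  # struggling: step down
    return level          # keep practising at this level
```

Three correct answers in a row would promote the student a level, while repeated failures demote them. Real systems replace this hand-written heuristic with learned models of student knowledge, but the feedback loop is the same.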

Teachers often spend time on administrative duties such as marking exams, reading students' assignments and planning the timetable, all of which can be completed by AI systems such as automated assignment-processing and grading systems. Thus, teachers get to spend more time with their students.

AI usage, at the very least, reduces the chances of human error delaying specific processes in the learning environment. An excellent example of AI used in schools is the collection of data from various sources and the creation of accurate forecasts to plan effectively for the future.

AI systems also offer opportunities for international students who speak different languages or have visual or hearing impairments; for instance, an artificial intelligence system that generates captions in real time during a presentation. As you can see, the education sector has a lot to gain from the implementation of AI into its various systems.

AI systems bring about a world of opportunities to share information globally. Today there are quite a few artificial intelligence systems that help provide a conducive learning environment for all students. The use of AI in learning is quite promising and should be exploited for the benefits it has to offer.

Also read: 9 ways Artificial Intelligence (AI) is impacting education

See original here:
Impact of Artificial Intelligence on the current education system - Latest Digital Transformation Trends | Cloud News - Wire19

At the International Mathematical Olympiad, Artificial Intelligence Prepares to Go for the Gold – Quanta Magazine

The 61st International Mathematical Olympiad, or IMO, begins today. It may go down in history for at least two reasons: due to the COVID-19 pandemic, it's the first time the event has been held remotely, and it may also be the last time that artificial intelligence doesn't compete.

Indeed, researchers view the IMO as the ideal proving ground for machines designed to think like humans. If an AI system can excel here, it will have matched an important dimension of human cognition.

"The IMO, to me, represents the hardest class of problems that smart people can be taught to solve somewhat reliably," said Daniel Selsam of Microsoft Research. Selsam is a founder of the IMO Grand Challenge, whose goal is to train an AI system to win a gold medal at the world's premier math competition.

Since 1959, the IMO has brought together the best pre-college math students in the world. On each of the competitions two days, participants have four and a half hours to answer three problems of increasing difficulty. They earn up to seven points per problem, and top scorers take home medals, just like at the Olympic Games. The most decorated IMO participants become legends in the mathematics community. Some have gone on to become superlative research mathematicians.

IMO problems are simple, but only in the sense that they don't require any advanced math (even calculus is considered beyond the scope of the competition). They're also fiendishly difficult. For example, here's the fifth problem from the 1987 competition in Cuba:

Let n be an integer greater than or equal to 3. Prove that there is a set of n points in the plane such that the distance between any two points is irrational and each set of three points determines a non-degenerate triangle with rational area.

Like many IMO problems, this one might appear impossible at first.

"You read the questions and think, 'I can't do that,'" said Kevin Buzzard of Imperial College London, a member of the IMO Grand Challenge team and a gold medalist at the 1987 IMO. "They're extremely hard questions that are accessible to schoolchildren if they put together all the ideas they know in a brilliant way."
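For the curious, one classical solution (not given in the article, so treat it as a sketch) places all the points on a parabola, and shows the sort of single idea that unlocks the whole problem:

```latex
% Take P_k = (k, k^2) for k = 1, 2, \dots, n.
% Irrational distances: for integers a > b \ge 1,
\[
  |P_a P_b|^2 = (a-b)^2 + (a^2-b^2)^2 = (a-b)^2 \bigl(1 + (a+b)^2\bigr),
\]
% and since (a+b)^2 < 1 + (a+b)^2 < (a+b+1)^2, the factor 1 + (a+b)^2
% is not a perfect square, so every pairwise distance is irrational.
% Rational, nonzero areas: by the shoelace formula, the triangle on
% P_a, P_b, P_c (with a < b < c) has area
\[
  \frac{1}{2}\,\bigl|(b-a)(c-a)(c-b)\bigr| \in \mathbb{Q} \setminus \{0\},
\]
% so every triple of points forms a non-degenerate triangle of rational area.
```

Everything after the choice of the parabola is routine algebra; the choice itself is the "brilliant" step.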

Solving IMO problems often requires a flash of insight, a transcendent first step that today's AI finds hard, if not impossible.

For example, one of the oldest results in math is Euclids proof from 300 BCE that there are infinitely many prime numbers. It begins with the recognition that you can always find a new prime by multiplying all known primes and adding 1. The proof that follows is simple, but coming up with the opening idea was an act of art.

"You cannot get computers to get that idea," said Buzzard. At least, not yet.
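The construction itself, though, is trivial for a computer to execute once a human has supplied the idea. A minimal Python sketch of Euclid's argument (the helper names are illustrative):

```python
from math import isqrt

def smallest_prime_factor(n):
    """Trial division; the smallest factor > 1 of any n > 1 is prime."""
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return d
    return n  # n itself is prime

def new_prime(known_primes):
    """Euclid's construction: no known prime divides product + 1
    (each would leave remainder 1), so any prime factor of it is new."""
    product = 1
    for p in known_primes:
        product *= p
    return smallest_prime_factor(product + 1)

# Note that product + 1 need not itself be prime:
# 2*3*5*7*11*13 + 1 = 30031 = 59 * 509, but 59 is still a new prime.
print(new_prime([2, 3, 5, 7, 11, 13]))  # → 59
```

The hard part was never the arithmetic; it was deciding to look at the product of the known primes plus one in the first place.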

The IMO Grand Challenge team is using a software program called Lean, first launched in 2013 by a Microsoft researcher named Leonardo de Moura. Lean is a proof assistant that checks mathematicians' work and automates some of the tedious parts of writing a proof.
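To give a flavour of what Lean checks, here is a tiny machine-checked proof (written in the newer Lean 4 syntax; purely illustrative, and not part of the IMO Grand Challenge codebase):

```lean
-- Commutativity of addition on the natural numbers, proved by induction.
-- Lean refuses to accept the theorem until every step checks.
theorem my_add_comm (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => simp
  | succ k ih => simp [Nat.add_succ, Nat.succ_add, ih]
```

A human (or a solver) supplies the tactic steps; Lean verifies each one against the formal definitions, which is exactly its role as a proof assistant.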

De Moura and his colleagues want to use Lean as a solver, capable of devising its own proofs of IMO problems. But at the moment, it cannot even understand the concepts involved in some of those problems. If it's going to do better, two things need to change.

First, Lean needs to learn more math. The program draws on a library of mathematics called mathlib, which is growing all the time. Today it contains almost everything a math major might know by the end of their second year of college, but with some elementary gaps that matter for the IMO.

The second, bigger challenge is teaching Lean what to do with the knowledge it has. The IMO Grand Challenge team wants to train Lean to approach a mathematical proof the way other AI systems already successfully approach complicated games like chess and Go: by following a decision tree until it finds the best move.

"If we can get a computer to have that brilliant idea by simply having thousands and thousands of ideas and rejecting all of them until it stumbles on the right one, maybe we can do the IMO Grand Challenge," said Buzzard.
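The game-tree analogy can be sketched with a toy search, where "states" stand in for proof goals and "tactics" for candidate ideas (the arithmetic puzzle here is purely an illustration, not actual theorem proving):

```python
from collections import deque

# "Tactics" transform a state; the "theorem" is proved on reaching the goal.
TACTICS = {
    "double": lambda n: n * 2,
    "increment": lambda n: n + 1,
    "square": lambda n: n * n,
}

def search(start, goal, max_depth=10):
    """Breadth-first search: generate ideas, reject dead ends, keep going."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path  # the sequence of "tactics" that reaches the goal
        if len(path) < max_depth:
            for name, apply_tactic in TACTICS.items():
                nxt = apply_tactic(state)
                if nxt not in seen and nxt <= goal * goal:
                    seen.add(nxt)
                    queue.append((nxt, path + [name]))
    return None  # exhausted every idea within the depth limit

print(search(2, 9))  # → ['increment', 'square']
```

The difficulty the team faces is that, unlike this toy, the "moves" of mathematics are open-ended: the set of tactics itself has to be distilled from human proofs before any search can begin.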

But what are mathematical ideas? That's surprisingly hard to say. At a high level, a lot of what mathematicians do when they approach a new problem is ineffable.

"A key step in many IMO problems is to basically play around with it and look for patterns," said Selsam. "Of course, it's not obvious how you tell a computer to play around with a problem."

At a low level, math proofs are just a series of very concrete, logical steps. The IMO researchers could try to train Lean by showing it the full details of previous IMO proofs. But at that granular level, individual proofs become too specialized to a given problem.

"There's nothing that works for the next problem," said Selsam.

To help with this, the IMO Grand Challenge team needs mathematicians to write detailed formal proofs of previous IMO problems. The team will then take these proofs and try to distill the techniques, or strategies, that make them work. Then theyll train an AI system to search among those strategies for a winning combination that solves IMO problems its never seen before. The trick, Selsam observes, is that winning in math is much harder than winning even the most complicated board games. In those games, at least you know the rules going in.

"Maybe in Go the goal is to find the best move, whereas in math the goal is to find the best game and then to find the best move in that game," he said.

The IMO Grand Challenge is currently a moonshot. "If Lean were participating in this year's competition, we'd probably get a zero," said de Moura.

But the researchers have several benchmarks they're trying to hit before next year's event. They plan to fill in the holes in mathlib so that Lean can understand all of the questions. They also hope to have detailed formal proofs of dozens of previous IMO problems, which will begin the process of providing Lean with a basic playbook to draw from.

At that point a gold medal may still be far out of reach, but at least Lean could line up for the race.

"Right now lots of things are happening, but there's nothing particularly concrete to point to," said Selsam. "[Next] year it becomes a real endeavor."

Read the rest here:
At the International Mathematical Olympiad, Artificial Intelligence Prepares to Go for the Gold - Quanta Magazine

UK Information Commissioner’s Office publishes guidance on artificial intelligence and data protection – Lexology

On 30 July, the UK's Information Commissioner's Office ("ICO") published new guidance on artificial intelligence ("AI") and data protection. The ICO is also running a series of webinars to help organisations and businesses to comply with their obligations under data protection law when using AI systems to process personal data. This legal update summarises the main points from the guidance and the AI Accountability and Governance webinar hosted by the ICO on 22 September 2020.

As AI increasingly becomes a part of our everyday lives, businesses worldwide have to navigate the expanding landscape of legal and regulatory obligations associated with the use of AI systems. The ICO guidance recognises that using AI can have indisputable benefits, but that it can also pose risks to the rights and freedoms of individuals. The guidance offers a framework for how businesses can assess and mitigate these risks from a data protection perspective. It also stresses the value of considering data protection at an early stage of AI development, emphasising that mitigation of AI-associated risks should begin at the design stage of the AI system.

Although the new guidance is not a statutory code of practice, it represents what the ICO deems to be best practice for data protection-compliant AI solutions and sheds light on how the ICO interprets data protection obligations as they apply to AI. However, the ICO confirmed that businesses might be able to use other ways to achieve compliance. The guidance is the result of the ICO consultation on the AI auditing framework, which was open for public comments earlier in 2020. It is designed to complement existing AI resources published by the ICO, including the recent Explaining decisions made with AI guidance produced in collaboration with The Alan Turing Institute (for further information on this guidance, please see our alert here) and the Big Data and AI report.

Who is the guidance aimed at and how is the guidance structured?

The guidance can be useful for (i) those undertaking compliance roles within organisations, such as data protection officers, risk managers, general counsel and senior management, and (ii) technology specialists, namely AI developers, data scientists, software developers / engineers and cybersecurity / IT risk managers.

The guidance is split into four sections:

Although the ICO notes that the guidance is written so that each section is accessible for both compliance and technology specialists, the ICO states that sections 1 and 4 are primarily aimed at those in compliance roles, with sections 2 and 3 containing the more technical material.

1. ACCOUNTABILITY AND GOVERNANCE IMPLICATIONS OF AI

The first section of the guidance focuses on the accountability principle, which is one of seven data processing principles in the European General Data Protection Regulation ("GDPR"). The accountability principle requires organisations to be able to demonstrate compliance with data protection laws. Though the ICO acknowledges the ever-increasing technical complexity of AI systems, the guidance highlights that the onus is on organisations to ensure their governance and risk capabilities are proportionate to the organisation's use of AI systems.

The ICO is clear in its message that organisations should not "underestimate the initial and ongoing level of investment and effort that is required" when it comes to demonstrating accountability for use of AI systems when processing personal data. The guidance indicates that senior management should understand and effectively address the risks posed by AI systems, such as through ensuring that appropriate internal structures exist, from policies to personnel, to enable businesses to effectively identify, manage and mitigate those risks.

With respect to AI-specific implications of accountability, the guidance focuses on three areas:

(a) Businesses processing personal data through AI systems should undertake DPIAs:

The ICO has made it clear that a data protection impact assessment ("DPIA") will be required in the vast majority of cases in which an organisation uses an AI system to process personal data, because AI systems may involve processing which is likely to result in a high risk to individuals' rights and freedoms.

The ICO stresses that DPIAs should not be considered just a box-ticking exercise. A DPIA allows organisations to demonstrate that they are accountable when making decisions with respect to designing or acquiring AI systems. The ICO suggested that organisations might consider having two versions of the DPIA: (i) a detailed internal one which is used by the organisation to help it identify and minimise data protection risk of the project and (ii) an external-facing one which can be shared with individuals whose data is processed by the AI system to help the individuals understand how the AI is making decisions about them.

The DPIA should be considered a living document which gets updated as the AI system evolves (which can be particularly relevant for deep learning AI systems). The guidance notes that where an organisation decides that it does not need to undertake a DPIA with respect to any processing related to an AI system, the organisation will still need to document how it reached such a conclusion.

The guidance provides helpful commentary on a number of considerations which businesses may need to grapple with when conducting a DPIA for AI systems.

The ICO also refers businesses to its general guidance on DPIAs and how to complete them outside the context of AI.

(b) Businesses should consider the data protection roles carried out by different parties in relation to AI systems and put in place appropriate documentation:

The ICO acknowledges that assigning controller / processor roles in respect of AI systems can be inherently complex, given the number of actors involved in the subsequent processing of personal data via the AI system. In this respect, the ICO draws attention to its work on data protection and cloud computing, with revisions to the ICO's Cloud Computing Guidance expected in 2021.

The ICO outlines a number of examples in which organisations take the role of controller / processor with respect to AI systems. The ICO is planning to consult on each of these controller and processor scenarios in the Cloud Computing Guidance review, so organisations can expect further clarity in 2021.

(c) Businesses should put in place documentation for accountability purposes to identify any "trade-offs" when assessing AI-related risks:

The ICO notes that there are a number of "trade-offs" to weigh when assessing different AI-related risks. Some common examples are included in the guidance itself, such as the tension between training an AI system capable of producing statistically accurate output on one hand and the data minimisation concerns raised by the quantity of personal data required to train such a system on the other.

The guidance provides advice to businesses seeking to manage the risk associated with such trade-offs. The ICO recommends putting in place effective and accurate documentation processes for accountability purposes, and also recommends that businesses consider specific issues such as: (i) where an organisation acquires an AI solution, whether the associated trade-offs formed part of the organisation's due diligence processes, (ii) the social acceptability of certain trade-offs, and (iii) whether mathematical approaches can mitigate the privacy risk associated with a trade-off.

2. ENSURING LAWFULNESS, FAIRNESS AND TRANSPARENCY IN AI SYSTEMS

The second section of the guidance focuses on ensuring lawfulness, fairness and transparency in AI systems and covers three main areas:

(a) Businesses should identify the purpose and an appropriate lawful basis for each processing operation in an AI system:

The guidance makes it clear that organisations must identify the purpose and an appropriate lawful basis for each processing operation in an AI system and specify these in their privacy notice.

It adds that it might be more appropriate to choose different lawful bases for the development and deployment phases of an AI system. For example, while performance of a contract might be an appropriate ground for processing personal data to deploy an AI system (e.g. to provide a quote to a customer before entering into a contract), it is unlikely that relying on this basis would be appropriate to develop an AI system.

The guidance makes it clear that legitimate interests provide the most flexible lawful basis for processing. However, if businesses rely on it, they are taking on an additional responsibility for considering and protecting people's rights and interests and must be able to demonstrate the necessity and proportionality of the processing through a legitimate interests assessment.

The guidance mentions that consent may be an appropriate lawful basis but individuals must have a genuine choice and be able to withdraw the consent as easily as they give it.

It might also be possible to rely on legal obligation as a lawful basis for auditing and testing the AI system if businesses are able to identify the specific legal obligation they are subject to (e.g. under the Equality Act 2010). However, it is unlikely to be appropriate for other uses of that data.

If the AI system processes special category or criminal convictions data, then the organisation will also need to ensure compliance with additional requirements in the GDPR and the Data Protection Act 2018.

(b) Businesses should assess the effectiveness of the AI system in making statistically accurate predictions about individuals:

The guidance notes that organisations should assess the merits of using a particular AI system in light of its effectiveness in making statistically accurate, and therefore valuable, predictions. In particular, organisations should monitor the system's precision and sensitivity. Organisations should also prioritise avoiding certain kinds of errors based on the severity and nature of the particular risk.
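As an illustration of the statistical-accuracy monitoring described above, the sketch below computes precision and sensitivity (recall) from a set of predictions. The `precision_and_sensitivity` helper and the labels are hypothetical illustration data of our own, not taken from the ICO guidance.

```python
def precision_and_sensitivity(actual, predicted):
    """Precision: how many flagged cases were correct.
    Sensitivity (recall): how many true cases were caught."""
    tp = sum(1 for a, p in zip(actual, predicted) if a and p)
    fp = sum(1 for a, p in zip(actual, predicted) if not a and p)
    fn = sum(1 for a, p in zip(actual, predicted) if a and not p)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, sensitivity

# Hypothetical outcomes: 1 = positive case (e.g. "flag for review")
actual    = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 0, 1, 1, 0, 0, 1, 0]
p, s = precision_and_sensitivity(actual, predicted)
print(f"precision={p:.2f} sensitivity={s:.2f}")  # precision=0.75 sensitivity=0.75
```

Which of the two metrics to prioritise depends on which kind of error is more severe for the individuals affected, as the guidance notes.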

Businesses should agree regular updates (i.e. retraining) of the AI system and reviews of its statistical accuracy to guard against changing data, for example, where the data originally used to train the AI system is no longer representative of its current users.

(c) Businesses should address the risks of bias and discrimination in using an AI system:

AI systems may learn from data which is imbalanced (e.g. because the proportion of different genders in the training data differs from that in the population using the AI system) and / or which reflects past discrimination (e.g. if, in the past, male candidates were invited to job interviews more often), which could lead to outputs that have a discriminatory effect on individuals. The guidance makes it clear that obligations relating to discrimination under data protection law are separate from, and additional to, organisations' obligations under the Equality Act 2010.

The guidance mentions various approaches developed by computer scientists studying algorithmic fairness which aim to mitigate AI-driven discrimination. For example, in cases of imbalanced training data, it may be possible to balance it out by adding or removing data about under/over-represented subsets of the population. In cases where the training data reflects past discrimination, the data may be manually modified, the learning process could be adapted to reflect this, or the model can be modified after training. However, the guidance warns that in some cases, simply retraining the AI model with a more diverse training set may not be sufficient to mitigate its discriminatory impact and additional steps might need to be taken.
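To illustrate one of the rebalancing approaches mentioned above (adding data about under-represented subsets of the population), the sketch below oversamples the smaller groups in a training set until every group matches the largest one. The `oversample_minority` helper, the "gender" field and the records are hypothetical illustration data, not part of the guidance.

```python
import random

def oversample_minority(records, key):
    """Duplicate randomly chosen members of under-represented groups
    until every group is as large as the largest one."""
    groups = {}
    for r in records:
        groups.setdefault(r[key], []).append(r)
    target = max(len(g) for g in groups.values())
    rng = random.Random(0)  # fixed seed so the sketch is reproducible
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# Imbalanced hypothetical training set: 8 records vs 2
train = [{"gender": "m"}] * 8 + [{"gender": "f"}] * 2
balanced = oversample_minority(train, "gender")
counts = {g: sum(1 for r in balanced if r["gender"] == g) for g in ("m", "f")}
print(counts)  # {'m': 8, 'f': 8}
```

As the guidance warns, rebalancing alone may not remove discriminatory patterns already encoded in the data, so it should be combined with ongoing testing of the model's outputs.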

The guidance recommends that businesses put in place policies and good practices to address risks related to bias and discrimination and undertake robust testing of the AI system on an ongoing basis against selected key performance metrics.

3. SECURITY ASSESSMENT AND DATA MINIMISATION IN AI SYSTEMS

The third section of the guidance is aimed at technical specialists and covers two main issues:

(a) Businesses should assess the security risks AI introduces and take steps to manage the risks of privacy attacks on AI systems:

AI systems introduce new kinds of complexity not found in more traditional IT systems. AI systems might also rely heavily on third party code and are often integrated with several other existing IT components. This complexity might make it more difficult to identify and manage security risks. As a result, businesses should ensure that they actively monitor and take into account the state-of-the-art security practices when using personal data in an AI context. Businesses should use these practices to assess AI systems for security risks and ensure that their staff have appropriate skills and knowledge to address these security risks. Businesses should also ensure that their procurement process includes sufficient information sharing between the parties to perform these assessments.

The guidance warns against two kinds of privacy attacks which allow an attacker to infer personal data of the individuals whose data was used to train the AI system: "model inversion" attacks, in which an attacker reconstructs information about the training data from the model itself, and "membership inference" attacks, in which an attacker determines whether a particular individual's data was included in the training set.

The guidance then suggests some practical technical steps that businesses can take to manage the risks of such privacy attacks.

The guidance also warns against novel risks, such as adversarial examples, which allow attackers to feed modified inputs into an AI model so that they are misclassified by the AI system. The ICO notes that in some cases this could pose a risk to the rights and freedoms of individuals (e.g. if a facial recognition system is tricked into misclassifying an individual as someone else). This would raise issues not only under data protection laws but possibly also under the Network and Information Systems (NIS) Directive.
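To make the adversarial-example risk concrete, the toy sketch below shows a small, targeted perturbation flipping the output of a linear classifier. The model, weights and inputs are entirely hypothetical; real attacks on deep models work analogously but against learned gradients.

```python
def classify(weights, x, bias=0.0):
    """Toy linear classifier: returns 1 if the weighted sum is positive."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

weights = [0.8, -0.5, 0.3]
x = [1.0, 1.2, 0.4]               # legitimately classified as class 1
assert classify(weights, x) == 1

# Adversarial step: nudge each feature against the sign of its weight,
# pushing the score towards the decision boundary and past it.
eps = 0.9
x_adv = [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]
print(classify(weights, x_adv))   # the same model now outputs 0
```

The perturbed input is still close to the original, which is what makes such attacks hard to detect without dedicated monitoring.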

(b) Business should take steps to minimise personal data when using AI systems and adopt appropriate privacy-enhancing methods:

AI systems generally require large amounts of data, but the GDPR data minimisation principle requires businesses to identify the minimum amount of personal data they need to fulfil their purposes. This can create some tension, but the guidance suggests steps businesses can take to ensure that the personal data used by the AI system is "adequate, relevant and limited".

The guidance recommends that individuals accountable for the risk management and compliance of AI systems are familiar with techniques such as: perturbation (i.e. adding 'noise' to data), using synthetic data, adopting federated learning, using less "human readable" formats, making inferences locally rather than on a central server, using privacy-preserving query approaches, and considering anonymisation and pseudonymisation of the personal data. The guidance goes into some detail for each of these techniques and explains when they might be appropriate.
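As an illustration of the perturbation technique listed above, the sketch below adds calibrated Laplace noise to a released aggregate, in the style of differential privacy. This is one common reading of "perturbation", not the ICO's prescribed method; the `noisy_mean` helper, the epsilon value and the salary figures are hypothetical.

```python
import math
import random

def noisy_mean(values, lower, upper, epsilon, rng):
    """Release the mean of `values` with Laplace noise calibrated to the
    sensitivity of the mean over values clipped into [lower, upper]."""
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / len(clipped)
    sensitivity = (upper - lower) / len(clipped)  # max effect of one record
    # Inverse-CDF sample from Laplace(0, sensitivity / epsilon)
    u = rng.random() - 0.5
    noise = -(sensitivity / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_mean + noise

rng = random.Random(42)
salaries = [30_000, 45_000, 52_000, 38_000, 61_000]  # hypothetical records
print(round(noisy_mean(salaries, 0, 100_000, epsilon=1.0, rng=rng)))
```

Smaller epsilon values add more noise and give stronger privacy at the cost of accuracy, which is exactly the kind of trade-off the guidance asks businesses to document.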

Importantly, ensuring security and data minimisation in AI systems is not a static process. The ICO suggests that compliance with data protection obligations requires ongoing monitoring of trends and developments in this area and being familiar with and adopting the latest security and privacy-enhancing techniques for AI systems. As a result, any contractual documentation that businesses put in place with service providers should take these privacy concerns into account.

4. INDIVIDUAL RIGHTS IN AI SYSTEMS

The final section of the guidance is aimed at compliance specialists and covers two main areas:

(a) Businesses must comply with individual rights requests in relation to personal data in all stages of the AI lifecycle, including training data, deployment data and data in the model itself:

Under the GDPR, individuals have a number of rights relating to their personal data. The guidance states that these rights apply wherever personal data is used at any of the various stages of the AI lifecycle from training the AI model to deployment.

The guidance is clear that even if personal data is converted into a form that makes it much harder to link to a particular individual, this is not necessarily sufficient to take the data out of scope of data protection law, because the bar for anonymisation of personal data under the GDPR is high.

If it is possible for an organisation to identify an individual in the data, directly or indirectly (e.g. by combining it with other data held by the organisation or other data provided by the individual), the organisation must respond to requests from individuals to exercise their rights under the GDPR (assuming that the organisation has taken reasonable measures to verify their identity and no other exceptions apply). The guidance recognises that the use of personal data with AI may sometimes make it harder to fulfil individual rights, but warns that requests should not be regarded as manifestly unfounded or excessive just because it may be harder to fulfil GDPR obligations in the context of AI. The guidance also provides further detail about how businesses should comply with specific individual rights requests in the context of AI.

(b) Businesses should consider the requirements necessary to support a meaningful human review of any decisions made by, or with the support of, AI using personal data:

There are specific provisions in the GDPR (particularly Article 22 GDPR) covering individuals' rights where processing involves solely automated individual decision-making, including profiling, with legal or similarly significant effects. Businesses that use such decision-making must tell individuals whose data they are processing that they are doing so for automated decision-making and give them "meaningful information about the logic involved, as well as the significance and the envisaged consequences" of the processing. The ICO and the European Data Protection Board have both previously published detailed guidance on the obligations concerning automated individual decision-making which can be of further assistance.

The GDPR requires businesses to implement suitable safeguards, such as the right to obtain human intervention, to express one's point of view, to contest the decision or to obtain an explanation of the logic behind it. The guidance mentions two particular reasons why AI decisions might be overturned: (i) if the individual is an outlier whose circumstances are substantially different from those considered in the training data, and (ii) if the assumptions in the AI model can be challenged, e.g. because of specific design choices. Businesses should therefore consider the requirements necessary to support a meaningful human review of any solely automated decision-making process (including interpretability requirements, training of staff and giving staff appropriate authority). The guidance from the ICO and The Alan Turing Institute on Explaining decisions made with AI considers this issue in further detail (for more information on that guidance, please see our alert here).

In contrast, decisions that are not fully automated but for which the AI system provides support to a human decision-maker do not fall within the scope of Article 22 GDPR. However, the guidance is clear that a decision does not fall outside of the scope of Article 22 just because a human has "rubber-stamped" it and the human decision-maker must have a meaningful role in the decision-making process to take the decision-support tool outside the scope of Article 22.

The guidance also warns that meaningful human oversight requires businesses to address the risks of automation bias by human reviewers (i.e. relying on the output generated by the decision-support system instead of exercising their own judgment) and the risks of a lack of interpretability (i.e. outputs from AI systems, for example deep-learning models, that are difficult for a human reviewer to interpret or understand). The guidance provides some suggestions for how such risks might be addressed, including by considering them when designing or procuring AI systems, by training staff and by effectively monitoring both the AI system and the human reviewers.

Conclusion

This guidance from the ICO is another welcome step for the rising number of businesses that use AI systems in their day-to-day operations. It also provides more clarity on how businesses should interpret their data protection obligations as they apply to AI. This is especially important because this area of compliance is attracting the focus of different regulators.

The ICO mentions "monitoring intrusive and disruptive technology" as one of its three focus areas and AI as one of its priorities for its regulatory approach during the COVID-19 pandemic and beyond. As a result, the ICO is also running a free webinar series in autumn 2020 on various topics covered in the guidance to help businesses achieve data protection compliance when using AI systems. The ICO stated on the AI Accountability and Governance webinar on 22 September 2020 that it is currently developing its AI auditing capabilities so it can use its powers to conduct audits of AI systems in the future. However, the ICO staff on the webinar confirmed the ICO would take into account the effect of the COVID-19 pandemic before conducting any AI audits.

Other regulators have also taken an interest in the implications of AI. For example, the Financial Conduct Authority is working with The Alan Turing Institute on AI transparency in financial markets. Businesses should therefore follow the guidance from their respective regulators and put in place a strategy for addressing the data protection (and other) risks associated with using AI systems.

Read the original here:
UK Information Commissioner's Office publishes guidance on artificial intelligence and data protection - Lexology

Censorship and the Dangers of Being Silenced – PRNewswire

With her book Outrages, described as "a long-overdue literary investigation into censorship and the life of a tormented trailblazer" by Oprah Magazine, Wolf chronicles the struggles and eventual triumph of John Addington Symonds, a Victorian-era poet, biographer, and critic who penned what became a foundational text on our modern understanding of human sexual orientation and LGBTQ+ legal rights.

Symonds, as Wolf highlights, was writing at a time when anything interpreted as homoerotic could be used as evidence in trials leading to harsh sentences under British law. Wolf sees a connective thread from those draconian laws of Victorian England to this moment, when marginalized people and groups are being targeted, silenced, and often jailed.

"Naomi Wolf's Outrages is a vitally important book to discuss right now, not just because of its literary scholarship, which is superb, but because it speaks so clearly to the present societal moment. It's a moment that is incredibly dangerous, a potential turning point," according to Wolf's publisher, Margo Baldwin, of Chelsea Green Publishing.

Naomi Wolf's most recent books include the New York Times bestsellers Vagina, The End of America, and Give Me Liberty, in addition to the landmark bestseller The Beauty Myth. She lives in the Hudson River Valley.

This free event takes place Thursday, November 5th at 6:30pm through Zoom where registrants will have the opportunity to engage in conversation with the author.

SOURCE Chelsea Green Publishing

https://drnaomiwolf.com

The rest is here:

Censorship and the Dangers of Being Silenced - PRNewswire