Qualtrics Announces Delighted AI, a Machine Learning Engine to Automate Every Step of the Customer Feedback Process – PRNewswire

SALT LAKE CITY, SEATTLE, and PALO ALTO, Calif., Sept. 23, 2020 /PRNewswire/ -- Qualtrics, the leader in customer experience and creator of the experience management category, today announced Delighted AI, an artificial intelligence and machine learning engine built directly into Delighted's customer experience platform. Delighted, a Qualtrics company, developed its AI technology to intelligently automate every aspect of the customer feedback process, from scheduling to analysis and reporting, so that companies can focus on closing feedback loops faster than ever. Delighted AI is complementary to Qualtrics' existing Text iQ enterprise technology for CustomerXM, optimized for Delighted customers.

Today, the most successful customer experience programs are no longer measurement- or metrics-based. Over the past few months, Net Promoter Scores have significantly declined in response to COVID-19, exposing customer experience gaps that companies have failed to identify or address. The companies that have emerged as customer experience leaders in the crisis have continuously listened to their customers and, more importantly, responded quickly to their preferences and expectations.

Delighted AI was created based on semantics and themes in the millions of customer feedback responses that Delighted and its customers have analyzed over several years to drive customer experience success.

"Delighted AI helped the right teams at our company understand customer feedback with more precision than ever before, which has been critical in the middle of a pandemic where we need to adapt and respond even more quickly to our customers' needs and expectations," said Roxana Turcanu, Growth Director for Adore Me, a New York-based e-commerce company. "We just recently launched a new try-at-home brand called Outlines, and we were able to do so with the help of Delighted AI by capturing and applying feedback early - this enabled us to pivot, at a rate we've never been able to do, towards what our customers actually wanted from our brand."

Benefits of Delighted AI include:

"Customer experience programs are rapidly evolving as companies have realized that relying on traditional metrics alone does not determine customer success. Instead, the customer experience leaders are winning based on gathering in-the-moment feedback that is immediately actionable and building a culture of continuous listening," said Caleb Elston, co-founder of Delighted. "We created Delighted AI to empower companies to spend less time configuring, implementing, and analyzing so they can focus on acting on insights faster than any other technology or human could before."

Acquired by Qualtrics in 2018, Delighted offers one of the fastest and easiest ways to take action on customer feedback, enabling innovative brands and organizations of any size to quickly implement a customer experience program across every channel.

Learn more about Delighted AI here.

About Qualtrics

Qualtrics, the leader in customer experience and creator of the Experience Management (XM) category, is changing the way organizations manage and improve the four core experiences of business: customer, employee, product, and brand. Over 11,000 organizations around the world are using Qualtrics to listen, understand, and take action on experience data (X-data): the beliefs, emotions, and intentions that tell you why things are happening, and what to do about it. The Qualtrics XM Platform is a system of action that helps businesses attract customers who stay longer and buy more, engage employees who build a positive culture, develop breakthrough products people love, and build a brand people are passionate about. To learn more, please visit qualtrics.com.

Contact: [emailprotected]

SOURCE Qualtrics

http://www.qualtrics.com


Impact of Artificial Intelligence on the current education system – Latest Digital Transformation Trends | Cloud News – Wire19

Education can be defined as a process in which teachers give, and students receive, systematic instruction. Learning can take place in either a formal or informal setting; more commonly, students receive education in a formal setting such as a high school, college, or university. Education is often considered a significant determinant of an individual's future success. Understandably, there are various efforts to improve the current education systems in countries worldwide.

Among the many methods employed by various countries to improve the education sector is the use of AI (artificial intelligence). AI systems are defined by the use of computers to accomplish tasks that previously required human intellect. AI utilizes algorithms that collect, classify, organize, and analyze information in order to draw conclusions from it, a process known as machine learning. As such, machine learning has the potential to bring about several benefits for various industries, including education.
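To make that definition concrete, here is a minimal sketch of machine learning in practice: an algorithm is fit to labeled examples and then makes predictions on data it has not seen. The dataset, library (scikit-learn), and model choice are illustrative assumptions, not anything named in this article.

```python
# Minimal machine learning sketch: learn to classify from labeled examples.
# Illustrative only; the dataset and model are arbitrary choices.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)            # collect: labeled examples
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)      # choose a simple classifier
model.fit(X_train, y_train)                    # analyze: learn from the data

print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```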

Traditional education systems are fast changing to adapt to the technological advancements of today's world. This is especially true with the widespread access to various educational sources of information online. The implementation of educational AI systems has the potential to help students develop their skills and acquire more knowledge in multiple subjects. Therefore, as artificial intelligence continues to evolve, it is our hope that it can help fill the gaps in the education system.

The implementation of AI can improve the efficiency and personalization of learning tasks, as well as streamline administrative tasks. These are benefits enjoyed by students and teachers alike. The implementation of artificial intelligence also helps students get more time with their teachers, whose unique human qualities are required to supplement AI where it would struggle.

AI has altered the way students learn, as they no longer need to physically attend classes when they have access to learning material via the internet. As previously mentioned, AI allows educators to spend more time with students by taking over some administrative tasks. But AI has done much more for education. Below are a few more effects of artificial intelligence on the education industry:

Education should be accessible to everyone regardless of geographical location. Learning through artificial intelligence has long been considered a deciding factor in eliminating geographical boundaries by facilitating flexible learning environments globally.

The availability of smart content is a highly debated topic: AI systems can be utilized to offer quality content similar to what students buy from some of the best research paper writers online.

AI learning environments can adapt to a student's level of skill, mastery of coursework, and so on, thus identifying the challenges they face. Accordingly, they provide relevant materials and activities to boost the student's knowledge base in a specific subject.

You have probably noticed that most streaming services offer a list of shows you are likely to enjoy, an excellent example of AI personalization around your favorite genres. Various other such systems can be used in education to cater to the different needs of various students.

Teachers often spend time on administrative duties such as marking exams, reading students' assignments, and planning the timetable, all of which can be completed by AI systems such as automated assignment-processing and grading systems. Thus, teachers get to spend more time with their students.

AI usage, at the very least, reduces the chances of human error delaying specific processes in the learning environment. An excellent example of AI used in schools is the collection of data from various sources and the creation of accurate forecasts to plan for the future effectively.

AI also offers opportunities to international students who speak different languages or who have visual or hearing impairments; for instance, an artificial intelligence system can generate captions in real time during a presentation. As you can see, the education sector has a lot to gain from the implementation of AI into various systems.

AI systems bring about a world of opportunities to share information globally. Today there are quite a few artificial intelligence systems that help provide a conducive learning environment for all students. The use of AI in learning is quite promising and should be exploited for the benefits it has to offer.



At the International Mathematical Olympiad, Artificial Intelligence Prepares to Go for the Gold – Quanta Magazine

The 61st International Mathematical Olympiad, or IMO, begins today. It may go down in history for at least two reasons: Due to the COVID-19 pandemic, it's the first time the event has been held remotely, and it may also be the last time that artificial intelligence doesn't compete.

Indeed, researchers view the IMO as the ideal proving ground for machines designed to think like humans. If an AI system can excel here, it will have matched an important dimension of human cognition.

"The IMO, to me, represents the hardest class of problems that smart people can be taught to solve somewhat reliably," said Daniel Selsam of Microsoft Research. Selsam is a founder of the IMO Grand Challenge, whose goal is to train an AI system to win a gold medal at the world's premier math competition.

Since 1959, the IMO has brought together the best pre-college math students in the world. On each of the competition's two days, participants have four and a half hours to answer three problems of increasing difficulty. They earn up to seven points per problem, and top scorers take home medals, just like at the Olympic Games. The most decorated IMO participants become legends in the mathematics community. Some have gone on to become superlative research mathematicians.

IMO problems are simple, but only in the sense that they don't require any advanced math; even calculus is considered beyond the scope of the competition. They're also fiendishly difficult. For example, here's the fifth problem from the 1987 competition in Cuba:

Let n be an integer greater than or equal to 3. Prove that there is a set of n points in the plane such that the distance between any two points is irrational and each set of three points determines a non-degenerate triangle with rational area.

Like many IMO problems, this one might appear impossible at first.

"You read the questions and think, 'I can't do that,'" said Kevin Buzzard of Imperial College London, a member of the IMO Grand Challenge team and a gold medalist at the 1987 IMO. "They're extremely hard questions that are accessible to schoolchildren if they put together all the ideas they know in a brilliant way."

Solving IMO problems often requires a flash of insight, a transcendent first step that today's AI finds hard if not impossible.

For example, one of the oldest results in math is Euclids proof from 300 BCE that there are infinitely many prime numbers. It begins with the recognition that you can always find a new prime by multiplying all known primes and adding 1. The proof that follows is simple, but coming up with the opening idea was an act of art.

"You cannot get computers to get that idea," said Buzzard. At least, not yet.

The IMO Grand Challenge team is using a software program called Lean, first launched in 2013 by a Microsoft researcher named Leonardo de Moura. Lean is a proof assistant that checks mathematicians work and automates some of the tedious parts of writing a proof.
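For a taste of what working in Lean looks like, here is Euclid's infinitude-of-primes result (discussed above) stated as a theorem and discharged by a library lemma. This is a sketch in Lean 3 syntax; the lemma name `nat.exists_infinite_primes` comes from mathlib and may differ across versions.

```lean
-- Euclid's theorem, checked by Lean: for every n there is a prime ≥ n.
-- The proof appeals to the mathlib lemma that packages the classical
-- "multiply the known primes and add 1" argument.
import data.nat.prime

theorem infinitely_many_primes (n : ℕ) : ∃ p, n ≤ p ∧ nat.prime p :=
nat.exists_infinite_primes n
```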

De Moura and his colleagues want to use Lean as a solver, capable of devising its own proofs of IMO problems. But at the moment, it cannot even understand the concepts involved in some of those problems. If it's going to do better, two things need to change.

First, Lean needs to learn more math. The program draws on a library of mathematics called mathlib, which is growing all the time. Today it contains almost everything a math major might know by the end of their second year of college, but with some elementary gaps that matter for the IMO.

The second, bigger challenge is teaching Lean what to do with the knowledge it has. The IMO Grand Challenge team wants to train Lean to approach a mathematical proof the way other AI systems already successfully approach complicated games like chess and Go: by following a decision tree until it finds the best move.

"If we can get a computer to have that brilliant idea by simply having thousands and thousands of ideas and rejecting all of them until it stumbles on the right one, maybe we can do the IMO Grand Challenge," said Buzzard.
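In spirit, that search resembles the toy sketch below: propose many candidate steps from the current proof state, score them, and always expand the most promising state first. Everything here, the state representation, the step generator, and the scoring function, is a hypothetical stand-in, not how Lean or the IMO Grand Challenge actually works.

```python
# Toy best-first search over proof states: generate many "ideas" (steps),
# reject most of them via the score, and expand the best-looking state.
import heapq
from typing import Callable, Iterable, List, Optional, Tuple

def best_first_search(
    start: str,
    next_steps: Callable[[str], Iterable[str]],  # candidate proof steps
    score: Callable[[str], float],               # lower = more promising
    is_proved: Callable[[str], bool],
    max_expansions: int = 10_000,
) -> Optional[List[str]]:
    # Priority queue of (score, state, steps taken so far).
    frontier: List[Tuple[float, str, List[str]]] = [(score(start), start, [])]
    seen = {start}
    for _ in range(max_expansions):
        if not frontier:
            break
        _, state, path = heapq.heappop(frontier)
        if is_proved(state):
            return path              # the step sequence that closed the proof
        for nxt in next_steps(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (score(nxt), nxt, path + [nxt]))
    return None                      # budget exhausted without a proof
```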

But what are mathematical ideas? That's surprisingly hard to say. At a high level, a lot of what mathematicians do when they approach a new problem is ineffable.

"A key step in many IMO problems is to basically play around with it and look for patterns," said Selsam. Of course, it's not obvious how you tell a computer to play around with a problem.

At a low level, math proofs are just a series of very concrete, logical steps. The IMO researchers could try to train Lean by showing it the full details of previous IMO proofs. But at that granular level, individual proofs become too specialized to a given problem.

"There's nothing that works for the next problem," said Selsam.

To help with this, the IMO Grand Challenge team needs mathematicians to write detailed formal proofs of previous IMO problems. The team will then take these proofs and try to distill the techniques, or strategies, that make them work. Then they'll train an AI system to search among those strategies for a winning combination that solves IMO problems it's never seen before. The trick, Selsam observes, is that winning in math is much harder than winning even the most complicated board games. In those games, at least you know the rules going in.

"Maybe in Go the goal is to find the best move, whereas in math the goal is to find the best game and then to find the best move in that game," he said.

The IMO Grand Challenge is currently a moonshot. "If Lean were participating in this year's competition, we'd probably get a zero," said de Moura.

But the researchers have several benchmarks they're trying to hit before next year's event. They plan to fill in the holes in mathlib so that Lean can understand all of the questions. They also hope to have detailed formal proofs of dozens of previous IMO problems, which will begin the process of providing Lean with a basic playbook to draw from.

At that point a gold medal may still be far out of reach, but at least Lean could line up for the race.

"Right now lots of things are happening, but there's nothing particularly concrete to point to," said Selsam. "[Next] year it becomes a real endeavor."


UK Information Commissioner’s Office publishes guidance on artificial intelligence and data protection – Lexology

On 30 July, the UK's Information Commissioner's Office ("ICO") published new guidance on artificial intelligence ("AI") and data protection. The ICO is also running a series of webinars to help organisations and businesses to comply with their obligations under data protection law when using AI systems to process personal data. This legal update summarises the main points from the guidance and the AI Accountability and Governance webinar hosted by the ICO on 22 September 2020.

As AI increasingly becomes a part of our everyday lives, businesses worldwide have to navigate the expanding landscape of legal and regulatory obligations associated with the use of AI systems. The ICO guidance recognises that using AI can have indisputable benefits, but that it can also pose risks to the rights and freedoms of individuals. The guidance offers a framework for how businesses can assess and mitigate these risks from a data protection perspective. It also stresses the value of considering data protection at an early stage of AI development, emphasising that mitigation of AI-associated risks should come at the design stage of the AI system.

Although the new guidance is not a statutory code of practice, it represents what the ICO deems to be best practice for data protection-compliant AI solutions and sheds light on how the ICO interprets data protection obligations as they apply to AI. However, the ICO confirmed that businesses might be able to use other ways to achieve compliance. The guidance is the result of the ICO consultation on the AI auditing framework which was open for public comments earlier in 2020. It is designed to complement existing AI resources published by the ICO, including the recent Explaining decisions made with AI guidance produced in collaboration with The Alan Turing Institute (for further information on this guidance, please see our alert here) and the Big Data and AI report.

Who is the guidance aimed at and how is the guidance structured?

The guidance can be useful for (i) those undertaking compliance roles within organisations, such as data protection officers, risk managers, general counsel and senior management, and (ii) technology specialists, namely AI developers, data scientists, software developers / engineers and cybersecurity / IT risk managers.

The guidance is split into four sections: (1) accountability and governance implications of AI; (2) ensuring lawfulness, fairness and transparency in AI systems; (3) security assessment and data minimisation in AI systems; and (4) individual rights in AI systems.

Although the ICO notes that the guidance is written so that each section is accessible for both compliance and technology specialists, the ICO states that sections 1 and 4 are primarily aimed at those in compliance roles, with sections 2 and 3 containing the more technical material.

1. ACCOUNTABILITY AND GOVERNANCE IMPLICATIONS OF AI

The first section of the guidance focuses on the accountability principle, which is one of seven data processing principles in the European General Data Protection Regulation ("GDPR"). The accountability principle requires organisations to be able to demonstrate compliance with data protection laws. Though the ICO acknowledges the ever-increasing technical complexity of AI systems, the guidance highlights that the onus is on organisations to ensure their governance and risk capabilities are proportionate to the organisation's use of AI systems.

The ICO is clear in its message that organisations should not "underestimate the initial and ongoing level of investment and effort that is required" when it comes to demonstrating accountability for the use of AI systems when processing personal data. The guidance indicates that senior management should understand and effectively address the risks posed by AI systems, such as by ensuring that appropriate internal structures, from policies to personnel, exist to enable businesses to effectively identify, manage and mitigate those risks.

With respect to AI-specific implications of accountability, the guidance focuses on three areas:

(a) Businesses processing personal data through AI systems should undertake DPIAs:

The ICO has made it clear that a data protection impact assessment ("DPIA") will be required in the vast majority of cases in which an organisation uses an AI system to process personal data, because AI systems may involve processing which is likely to result in a high risk to individuals' rights and freedoms.

The ICO stresses that DPIAs should not be considered just a box-ticking exercise. A DPIA allows organisations to demonstrate that they are accountable when making decisions with respect to designing or acquiring AI systems. The ICO suggested that organisations might consider having two versions of the DPIA: (i) a detailed internal one which is used by the organisation to help it identify and minimise data protection risk of the project and (ii) an external-facing one which can be shared with individuals whose data is processed by the AI system to help the individuals understand how the AI is making decisions about them.

The DPIA should be considered a living document which gets updated as the AI system evolves (which can be particularly relevant for deep learning AI systems). The guidance notes that where an organisation decides that it does not need to undertake a DPIA with respect to any processing related to an AI system, the organisation will still need to document how it reached such a conclusion.

The guidance provides helpful commentary on a number of considerations which businesses may need to grapple with when conducting a DPIA for AI systems.

The ICO also refers businesses to its general guidance on DPIAs and how to complete them outside the context of AI.

(b) Businesses should consider the data protection roles carried out by different parties in relation to AI systems and put in place appropriate documentation:

The ICO acknowledges that assigning controller / processor roles in respect to AI systems can be inherently complex, given the number of actors involved in the subsequent processing of personal data via the AI system. In this respect, the ICO draws attention to its work on data protection and cloud computing, with revisions to the ICO's Cloud Computing Guidance expected in 2021.

The ICO outlines a number of examples in which organisations take the role of controller / processor with respect to AI systems. The ICO is planning to consult on each of these controller and processor scenarios in the Cloud Computing Guidance review, so organisations can expect further clarity in 2021.

(c) Businesses should put in place documentation for accountability purposes to identify any "trade-offs" when assessing AI-related risks:

The ICO notes that there are a number of "trade-offs" to weigh when assessing different AI-related risks. Some common examples of such trade-offs are included in the guidance itself, such as where an organisation wishes to train an AI system capable of producing accurate statistical output on one hand, versus the data minimisation concerns associated with the quantity of personal data required to train such an AI system on the other.

The guidance provides advice to businesses seeking to manage the risk associated with such trade-offs. The ICO recommends putting in place effective and accurate documentation processes for accountability purposes, but also advises businesses to consider specific issues such as: (i) where an organisation acquires an AI solution, whether the associated trade-offs formed part of the organisation's due diligence processes; (ii) social acceptability concerns associated with certain trade-offs; and (iii) whether mathematical approaches can mitigate trade-off-associated privacy risk.

2. ENSURING LAWFULNESS, FAIRNESS AND TRANSPARENCY IN AI SYSTEMS

The second section of the guidance focuses on ensuring lawfulness, fairness and transparency in AI systems and covers three main areas:

(a) Businesses should identify the purpose and an appropriate lawful basis for each processing operation in an AI system:

The guidance makes it clear that organisations must identify the purpose and an appropriate lawful basis for each processing operation in an AI system and specify these in their privacy notice.

It adds that it might be more appropriate to choose different lawful bases for the development and deployment phases of an AI system. For example, while performance of a contract might be an appropriate ground for processing personal data to deploy an AI system (e.g. to provide a quote to a customer before entering into a contract), it is unlikely that relying on this basis would be appropriate to develop an AI system.

The guidance makes it clear that legitimate interests provide the most flexible lawful basis for processing. However, if businesses rely on it, they are taking on an additional responsibility for considering and protecting people's rights and interests and must be able to demonstrate the necessity and proportionality of the processing through a legitimate interests assessment.

The guidance mentions that consent may be an appropriate lawful basis but individuals must have a genuine choice and be able to withdraw the consent as easily as they give it.

It might also be possible to rely on legal obligation as a lawful basis for auditing and testing the AI system if businesses are able to identify the specific legal obligation they are subject to (e.g. under the Equality Act 2010). However, it is unlikely to be appropriate for other uses of that data.

If the AI system processes special category or criminal convictions data, then the organisation will also need to ensure compliance with additional requirements in the GDPR and the Data Protection Act 2018.

(b) Businesses should assess the effectiveness of the AI system in making statistically accurate predictions about individuals:

The guidance notes that organisations should assess the merits of using a particular AI system in light of its effectiveness in making statistically accurate, and therefore valuable, predictions. In particular, organisations should monitor the system's precision and sensitivity. Organisations should also prioritise avoiding certain kinds of errors based on the severity and nature of the particular risk.

Businesses should agree regular updates (retraining of the AI system) and reviews of statistical accuracy to guard against changing data, for example, if the data originally used to train the AI system is no longer reflective of the system's current users.
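As a minimal sketch of that kind of monitoring, precision and sensitivity (recall) can be recomputed on fresh labeled outcomes after deployment. The scikit-learn tooling and the toy labels below are assumed choices for illustration, not anything the ICO prescribes.

```python
# Track two of the metrics the guidance names: precision and sensitivity.
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # outcomes observed after deployment (toy)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # the AI system's predictions (toy)

precision = precision_score(y_true, y_pred)    # of flagged cases, share correct
sensitivity = recall_score(y_true, y_pred)     # of true cases, share caught
print(f"precision={precision:.2f} sensitivity={sensitivity:.2f}")
```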

(c) Businesses should address the risks of bias and discrimination in using an AI system:

AI systems may learn from data which may be imbalanced (e.g. because the proportion of different genders in the training data differs from that in the population using the AI system) and / or reflect past discrimination (e.g. if, in the past, male candidates were invited more often to job interviews), which could lead to outputs that have a discriminatory effect on individuals. The guidance makes it clear that obligations relating to discrimination under data protection law are separate from and additional to organisations' obligations under the Equality Act 2010.

The guidance mentions various approaches developed by computer scientists studying algorithmic fairness which aim to mitigate AI-driven discrimination. For example, in cases of imbalanced training data, it may be possible to balance it out by adding or removing data about under/over-represented subsets of the population. In cases where the training data reflects past discrimination, the data may be manually modified, the learning process could be adapted to reflect this, or the model can be modified after training. However, the guidance warns that in some cases, simply retraining the AI model with a more diverse training set may not be sufficient to mitigate its discriminatory impact and additional steps might need to be taken.
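The first mitigation mentioned above, adding data about under-represented subsets, might look like the following oversampling sketch. The pandas-based approach and the column name are assumptions for illustration, and, as the guidance itself warns, rebalancing alone may not remove discriminatory impact.

```python
# Oversample smaller groups until each is as large as the largest group.
import pandas as pd

def oversample_minority_groups(df: pd.DataFrame, group_col: str,
                               seed: int = 0) -> pd.DataFrame:
    counts = df[group_col].value_counts()
    target = counts.max()                      # size of the largest group
    parts = []
    for group, n in counts.items():
        subset = df[df[group_col] == group]
        # Sample with replacement only for groups smaller than the target.
        parts.append(subset.sample(n=target, replace=(n < target),
                                   random_state=seed))
    # Shuffle so the groups are interleaved in the balanced training set.
    return pd.concat(parts).sample(frac=1, random_state=seed).reset_index(drop=True)

# Hypothetical usage: balanced = oversample_minority_groups(train_df, "gender")
```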

The guidance recommends that businesses put in place policies and good practices to address risks related to bias and discrimination and undertake robust testing of the AI system on an ongoing basis against selected key performance metrics.

3. SECURITY ASSESSMENT AND DATA MINIMISATION IN AI SYSTEMS

The third section of the guidance is aimed at technical specialists and covers two main issues:

(a) Businesses should assess the security risks AI introduces and take steps to manage the risks of privacy attacks on AI systems:

AI systems introduce new kinds of complexity not found in more traditional IT systems. AI systems might also rely heavily on third party code and are often integrated with several other existing IT components. This complexity might make it more difficult to identify and manage security risks. As a result, businesses should ensure that they actively monitor and take into account the state-of-the-art security practices when using personal data in an AI context. Businesses should use these practices to assess AI systems for security risks and ensure that their staff have appropriate skills and knowledge to address these security risks. Businesses should also ensure that their procurement process includes sufficient information sharing between the parties to perform these assessments.

The guidance warns against two kinds of privacy attacks which allow an attacker to infer the personal data of the individuals whose data was used to train the AI system.

The guidance then suggests some practical technical steps that businesses can take to manage the risks of such privacy attacks.

The guidance also warns against novel risks, such as adversarial examples, which allow attackers to feed modified inputs into an AI model so that they will be misclassified by the AI system. The ICO notes that in some cases this could lead to a risk to the rights and freedoms of individuals (e.g. if a facial recognition system is tricked into misclassifying an individual as someone else). This would raise issues not only under data protection laws but possibly also under the Network and Information Systems (NIS) Directive.

(b) Businesses should take steps to minimise personal data when using AI systems and adopt appropriate privacy-enhancing methods:

AI systems generally require large amounts of data, but the GDPR data minimisation principle requires businesses to identify the minimum amount of personal data they need to fulfil their purposes. This can create some tension, but the guidance suggests steps businesses can take to ensure that the personal data used by the AI system is "adequate, relevant and limited".

The guidance recommends that individuals accountable for the risk management and compliance of AI systems are familiar with techniques such as: perturbation (i.e. adding 'noise' to data), using synthetic data, adopting federated learning, using less "human readable" formats, making inferences locally rather than on a central server, using privacy-preserving query approaches, and considering anonymisation and pseudonymisation of the personal data. The guidance goes into some detail for each of these techniques and explains when they might be appropriate.
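As a concrete illustration of the first technique in that list, perturbation can be as simple as adding calibrated random noise to values before they are used. The Laplace mechanism below is a standard construction from differential privacy; the parameters are illustrative assumptions, not values the ICO recommends.

```python
# Perturbation sketch: add Laplace noise so individual values are obscured.
import numpy as np

def perturb(values: np.ndarray, sensitivity: float, epsilon: float,
            seed: int = 0) -> np.ndarray:
    # Smaller epsilon means stronger privacy and noisier output.
    rng = np.random.default_rng(seed)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=values.shape)
    return values + noise

ages = np.array([34.0, 29.0, 51.0, 42.0])      # toy personal data
print(perturb(ages, sensitivity=1.0, epsilon=0.5))
```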

Importantly, ensuring security and data minimisation in AI systems is not a static process. The ICO suggests that compliance with data protection obligations requires ongoing monitoring of trends and developments in this area and being familiar with and adopting the latest security and privacy-enhancing techniques for AI systems. As a result, any contractual documentation that businesses put in place with service providers should take these privacy concerns into account.

4. INDIVIDUAL RIGHTS IN AI SYSTEMS

The final section of the guidance is aimed at compliance specialists and covers two main areas:

(a) Businesses must comply with individual rights requests in relation to personal data in all stages of the AI lifecycle, including training data, deployment data and data in the model itself:

Under the GDPR, individuals have a number of rights relating to their personal data. The guidance states that these rights apply wherever personal data is used at any of the various stages of the AI lifecycle from training the AI model to deployment.

The guidance is clear that even if the personal data is converted into a form that makes the data potentially much harder to link to a particular individual, this is not necessarily considered sufficient to take the data out of scope of the data protection law because the bar for anonymisation of personal data under the GDPR is high.

If it is possible for an organisation to identify an individual in the data, directly or indirectly (e.g. by combining it with other data held by the organisation or other data provided by the individual), the organisation must respond to requests from individuals to exercise their rights under the GDPR (assuming that the organisation has taken reasonable measures to verify their identity and no other exceptions apply). The guidance recognises that the use of personal data with AI may sometimes make it harder to fulfil individual rights, but warns that such requests should not be regarded as manifestly unfounded or excessive just because it may be harder to fulfil GDPR obligations in the context of AI. The guidance also provides further detail about how businesses should comply with specific individual rights requests in the context of AI.

(b) Businesses should consider the requirements necessary to support a meaningful human review of any decisions made by, or with the support of, AI using personal data:

There are specific provisions in the GDPR (particularly Article 22 GDPR) covering individuals' rights where processing involves solely automated individual decision-making, including profiling, with legal or similarly significant effects. Businesses that use such decision-making must tell individuals whose data they are processing that they are doing so for automated decision-making and give them "meaningful information about the logic involved, as well as the significance and the envisaged consequences" of the processing. The ICO and the European Data Protection Board have both previously published detailed guidance on the obligations concerning automated individual decision-making which can be of further assistance.

The GDPR requires businesses to implement suitable safeguards, such as the right to obtain human intervention, to express one's point of view, to contest the decision or to obtain an explanation of the logic of the decision. The guidance mentions two particular reasons why AI decisions might be overturned: (i) if the individual is an outlier and their circumstances are substantially different from those considered in the training data, and (ii) if the assumptions in the AI model can be challenged, e.g. because of specific design choices. Therefore, businesses should consider the requirements necessary to support a meaningful human review of any solely automated decision-making process (including the interpretability requirements, training of staff and giving them appropriate authority). The guidance from the ICO and The Alan Turing Institute on Explaining decisions made with AI considers this issue in further detail (for more information on that guidance, please see our alert here).

In contrast, decisions that are not fully automated but for which the AI system provides support to a human decision-maker do not fall within the scope of Article 22 GDPR. However, the guidance is clear that a decision does not fall outside of the scope of Article 22 just because a human has "rubber-stamped" it and the human decision-maker must have a meaningful role in the decision-making process to take the decision-support tool outside the scope of Article 22.

The guidance also warns that meaningful human oversight requires businesses to address the risks of automation bias by human reviewers (i.e. relying on the output generated by the decision-support system without using their own judgment) and the risks of a lack of interpretability (i.e. outputs from AI systems that are difficult for a human reviewer to interpret or understand, for example, in deep-learning AI models). The guidance provides some suggestions as to how such risks might be addressed, including by considering these risks when designing or procuring the AI systems, by training staff, and by effectively monitoring the AI system and the human reviewers.

Conclusion

This guidance from the ICO is another welcome step for the rising number of businesses that use AI systems in their day-to-day operations. It also provides more clarity on how businesses should interpret their data protection obligations as they apply to AI. This is especially important because this area of compliance is attracting the focus of different regulators.

The ICO mentions "monitoring intrusive and disruptive technology" as one of its three focus areas and AI as one of its priorities for its regulatory approach during the COVID-19 pandemic and beyond. As a result, the ICO is also running a free webinar series in autumn 2020 on various topics covered in the guidance to help businesses achieve data protection compliance when using AI systems. The ICO stated on the AI Accountability and Governance webinar on 22 September 2020 that it is currently developing its AI auditing capabilities so it can use its powers to conduct audits of AI systems in the future. However, the ICO staff on the webinar confirmed the ICO would take into account the effect of the COVID-19 pandemic before conducting any AI audits.

Other regulators have also been interested in the implications of AI. For example, the Financial Conduct Authority is working with The Alan Turing Institute on AI transparency in financial markets. Businesses should therefore follow the guidance from their respective regulators and put in place a strategy for addressing the data protection (and other) risks associated with using AI systems.


Censorship and the Dangers of Being Silenced – PRNewswire

With her book Outrages, described as "a long-overdue literary investigation into censorship and the life of a tormented trailblazer" by Oprah Magazine, Wolf chronicles the struggles and eventual triumph of John Addington Symonds, a Victorian-era poet, biographer, and critic who penned what became a foundational text on our modern understanding of human sexual orientation and LGBTQ+ legal rights.

Symonds, as Wolf highlights, was writing at a time when anything interpreted as homoerotic could be used as evidence in trials leading to harsh sentences under British law. Wolf sees a connective thread from those draconian laws of Victorian England to this moment, when marginalized people and groups are being targeted, silenced, and often jailed.

"Naomi Wolf's Outrages is a vitally important book to discuss right now, not just because of its literary scholarship, which is superb, but because it speaks so clearly to the present societal moment. It's a moment that is incredibly dangerous, a potential turning point," according to Wolf's publisher, Margo Baldwin, of Chelsea Green Publishing.

Naomi Wolf's most recent books include the New York Times bestsellers Vagina, The End of America, and Give Me Liberty, in addition to the landmark bestseller The Beauty Myth. She lives in the Hudson River Valley.

This free event takes place Thursday, November 5, at 6:30 p.m. via Zoom, where registrants will have the opportunity to engage in conversation with the author.

SOURCE Chelsea Green Publishing

https://drnaomiwolf.com


ON THE SAME PAGE: Banned Books Week highlights censorship and freedom – Manistee News Advocate

By Kim Jankowiak and Becca Brown, Manistee County Library

"Censorship is a dead end. Find your freedom to read." This is the theme for Banned Books Week, Sept. 27 through Oct. 3, which has been celebrated annually since 1982.

The American Library Association website states, "Typically held during the last week of September, it spotlights current and historical attempts to censor books in libraries and schools. It brings together the entire book community (librarians, booksellers, publishers, journalists, teachers, and readers of all types) in shared support of the freedom to seek and to express ideas, even those some consider unorthodox or unpopular."

Manistee County Library carries many of these banned or challenged titles, fulfilling its obligation to have something for everyone.

The Handmaid's Tale by Margaret Atwood has been challenged for vulgarity, sexual overtones and profanity. It was nominated for five awards, won two, and was adapted into a film, an opera and a television series. The sequel was published in 2019.

City of Thieves by David Benioff was removed from a Florida high school due to vulgar language. It was originally assigned to high school students with an alternate title available; a parent complained, and the title was put on the banned book list.

John Knowles' A Separate Peace has received many challenges from parents due to language. This title won the William Faulkner Foundation Award in 1961.

Beartown by Fredrik Backman was also challenged after being assigned to students. Parents found the content vulgar, graphic, and just unnecessary.

Fahrenheit 451, the Ray Bradbury classic, has also come under fire when assigned, because of profanity and using God's name in vain. In 2018, a review board evaluation chose to retain the book; some students plan to petition again.

The Hate U Give by Angie Thomas was on the New York Times bestseller list. It has been challenged because it is viewed as almost an indoctrination of distrust of police, as well as for drug use, profanity and offensive language.

John Green's Looking for Alaska was named the most challenged book of 2015. Complaints cited offensive language and explicit sex. The author posted a video on YouTube pointing out that the entire book needs to be read, not just random passages: "Text is meaningless without context."

Mariko Tamaki wrote This One Summer, which received a Printz Honor and was the first graphic novel to receive a Caldecott Honor. Another award-winning title that was challenged was Drama by Raina Telgemeier. Both were targeted because of LGBT characters and sexual content.

Nonfiction has also been challenged.

The Immortal Life of Henrietta Lacks by Rebecca Skloot tells a story of race, medicine, equality and education. When it was assigned to a student, a parent challenged it as pornographic.

Foul language and explicit and disturbing material were the reasons given for challenging The Glass Castle, a memoir by Jeannette Walls. A parent contended that students shouldn't be taught controversial issues but should instead choose books that inspire our children to greatness.

The Holocaust memoir Night by Elie Wiesel was challenged for profanity, violence and horror.

The First Amendment guarantees the right to freedom of speech and expression, among other protections, and Banned Books Week gives libraries, teachers, booksellers and readers an opportunity to support authors as they share a vast range of stories.

A list of banned books is available at ala.org/advocacy/bbooks.


Social media censorship in Egypt targets women on TikTok – The Week Magazine

Looking at Haneen Hossam's TikTok account, one might wonder why her content landed the Egyptian social media user in jail. In one post, she explains for her followers the Greek mythological story of Venus and Adonis, which is also a Shakespeare poem.

Mawada al-Adham does similarly anodyne things that are familiar to anyone who observes such social influencers, like giving away iPhones and driving a fancy car.

They are just two of the nine women arrested in Egypt this year for what they posted on TikTok. Mostly, their videos are full of dancing to Arabic songs, usually a genre of electro-pop, Egyptian sha'abi folk music called mahraganat, or festival tunes. The clips feature a typically TikTok style with feet planted, hands gesticulating and eyebrows emoting.

Meanwhile, the Trump administration has put TikTok and its Chinese parent company, ByteDance, in its sights with another escalation against Beijing. The U.S. Commerce Department announced in September that TikTok, and another Chinese-owned app, WeChat, would be blocked from U.S. app stores.

In Egypt, the arrests are about dictating morality rather than any kind of geopolitical struggle or international tech rivalry. But what exactly the government finds legally objectionable about these women's online content is ambiguous.

"They themselves would have never imagined that they would go to jail and be sentenced for what they were doing because what they're doing is basically what everyone else does on social media," said Salma El Hosseiny of the International Service for Human Rights, a nongovernmental organization based in Geneva. "Singing and dancing as if you would at an Egyptian wedding, for example."

Hosseiny said that these women were likely targeted because they're from middle- or working-class backgrounds and dance to a style of music shunned by the bourgeoisie for scandalous lyrics that touch on taboo topics.

"You have social media influencers who come from elite backgrounds, or upper-middle class, or rich classes in Egypt, who would post the same type of content. These women are working-class women," she added. "They have stepped out of what is permitted for them."

Criminalizing the internet

They were charged under a cybercrime law passed in 2018, as well as existing laws in the Egyptian Penal Code that have been employed against women in the past.

Yasmin Omar, a researcher at The Tahrir Institute for Middle East Policy in Washington, said the cybercrime law is vague when it comes to defining what's legal and what isn't.

"It was written using very broad terms that could be very widely interpreted and criminalizing a lot of acts that are originally considered as personal freedom," she said. "Looking at it, you would see that anything you might post on social media, anything that you may use [on] the internet could be criminalized under this very wide umbrella."

Egypt's cybercrime law is part of a larger effort by the government to increase surveillance of online activities. As TikTok became much more popular during the pandemic, prosecutors started looking there too, Omar said.

"When I write anything on my social media accounts, I know that it could be seen by an official whose job it is to watch the internet and media platforms," said Omar, who added that that surveillance often leads to widespread repression.

"The state is simply arresting whoever says anything that criticizes its policy, its laws, its practices ... even if it's just joking. It's not even allowed."

The arrests of TikTokers show that this law isn't just about monitoring and controlling political dissent, but is also used to police conservative social norms.

Menna Abdel Aziz, 17, made a live video on Facebook. Her face was bruised and she told viewers that she had been raped and was asking for help.

The police asked her to come in, and when she did, Omar said, they looked at her TikTok account and decided she was inciting debauchery and harming family values in Egypt, essentially blaming the victim for what had occurred.

This past summer, there were a number of particularly shocking allegations involving rape and sexual assault in Egypt. First, dozens of women accused a young man at the American University in Cairo (AUC) of sexual violence ranging from blackmail to rape. And in another case, a group of well-connected men were accused of gang-raping a young woman in Cairo's Fairmont Hotel in 2014 and circulating a video of the act.

The cases garnered a lot of attention within Egypt. Many Egyptian women were shocked by the horrible details of the cases but not surprised about the allegations or that the details had been kept under wraps for so long.

"In Egypt, sexual violence and violence against women is systematic," Hosseiny said. "It's part of the daily life of women to be sexually harassed."

'To go after women'

A UN Women report in 2014 said that 99.3 percent of Egyptian women reported being victims of sexual harassment. Yet women are often culturally discouraged from reporting sexual harassment in Egypt's traditional society.

"They are investing state resources to go after women who are singing and dancing on social media, and trying to control their bodies, and thinking that this is what's going to make society better and a safer place," Hosseiny said, "by locking up women, rather than by changing and investing in making Egypt a safe place for women and girls."

When prosecutors started investigating the accused in that high-profile Fairmont case, it looked like real progress and a victory for online campaigning by women. The state-run National Council for Women even encouraged the victim and witnesses to come forward, promising the women protection. But that pledge by the state did not materialize.

"Somehow, the prosecution decided to charge the witnesses," said Omar, the researcher. "Witnesses who made themselves available, made their information about their lives, about what they know about the case all this information was used against them."

Once again, Egyptian authorities looked at the women's social media accounts, and then investigated the women for promoting homosexuality, drug use, debauchery, and publication of false news. One of the witnesses arrested is an American citizen.

When pro-state media outlets weighed in on the TikTok cases, they also had a message about blame, Hosseiny said. The coverage used sensational headlines and showed photos of the women framed in a sexual way. This contrasted with the depictions in rape cases in which the accused men's photos were blurred and only their initials printed.

Social media has played an important role in Egyptian politics during the last decade. In 2011, crowds toppled the regime of military dictator Hosni Mubarak. That uprising was in part organized online with Twitter and Facebook. In 2018, the former army general, and current president, Abdel Fattah al-Sisi, said he would maintain stability in Egypt.

"Beware! What happened seven years ago is never going to happen again in Egypt," he swore to a large auditorium full of officials.

Samer Shehata, a professor at the University of Oklahoma, said Egypt's military-backed regime is wary of the implications of anything posted online, even if it's just dancing.

"I think there has been a heightened paranoia as a result of hysteria ... about the possible political consequences of social media," he said. "I think that they certainly have those kinds of concerns in the back of their minds as well."

Of the nine women charged with TikTok crimes, four have been convicted and three have appeals set for October.

Menna Abdel Aziz, the young woman who called for help online, was recently released from detention, and her case is being dismissed with no charges.

This article originally appeared at PRI's The World.


Facebook plans political censorship in anticipation of chaos and violence in the 2020 US elections – WSWS

By Kevin Reed 23 September 2020

The social media monopoly Facebook is preparing to take exceptional measures including aggressive action to restrict the circulation of content on its network if the 2020 US elections on November 3 result in chaos or violence.

In an interview with Facebook Vice President of Global Affairs and Communications Nick Clegg, the Financial Times reported that the social media corporation had drawn up plans for handling a range of outcomes including widespread civic unrest or other unprecedented political dilemmas during the counting of in-person and mail-in ballots.

Clegg told FT, "There are some break-glass options available to us if there really is an extremely chaotic and, worse still, violent set of circumstances." Although he did not reveal any details about Facebook's planned responses, Clegg referenced the actions taken by the world's number one social media platform previously in countries where social unrest erupted.

Clegg said, "We have acted aggressively in other parts of the world where we think that there is real civic instability and we obviously have the tools to do that [again]," adding that the company had taken "pretty exceptional measures to significantly restrict the circulation of content on our platform." FT said Clegg was referring to the actions taken by Facebook to reduce the content reach of malicious actors and repeated rule breakers during recent periods of unrest in Sri Lanka and Myanmar.

The Right Honorable Sir Nick Clegg is a leading political figure in the UK. He was the Deputy Prime Minister under Prime Minister David Cameron (2010-2015) and leader of the Liberal Democrats from 2007-2015. He was hired by Facebook in October 2018 as a lobbyist and chief international public relations officer.

While the FT report emphasized concerns about how President Trump would use social media to interfere in the process of the elections or contest the results or call for violent protest, potentially triggering a constitutional crisis, the real fear for Facebook and Clegg is that there will be a mass response to the election crisis that will move outside of the US two-party political establishment.

Significantly, FT says, Facebook has been exploring how to handle about 70 different potential scenarios, according to a person familiar with the situation, with staff including world-class military scenario planners. In other words, Facebook is collaborating with military-intelligence and bracing for the eruption of mass social unrest in the US during the 2020 elections.

Clegg also said that any extraordinary measures taken by Facebook will fall to a team of top executives, including himself and chief operating officer Sheryl Sandberg, with chief executive Mark Zuckerberg holding the right to overrule positions. He said, "We've slightly reorganized things such that we have a fairly tight arrangement by which decisions are taken at different levels [depending on] the gravity of the controversy attached."

If true, the description by Clegg makes clear that decisions to utilize the unprecedented power of Facebook political censorship rest in the hands of a relatively small number of corporate executives. Clegg added that Facebook was committing a significant amount of resources to its Election Operations Center and voter information hub.

Facebook has launched an infrastructure within its platform that it has characterized as the largest voting information campaign in American history. At the top of the priority list of this information campaign is election security and fighting interference, which involves teams of more than 35,000 people.

Facebook says it is increasing its coordination with law enforcement agencies like the FBI and the Department of Homeland Security, and with state officials, civil society groups, and other technology companies. In explaining how they will prevent election interference, Facebook states that its security teams will identify suspicious activity and take down coordinated networks of inauthentic accounts, Pages and Groups that seek to manipulate public debate.

It should be pointed out that Facebooks election information infrastructure has been built in collaboration with the Bipartisan Policy Center (BPC). BPC is a Washington, D.C. think tank founded in 2007 by former leading US political figures Howard Baker and Bob Dole (Republicans) and Tom Daschle and George Mitchell (Democrats) to preserve and defend the US two-party system amid growing conflict within the political establishment.

Meanwhile, the FBI is spreading its own misinformation in advance of the elections. In a Public Service Announcement on Tuesday, the FBI said that foreign actors and cybercriminals could create new websites, change existing websites, and create or share corresponding social media content to spread false information in an attempt to discredit the electoral process and undermine confidence in US democratic institutions.

The latest news and information about Facebook's censorship plans for the US elections confirms the warnings issued previously by the World Socialist Web Site about the meaning of the ongoing collaboration of the tech giants with the intelligence state, both during the pandemic and on election security. The secret meetings being held with White House officials and federal police and intelligence agencies were preparing the censorship regime that is now at least partially being made public.

Going back to 2016, there has been a steady stream of unproven allegations of foreign interference by the Russians, the Iranians and the Chinese in the US elections. In reality, the threat to US democratic institutions comes from the Trump administration and the refusal of the Democrats to do anything about it. Additionally, the confidence of the public in these institutions is being undermined each day by the grotesque social inequality between the super rich who control the Democrats and Republicans and the reality of life facing millions of people under the capitalist system.

The degree of emergency censorship planning by Facebook, in concert with police agencies and the surveillance state, is a measure of the awareness within ruling circles of the potential for an eruption of mass struggles by the working class and youth in the US after what will be, by election day, nearly nine months of economic and social crisis triggered by the coronavirus pandemic.

The ruling establishment is well aware that social media platforms such as Facebook are being utilized to organize the mass protests across the country against police violence as well as the growing opposition within the working class to returning to work and school under the unsafe health conditions of the pandemic. Above all, the censorship efforts are aimed at preventing the socialist political analysis and program of the World Socialist Web Site from reaching the working class under conditions of mass protests and a constitutional crisis in November.

While the open threats by Donald Trump to discredit or outright reject the results of the election and refuse to leave office should be taken seriously, workers and young people cannot place an ounce of confidence in the Biden-Harris campaign or the Democratic Party to defend the Constitution or uphold democratic forms of rule in the US. The working class must intervene independently of both parties of the corporate and financial elite, take matters into its own hands, and fight for the program of revolutionary socialism represented in the 2020 elections only by the Socialist Equality Party (SEP) and its candidates, Joseph Kishore for US president and Norissa Santa Cruz for US vice president.

The author also recommends:

White House bans TikTok and WeChat: A major intensification of internet censorship [19 September 2020]

Facebook announces political censorship plan in advance of US presidential election [7 September 2020]

Big tech firms meet with US national security agencies in advance of November elections [14 August 2020]

Read the rest here:

Facebook plans political censorship in anticipation of chaos and violence in the 2020 US elections - WSWS

Banned Books Week: Milner Celebrates the Freedom to Read – Illinois State University News

Check out Banned Books Week, September 27 through October 3, with Milner Library

During the week of September 27 through October 3, Milner Library is joining libraries nationally to celebrate Banned Books Week. This annual observance promotes our freedom to read and highlights past and present attempts to censor books in libraries and schools. The national theme this year is "Censorship is a Dead End. Find your Freedom to Read." Milner Library values free and open access to information, and we hope this week's events will inspire folks to consider the impact of censorship.

Throughout the week, Milner's social media will highlight books that have been challenged or banned at libraries and schools throughout the country. Additionally, check out this collection of commonly banned or challenged books that you can borrow from the library to celebrate your freedom to read. It includes 2019's top ten most challenged books, as determined by data gathered throughout the year by the American Library Association's Office for Intellectual Freedom.

When a book is challenged, it means there has been an attempt to remove or restrict materials based upon the objections of a person or group. A banned book is one whose materials were actually removed. Books are still challenged today, although in the majority of cases the books have remained available, thanks in part to the efforts of librarians, teachers, students, and community members who stand up for our freedom to read. Learn more about the history of Banned Books Week.

Join Milner on Facebook, Twitter, and Instagram as we explore some of these banned books and celebrate the freedom to read! Interested in learning more about banned and challenged books? Check out this FAQ from the American Library Association.

Read more from the original source:

Banned Books Week: Milner Celebrates the Freedom to Read - Illinois State University News

The View co-host Sunny Hostin accuses ABC of racist censorship – TheGrio

Sunny Hostin detailed claims of racism at ABC in her new book and shared how the network attempted to have the narrative removed.

In the foreword to her new memoir, I Am These Truths, the lawyer opened up about her experiences at the network, including a racist incident that resulted in the firing of an executive. In June, HuffPost published a report claiming that Barbara Fedida, an ABC News executive in charge of talent, had made multiple insensitive remarks toward Black network talent such as Robin Roberts and was the subject of over a dozen human resources complaints.

The Los Angeles Times reported that Fedida allegedly used the term "low rent" to describe Hostin. After an investigation, the executive lost her job, but the damage was already done. The View co-host used her platform on the show to describe the feeling of being targeted by racist comments.

"It was a tough weekend for me, and I was really disappointed and saddened and hurt when I learned about the racist comments that were made, allegedly, about me, my colleagues, and my dear friends," Hostin said. "Because, if true, to reference Robin Roberts, who is one of the most respected and beloved journalists in our country, as 'picking cotton,' to reference me, someone who's been very open about having grown up in public housing, as being 'low rent,' tells me that systemic racism touches everything and everyone in our society, regardless of social stature."

Hostin expanded on these feelings in her book's foreword. Entertainment Tonight reports that at the time of the exposé, her memoir had already gone to the publisher. She called her agent and decided to deliberately add her account of these events to the book's foreword.

"I've got a book coming out. And the book had already gone to the publisher. And I called my agent and I said, 'I've got to write about this and I want it to be at the very beginning of the book. Because this is my truth as I sit here today... Because this is the truth that I'm living right now. And if that's gonna help any woman, help anybody that's going through this during this time in our country, I gotta do it.' And he said, 'You better do it.' And I literally wrote that foreword in about 15 to 20 minutes," she said to the outlet.

According to The Daily Beast, she penned claims that she made less than her White counterparts and initially had a dressing room on a different floor from the rest of the cast.

Her truth was not told without resistance: Hostin sought legal counsel when ABC pushed back against segments of her book.

"I was surprised that what was asked of me was to change the truth, to change my story," Hostin remarked on Andy Cohen Live on Monday.

"I think it's one thing if I got something wrong, and, to be clear, they caught things that were wrong: timing things, and direct quotes that should have been checked more closely. And I appreciated those things. But then they wanted me to change things that I experienced, discriminatory things, and I just felt that that wasn't fair, because the title of the book is I Am These Truths," she continued.

Hostin addressed the racist undertones of ABC's alleged attempted censorship in the foreword.

"My television agent and my book agent emailed me to express confusion that a news organization would try to censor a Puerto Rican, African American woman's story while they were covering global demonstrations demanding racial equity," the foreword stated.

"One of them even calculated the percentages of people of color on the executive boards at Disney, ABC Entertainment, and ABC News; according to him, those figures ranged from 7 to 12 percent. I asked my attorneys to intervene, and thankfully ABC relented. I didn't want to believe that racism played a part in their revision requests; we were just dotting some i's and crossing some t's, right?"

Beyond the ABC saga, I Am These Truths explores her Puerto Rican and Black upbringing in the Bronx and her professional journey as a federal prosecutor and journalist. Hostin shared with Bustle that she was nervous to pen a memoir and had received encouragement from Supreme Court Justice Sonia Sotomayor.

"When I had [the memoir] in front of me, I constantly had these moments when I realized, 'Wow, this experience was not great.' That was a failure, but I turned that failure into a lesson, which is an important tool," she said to the outlet.

See the original post:

The View co-host Sunny Hostin accuses ABC of racist censorship - TheGrio