UK Information Commissioner's Office publishes guidance on artificial intelligence and data protection

On 30 July 2020, the UK's Information Commissioner's Office ("ICO") published new guidance on artificial intelligence ("AI") and data protection. The ICO is also running a series of webinars to help organisations and businesses comply with their obligations under data protection law when using AI systems to process personal data. This legal update summarises the main points from the guidance and the AI Accountability and Governance webinar hosted by the ICO on 22 September 2020.

As AI increasingly becomes a part of our everyday lives, businesses worldwide have to navigate the expanding landscape of legal and regulatory obligations associated with the use of AI systems. The ICO guidance recognises that using AI can have indisputable benefits, but that it can also pose risks to the rights and freedoms of individuals. The guidance offers a framework for how businesses can assess and mitigate these risks from a data protection perspective. It also stresses the value of considering data protection at an early stage of AI development, emphasising that mitigation of AI-associated risks should begin at the design stage of the AI system.

Although the new guidance is not a statutory code of practice, it represents what the ICO deems to be best practice for data protection-compliant AI solutions and sheds light on how the ICO interprets data protection obligations as they apply to AI. However, the ICO confirmed that businesses may be able to achieve compliance in other ways. The guidance is the result of the ICO consultation on the AI auditing framework, which was open for public comments earlier in 2020. It is designed to complement existing AI resources published by the ICO, including the recent Explaining decisions made with AI guidance produced in collaboration with The Alan Turing Institute (for further information on this guidance, please see our alert here) and the Big Data and AI report.

Who is the guidance aimed at and how is the guidance structured?

The guidance can be useful for (i) those undertaking compliance roles within organisations, such as data protection officers, risk managers, general counsel and senior management, and (ii) technology specialists, namely AI developers, data scientists, software developers / engineers and cybersecurity / IT risk managers.

The guidance is split into four sections: (1) accountability and governance implications of AI; (2) ensuring lawfulness, fairness and transparency in AI systems; (3) security assessment and data minimisation in AI systems; and (4) individual rights in AI systems.

Although the ICO notes that the guidance is written so that each section is accessible for both compliance and technology specialists, the ICO states that sections 1 and 4 are primarily aimed at those in compliance roles, with sections 2 and 3 containing the more technical material.

1. ACCOUNTABILITY AND GOVERNANCE IMPLICATIONS OF AI

The first section of the guidance focuses on the accountability principle, one of the seven data protection principles under the EU General Data Protection Regulation ("GDPR"). The accountability principle requires organisations to be able to demonstrate compliance with data protection laws. Though the ICO acknowledges the ever-increasing technical complexity of AI systems, the guidance highlights that the onus is on organisations to ensure their governance and risk capabilities are proportionate to their use of AI systems.

The ICO is clear in its message that organisations should not "underestimate the initial and ongoing level of investment and effort that is required" when it comes to demonstrating accountability for use of AI systems when processing personal data. The guidance indicates that senior management should understand and effectively address the risks posed by AI systems, such as through ensuring that appropriate internal structures exist, from policies to personnel, to enable businesses to effectively identify, manage and mitigate those risks.

With respect to AI-specific implications of accountability, the guidance focuses on three areas:

(a) Businesses processing personal data through AI systems should undertake DPIAs:

The ICO has made it clear that a data protection impact assessment ("DPIA") will be required in the vast majority of cases in which an organisation uses an AI system to process personal data, because AI systems may involve processing which is likely to result in a high risk to individuals' rights and freedoms.

The ICO stresses that DPIAs should not be considered just a box-ticking exercise. A DPIA allows organisations to demonstrate that they are accountable when making decisions with respect to designing or acquiring AI systems. The ICO suggests that organisations might consider having two versions of the DPIA: (i) a detailed internal one, used by the organisation to help it identify and minimise the data protection risks of the project, and (ii) an external-facing one, which can be shared with individuals whose data is processed by the AI system to help them understand how the AI is making decisions about them.

The DPIA should be considered a living document which gets updated as the AI system evolves (which can be particularly relevant for deep learning AI systems). The guidance notes that where an organisation decides that it does not need to undertake a DPIA with respect to any processing related to an AI system, the organisation will still need to document how it reached such a conclusion.

The guidance provides helpful commentary on a number of considerations that businesses may need to grapple with when conducting a DPIA for AI systems.

The ICO also refers businesses to its general guidance on DPIAs and how to complete them outside the context of AI.

(b) Businesses should consider the data protection roles carried out by different parties in relation to AI systems and put in place appropriate documentation:

The ICO acknowledges that assigning controller / processor roles in respect of AI systems can be inherently complex, given the number of actors involved in the subsequent processing of personal data via the AI system. In this respect, the ICO draws attention to its work on data protection and cloud computing, with revisions to the ICO's Cloud Computing Guidance expected in 2021.

The ICO outlines a number of examples in which organisations take the role of controller / processor with respect to AI systems. The ICO is planning to consult on each of these controller and processor scenarios in the Cloud Computing Guidance review, so organisations can expect further clarity in 2021.

(c) Businesses should put in place documentation for accountability purposes to identify any "trade-offs" when assessing AI-related risks:

The ICO notes that there are a number of "trade-offs" to weigh when assessing different AI-related risks. Some common examples of such trade-offs are included in the guidance itself, such as the tension between training an AI system capable of producing statistically accurate output on the one hand, and the data minimisation concerns raised by the quantity of personal data required to train such a system on the other.

The guidance provides advice to businesses seeking to manage the risks associated with such trade-offs. The ICO recommends putting in place effective and accurate documentation processes for accountability purposes, and also advises businesses to consider specific issues such as: (i) whether, where an organisation acquires an AI solution, the associated trade-offs formed part of the organisation's due diligence; (ii) the social acceptability of certain trade-offs; and (iii) whether mathematical approaches can mitigate the privacy risks associated with trade-offs.

2. ENSURING LAWFULNESS, FAIRNESS AND TRANSPARENCY IN AI SYSTEMS

The second section of the guidance focuses on ensuring lawfulness, fairness and transparency in AI systems and covers three main areas:

(a) Businesses should identify the purpose and an appropriate lawful basis for each processing operation in an AI system:

The guidance makes it clear that organisations must identify the purpose and an appropriate lawful basis for each processing operation in an AI system and specify these in their privacy notice.

It adds that it might be more appropriate to choose different lawful bases for the development and deployment phases of an AI system. For example, while performance of a contract might be an appropriate ground for processing personal data to deploy an AI system (e.g. to provide a quote to a customer before entering into a contract), it is unlikely that relying on this basis would be appropriate to develop an AI system.

The guidance makes it clear that legitimate interests provide the most flexible lawful basis for processing. However, if businesses rely on it, they are taking on an additional responsibility for considering and protecting people's rights and interests and must be able to demonstrate the necessity and proportionality of the processing through a legitimate interests assessment.

The guidance mentions that consent may be an appropriate lawful basis but individuals must have a genuine choice and be able to withdraw the consent as easily as they give it.

It might also be possible to rely on legal obligation as a lawful basis for auditing and testing the AI system if businesses are able to identify the specific legal obligation they are subject to (e.g. under the Equality Act 2010). However, it is unlikely to be appropriate for other uses of that data.

If the AI system processes special category or criminal convictions data, then the organisation will also need to ensure compliance with additional requirements in the GDPR and the Data Protection Act 2018.

(b) Businesses should assess the effectiveness of the AI system in making statistically accurate predictions about individuals:

The guidance notes that organisations should assess the merits of using a particular AI system in light of its effectiveness in making statistically accurate, and therefore valuable, predictions. In particular, organisations should monitor the system's precision and sensitivity. Organisations should also prioritise avoiding certain kinds of errors based on the severity and nature of the particular risk.
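
By way of illustration only (this sketch does not form part of the ICO guidance), precision and sensitivity can be monitored with standard tooling; the labels and the use of scikit-learn below are assumptions made for the example:

```python
# Minimal sketch: computing the two metrics the ICO highlights,
# precision and sensitivity (recall), on a held-out evaluation set.
from sklearn.metrics import precision_score, recall_score

# y_true: actual outcomes; y_pred: the AI system's predictions
# for the same individuals (hypothetical values).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

precision = precision_score(y_true, y_pred)   # of the positive predictions, how many were correct
sensitivity = recall_score(y_true, y_pred)    # of the actual positives, how many were caught

print(f"precision: {precision:.2f}, sensitivity: {sensitivity:.2f}")
```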

Businesses should agree regular updates (retraining of the AI system) and reviews of statistical accuracy to guard against changing data, for example, where the data originally used to train the AI system no longer reflects the current users of the system.

(c) Businesses should address the risks of bias and discrimination in using an AI system:

AI systems may learn from data which is imbalanced (e.g. because the proportion of different genders in the training data differs from that in the population using the AI system) and / or reflects past discrimination (e.g. if, in the past, male candidates were invited to job interviews more often), which could lead the system to produce outputs that have a discriminatory effect on individuals. The guidance makes it clear that obligations relating to discrimination under data protection law are separate from, and additional to, organisations' obligations under the Equality Act 2010.

The guidance mentions various approaches developed by computer scientists studying algorithmic fairness which aim to mitigate AI-driven discrimination. For example, in cases of imbalanced training data, it may be possible to balance the data by adding or removing data about under- or over-represented subsets of the population. In cases where the training data reflects past discrimination, the data may be manually modified, the learning process may be adapted, or the model may be modified after training. However, the guidance warns that in some cases simply retraining the AI model with a more diverse training set may not be sufficient to mitigate its discriminatory impact, and additional steps might need to be taken.
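
As a purely illustrative sketch of the rebalancing approach the guidance mentions (the dataset and column names below are hypothetical), an under-represented group can be oversampled so that each group is equally represented in the training data:

```python
# Sketch: oversampling an under-represented group using pandas.
import pandas as pd

# Hypothetical training set in which one gender is under-represented.
train = pd.DataFrame({
    "gender": ["M", "M", "M", "M", "F"],
    "hired":  [1, 0, 1, 0, 1],
})

# Oversample each group (with replacement) up to the size of the largest group.
target = train["gender"].value_counts().max()
balanced = pd.concat(
    [group.sample(n=target, replace=True, random_state=0)
     for _, group in train.groupby("gender")],
    ignore_index=True,
)

print(balanced["gender"].value_counts())  # both groups now the same size
# Note: as the guidance warns, rebalancing alone may not remove
# discriminatory patterns learned from historically biased labels.
```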

The guidance recommends that businesses put in place policies and good practices to address risks related to bias and discrimination and undertake robust testing of the AI system on an ongoing basis against selected key performance metrics.

3. SECURITY ASSESSMENT AND DATA MINIMISATION IN AI SYSTEMS

The third section of the guidance is aimed at technical specialists and covers two main issues:

(a) Businesses should assess the security risks AI introduces and take steps to manage the risks of privacy attacks on AI systems:

AI systems introduce new kinds of complexity not found in more traditional IT systems. AI systems may also rely heavily on third-party code and are often integrated with several other existing IT components. This complexity can make it more difficult to identify and manage security risks. As a result, businesses should actively monitor and take into account state-of-the-art security practices when using personal data in an AI context. Businesses should use these practices to assess AI systems for security risks and ensure that their staff have the skills and knowledge needed to address them. Businesses should also ensure that their procurement processes include sufficient information sharing between the parties to perform these assessments.

The guidance warns against two kinds of privacy attacks which allow an attacker to infer the personal data of the individuals whose data was used to train the AI system: (i) model inversion attacks, in which an attacker reconstructs attributes of the training data from the model's outputs, and (ii) membership inference attacks, in which an attacker determines whether a particular individual's data was used to train the model.

The guidance then suggests some practical technical steps that businesses can take to manage the risks of such privacy attacks.
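
The guidance describes those steps in detail; as one hypothetical illustration of the kind of check involved (not a step prescribed by the ICO), a large gap between a model's confidence on its training data and on held-out data is a common indicator of overfitting, which tends to make membership inference attacks more effective:

```python
# Sketch: measuring the train/holdout confidence gap of a classifier.
import numpy as np

def confidence_gap(model, X_train, X_holdout):
    """Mean top-class confidence on training data minus held-out data.

    `model` is assumed to expose a scikit-learn-style predict_proba method.
    A large positive gap suggests overfitting and therefore greater
    exposure to membership inference attacks.
    """
    train_conf = np.max(model.predict_proba(X_train), axis=1).mean()
    holdout_conf = np.max(model.predict_proba(X_holdout), axis=1).mean()
    return train_conf - holdout_conf
```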

The guidance also warns against novel risks, such as adversarial examples, which allow attackers to feed modified inputs into an AI model so that they are misclassified by the AI system. The ICO notes that in some cases this could pose a risk to the rights and freedoms of individuals (e.g. if a facial recognition system is tricked into misclassifying an individual as someone else). This would raise issues not only under data protection laws but possibly also under the Network and Information Systems (NIS) Directive.
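
For illustration (this example is not drawn from the guidance), the well-known fast gradient sign method shows how such adversarial inputs are crafted: each feature is nudged in the direction that most increases the model's loss. The sketch below assumes a simple logistic-regression model with hypothetical parameters:

```python
# Sketch: fast gradient sign method (FGSM) against a logistic model.
import numpy as np

def fgsm_perturb(x, y, w, b, epsilon=0.1):
    """Perturb input x to increase the loss of a model p = sigmoid(w.x + b).

    x: input feature vector; y: true label (0 or 1); w, b: model parameters.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # model's predicted probability
    grad_x = (p - y) * w                           # gradient of the log-loss w.r.t. x
    return x + epsilon * np.sign(grad_x)           # small step in the worst direction

# Hypothetical usage.
x = np.array([1.0, 2.0])
w = np.array([0.5, -0.3])
x_adv = fgsm_perturb(x, y=1, w=w, b=0.1)
```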

(b) Businesses should take steps to minimise personal data when using AI systems and adopt appropriate privacy-enhancing methods:

AI systems generally require large amounts of data, but the GDPR data minimisation principle requires businesses to identify the minimum amount of personal data they need to fulfil their purposes. This can create tension, but the guidance suggests steps businesses can take to ensure that the personal data used by the AI system is "adequate, relevant and limited".

The guidance recommends that individuals accountable for the risk management and compliance of AI systems are familiar with techniques such as: perturbation (i.e. adding 'noise' to data), using synthetic data, adopting federated learning, using less "human readable" formats, making inferences locally rather than on a central server, using privacy-preserving query approaches, and considering anonymisation and pseudonymisation of the personal data. The guidance goes into some detail for each of these techniques and explains when they might be appropriate.
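
As an illustration of the first of these techniques, perturbation, the sketch below adds Laplace noise to a numeric field in the spirit of differential privacy; the data and parameter values are hypothetical and do not come from the guidance:

```python
# Sketch: perturbing a numeric field with calibrated Laplace noise.
import numpy as np

rng = np.random.default_rng(42)

def perturb(values, sensitivity=1.0, epsilon=0.5):
    """Add Laplace noise with scale sensitivity/epsilon to each value.

    A smaller epsilon means more noise and stronger privacy protection,
    at the cost of less accurate data.
    """
    scale = sensitivity / epsilon
    return values + rng.laplace(loc=0.0, scale=scale, size=len(values))

ages = np.array([34.0, 51.0, 27.0, 45.0])
print(perturb(ages))  # noisy values; individual ages are obscured
```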

Importantly, ensuring security and data minimisation in AI systems is not a static process. The ICO suggests that compliance with data protection obligations requires ongoing monitoring of trends and developments in this area and being familiar with and adopting the latest security and privacy-enhancing techniques for AI systems. As a result, any contractual documentation that businesses put in place with service providers should take these privacy concerns into account.

4. INDIVIDUAL RIGHTS IN AI SYSTEMS

The final section of the guidance is aimed at compliance specialists and covers two main areas:

(a) Businesses must comply with individual rights requests in relation to personal data in all stages of the AI lifecycle, including training data, deployment data and data in the model itself:

Under the GDPR, individuals have a number of rights relating to their personal data. The guidance states that these rights apply wherever personal data is used at any of the various stages of the AI lifecycle from training the AI model to deployment.

The guidance is clear that even if personal data is converted into a form that makes it much harder to link to a particular individual, this is not necessarily sufficient to take the data outside the scope of data protection law, because the bar for anonymisation of personal data under the GDPR is high.

If it is possible for an organisation to identify an individual in the data, directly or indirectly (e.g. by combining it with other data held by the organisation or other data provided by the individual), the organisation must respond to requests from individuals to exercise their rights under the GDPR (assuming that the organisation has taken reasonable measures to verify their identity and no other exceptions apply). The guidance recognises that the use of personal data with AI may sometimes make it harder to fulfil individual rights, but warns that requests should not be regarded as manifestly unfounded or excessive merely because they are harder to fulfil in the context of AI. The guidance also provides further detail about how businesses should comply with specific individual rights requests in the context of AI.

(b) Businesses should consider the requirements necessary to support a meaningful human review of any decisions made by, or with the support of, AI using personal data:

There are specific provisions in the GDPR (particularly Article 22 GDPR) covering individuals' rights where processing involves solely automated individual decision-making, including profiling, with legal or similarly significant effects. Businesses that use such decision-making must tell individuals whose data they are processing that they are doing so for automated decision-making and give them "meaningful information about the logic involved, as well as the significance and the envisaged consequences" of the processing. The ICO and the European Data Protection Board have both previously published detailed guidance on the obligations concerning automated individual decision-making which can be of further assistance.

The GDPR requires businesses to implement suitable safeguards, such as the right to obtain human intervention, to express one's point of view, to contest the decision or to obtain an explanation of the logic behind it. The guidance mentions two particular reasons why AI decisions might be overturned: (i) if the individual is an outlier and their circumstances are substantially different from those considered in the training data, and (ii) if the assumptions in the AI model can be challenged, e.g. because of specific design choices. Businesses should therefore consider the requirements necessary to support a meaningful human review of any solely automated decision-making process (including interpretability requirements, training of staff and giving staff appropriate authority). The guidance from the ICO and The Alan Turing Institute on Explaining decisions made with AI considers this issue in further detail (for more information on that guidance, please see our alert here).

In contrast, decisions that are not fully automated but for which the AI system provides support to a human decision-maker do not fall within the scope of Article 22 GDPR. However, the guidance is clear that a decision does not fall outside of the scope of Article 22 just because a human has "rubber-stamped" it and the human decision-maker must have a meaningful role in the decision-making process to take the decision-support tool outside the scope of Article 22.

The guidance also warns that meaningful human oversight requires businesses to address the risks of automation bias by human reviewers (i.e. relying on the output generated by the decision-support system without using their own judgment) and the risks of lack of interpretability (i.e. outputs from AI systems that are difficult for a human reviewer to interpret or understand, for example in deep-learning AI models). The guidance provides some suggestions as to how such risks might be addressed, including by considering these risks when designing or procuring AI systems, by training staff and by effectively monitoring the AI system and the human reviewers.
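
One hypothetical way to monitor for automation bias (this metric is illustrative and not prescribed by the guidance) is to track how often human reviewers depart from the AI system's recommendation; a near-zero override rate sustained over time can be a sign of rubber-stamping rather than meaningful review:

```python
# Sketch: measuring how often human reviewers override the AI recommendation.
def override_rate(decisions):
    """`decisions` is an iterable of (ai_recommendation, human_decision) pairs."""
    decisions = list(decisions)
    overrides = sum(1 for ai, human in decisions if human != ai)
    return overrides / len(decisions)

# Hypothetical review log.
log = [("reject", "reject"), ("reject", "approve"), ("approve", "approve")]
print(f"override rate: {override_rate(log):.0%}")  # 33%
```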

Conclusion

This guidance from the ICO is another welcome step for the rising number of businesses that use AI systems in their day-to-day operations. It also provides more clarity on how businesses should interpret their data protection obligations as they apply to AI. This is especially important because this area of compliance is attracting the focus of different regulators.

The ICO mentions "monitoring intrusive and disruptive technology" as one of its three focus areas and AI as one of its priorities for its regulatory approach during the COVID-19 pandemic and beyond. As a result, the ICO is also running a free webinar series in autumn 2020 on various topics covered in the guidance to help businesses achieve data protection compliance when using AI systems. The ICO stated during the AI Accountability and Governance webinar on 22 September 2020 that it is currently developing its AI auditing capabilities so that it can use its powers to conduct audits of AI systems in the future. However, ICO staff on the webinar confirmed that the ICO would take into account the effect of the COVID-19 pandemic before conducting any AI audits.

Other regulators have also taken an interest in the implications of AI. For example, the Financial Conduct Authority is working with The Alan Turing Institute on AI transparency in financial markets. Businesses should therefore follow the guidance from their respective regulators and put in place a strategy for addressing the data protection (and other) risks associated with using AI systems.

