Artificial intelligence and transparency in the public sector – Lexology

The Centre for Data Ethics and Innovation (CDEI) has published its review into bias in algorithmic decision-making: how to use algorithms to promote fairness, not undermine it. We wrote recently about the report's observations on good governance of AI. Here, we look at the report's recommendations around transparency of artificial intelligence and algorithmic decision-making used in the public sector (we use AI here as shorthand for both).

The need for transparency

The public sector makes decisions which can have significant impacts on private citizens, for example decisions related to individual liberty or entitlement to essential public services. The report notes that there is increasing recognition of the opportunities offered by the use of data and AI in decision-making. Whether or not those decisions are made using AI, transparency remains important to ensuring they are fair and accountable.

However, in our view the report identifies three particular difficulties in applying transparency to public sector use of AI.

First, the risks are different. As the report explains at length, there is a risk of bias when using AI. For example, where a subgroup of people is small, data used to make generalisations can result in disproportionately high error rates for that minority group. In many applications of predictive technologies, false positives may have limited impact on the individual. In particularly sensitive areas, however, false negatives and false positives both carry significant consequences, and biases may mean certain people are more likely to experience these negative effects. The risk of using AI can be particularly great for decisions made by public bodies, given the significant impacts those decisions can have on individuals and groups.

Second, the CDEI's interviews found that it is difficult to map how widespread algorithmic decision-making is in local government. Without transparency requirements, it is more difficult to see when AI is used in the public sector (a gap which risks suggesting intentional opacity; see our previous article on the widespread use by local councils of algorithmic decision-making), to see how the risks are managed, or to understand how decisions are made.

Third, there are already several transparency requirements on the public sector (think publication of internal decision-making guidance, or equality impact assessments), but public bodies may find it unclear how some of these should be applied in the context of AI (data protection is a notable exception, given guidance from the Information Commissioner's Office).

What is transparency?

What transparency means depends on the context. Transparency doesn't necessarily mean publishing algorithms in their entirety; that is unlikely to improve understanding of, or trust in, how they are used. And the report recognises that some citizens may make decisions, rightly or wrongly, based on what they believe the published algorithms mean.

The report sets out useful requirements to bear in mind when considering what type of transparency is desirable.

Recommendation - transparency obligation

In order to clarify what is meant by transparency, and to improve it, the report recommends:

Government should place a mandatory transparency obligation on all public sector organisations using algorithms that have a significant influence [i.e. that affect the outcome in a meaningful way] on significant decisions [i.e. those with a direct impact, most likely an adverse legal impact or one that otherwise significantly affects the individual] affecting individuals. Government should conduct a project to scope this obligation more precisely, and to pilot an approach to implement it, but it should require the proactive publication of information on how the decision to use an algorithm was made, the type of algorithm, how it is used in the overall decision-making process, and steps taken to ensure fair treatment of individuals.

Some exceptions will be required, such as where transparency risks compromising outcomes or intellectual property, or for security and defence.

Further clarification of the obligation, such as the meaning of "significant decisions", will also be required. As a starting point, though, the report anticipates what a mandatory transparency publication should include.

The report expects that identifying the right level of information on the AI will be the most novel aspect. The CDEI expects that other examples of transparency may be a useful reference, including the Government of Canada's Algorithmic Impact Assessment, a questionnaire designed to help organisations assess and mitigate the risks associated with deploying an automated decision system (and which we referred to in a recent post about global perspectives on regulating for algorithmic accountability).

A public register?

Falling short of an official recommendation, the CDEI also notes that the House of Lords Science and Technology Select Committee and the Law Society have both recently recommended that parts of the public sector should maintain a register of algorithms in development or use (these echo calls from others for such a register as part of a discussion on the UK's National Data Strategy). However, the report notes the complexity in achieving such a register and therefore concludes that "the starting point here is to set an overall transparency obligation, and for the government to decide on the best way to coordinate this as it considers implementation" with a potential register to be piloted in a specific part of the public sector.

"Government is increasingly automating itself with the use of data and new technology tools, including AI. Evidence shows that the human rights of the poorest and most vulnerable are especially at risk in such contexts. A major issue with the development of new technologies by the UK government is a lack of transparency." – Philip Alston, UN Special Rapporteur on Extreme Poverty and Human Rights

https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/939109/CDEI_review_into_bias_in_algorithmic_decision-making.pdf

