Rules urgently needed to oversee police use of data and AI – report – The Guardian

National guidance is urgently needed to oversee the police's use of data-driven technology amid concerns that it could lead to discrimination, a report has said.

The study, published by the Royal United Services Institute (Rusi) on Sunday, said guidelines were required to ensure that the use of data analytics, artificial intelligence (AI) and computer algorithms developed legally and ethically.

Forces' expanding use of digital technology to tackle crime was in part driven by funding cuts, the report said.

Officers are battling against information overload as the volume of data around their work grows, while there is also a perceived need to take a preventative rather than reactive stance to policing.

Such pressures have led forces to develop tools to forecast demand in control centres, triage investigations according to their solvability and to assess the risks posed by known offenders.

Examples of the latter include Hampshire police's domestic violence risk-forecasting model, Durham police's Harm Assessment Risk Tool (Hart) and West Midlands police's draft integrated offender management model.

The report, commissioned by the Centre for Data Ethics and Innovation (CDEI), said that while technology could help improve police effectiveness and efficiency, it was held back by the lack of a robust empirical evidence base, poor data quality and insufficient skills and expertise.

While not directly focused on biometric, live facial recognition or digital forensic technologies, the report explored general issues of data protection and human rights underlying all types of police technology.

"It could be argued that the use of such tools would not be necessary if the police force had the resources needed to deploy a non-technological solution to the problem at hand, which may be less intrusive in terms of its use of personal data," the report said.

It advised that an integrated impact assessment was needed to help justify the need for each new police analytics project. Initiatives were often not underpinned by enough evidence as to their claimed benefits, scientific validity or cost-effectiveness, the report said.

The report's authors noted criticism of predictive policing tools being racially biased, but said there was insufficient evidence to assess whether this occurred in England and Wales and whether it resulted in unlawful discrimination.

They said studies claiming to demonstrate racial bias were mostly based on analysis conducted in the US, and that it was unclear whether such concerns would transfer to a UK context.

"However, there is a legitimate concern that the use of algorithms may replicate or amplify the disparities inherent in police-recorded data, potentially leading to discriminatory outcomes," the report said.

For this reason, it said, ongoing tracking of discrimination risk is needed at all stages of a police data analytics project, from problem formulation and tool design to testing and operational deployment.

Roger Taylor, chairman of the CDEI, said: "There are significant opportunities to create better, safer and fairer services for society through AI, and we see this potential in policing. But new national guidelines, as suggested by Rusi, are crucial to ensure police forces have the confidence to innovate legally and ethically."

The report called on the National Police Chiefs Council (NPCC) to work with the Home Office and College of Policing to develop the technology guidelines.

The NPCC lead for information management, Ian Dyson, said it would work with the government and regulators to consider the report's recommendations. He added: "Data-driven technology can help us to keep the public safe. Police chiefs recognise the need for guidelines to ensure legal and ethical development of new technologies and to build confidence in their ongoing use."
