What role for regulators in developing a credible AI audit industry?

Posted: May 31, 2022

AI audits are used to check and verify that algorithmic systems are meeting regulatory expectations and not producing harms (whether intended or unintended). Globally, the regulatory requirements for AI audits are rapidly increasing:

These audit requirements raise many questions: who should these AI auditors be? What training and qualifications should they have? What standards should algorithmic systems be assessed against? What role should audit play in the context of demonstrating compliance?

A discussion paper canvassing views on the potential roles regulators could have in the development of an AI audit industry has recently been published by the group of four UK regulators with a stake in the digital economy: the telecoms regulator Ofcom, the competition regulator the CMA, the privacy regulator the ICO and the financial regulator the FCA (collectively, the Digital Regulation Co-operation Forum or DRCF).

So why do regulators need to be involved if the market is starting to deliver?

The DRCF says that regulators have an interest in establishing trust in the audit market, so that organisations and people can be sure that audits have credibility. Voluntary standards have an important role, but the DRCF also said that 'there are often pull factors for companies to comply, such as technical standards translating regulatory requirements into product or process design'.

The discussion paper noted recent positive developments in AI auditing tools:

While this nascent audit ecosystem provides a promising foundation, the DRCF expressed concern that it risked becoming a 'wild west' patchwork in which providers could enter the market for algorithm auditing without any assurance of quality.

Why AI auditing is not a tick-box exercise

While AI auditing can draw on the general world of audit, the DRCF points out that AI auditing has its own unique challenges:

Know your types of AI audits

The DRCF says that the starting point to building a credible AI audit industry is to codify the different audit tools, as set out below:

The discussion paper provides the following example of how the three different types of audit might fit together in assessing whether an AI system effectively addresses the risks of hate speech:

A governance audit could review the organisation's content moderation policy, including its definition of hate speech and whether this aligns with relevant legal definitions. The audit could assess whether there is appropriate human oversight and determine whether the risk of system error is appropriately managed through human review.

An empirical audit could involve a 'sock puppet' approach, in which auditors create simulated users, input certain classifications of harmful, harmless or ambiguous content, and assess whether the system outputs align with what would be expected in order to remain compliant (illustrated in the sketch below).

A technical audit could review the data on which the model has been trained, the optimisation criteria used to train the algorithm and relevant performance metrics.
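By way of illustration only, the sketch below shows what the empirical 'sock puppet' step might look like in code. The moderation_system function is a hypothetical stand-in for the system under audit, and the test cases, categories and expected outcomes are invented for the example; a real audit would submit a far larger, carefully curated test set through simulated user accounts and record the system's actual decisions.

```python
# Minimal sketch of an empirical ("sock puppet") audit of a content
# moderation system. The classifier below is a hypothetical placeholder
# for the system under audit.

from collections import Counter

def moderation_system(text: str) -> str:
    """Placeholder for the system under audit: returns 'remove' or 'allow'."""
    blocked_terms = {"hateterm1", "hateterm2"}  # hypothetical terms
    return "remove" if any(t in text.lower() for t in blocked_terms) else "allow"

# Simulated inputs, each tagged with the outcome the auditor expects
# under the organisation's policy and the relevant legal definitions.
test_cases = [
    {"text": "example post containing hateterm1",
     "category": "harmful", "expected": "remove"},
    {"text": "a harmless post about gardening",
     "category": "harmless", "expected": "allow"},
    {"text": "an ambiguous post quoting hateterm2 in order to condemn it",
     "category": "ambiguous", "expected": "allow"},
]

# Run the simulated content through the system and tally outcomes per category.
results = Counter()
for case in test_cases:
    decision = moderation_system(case["text"])
    outcome = "match" if decision == case["expected"] else "mismatch"
    results[(case["category"], outcome)] += 1
    print(f"{case['category']:>9}: expected {case['expected']:>6}, "
          f"got {decision:>6} -> {outcome}")

# An auditor would then compare mismatch rates per category against an
# agreed tolerance before reporting whether the system behaves as expected.
print(dict(results))
```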

The risks of the Big Four auditing Big Tech

While the DRCF supports the professionalisation of AI audit, it also notes concerns that AI audit may settle into a comfortable, captive relationship between the Big Four accounting firms and the big global technology firms.

The discussion paper canvasses proposals to facilitate better audits through introducing specific algorithmic access obligations; in effect, by arming academics and civil society groups to undertake their own audits of AI used by business. The discussion paper said that '[p]roviding greater access obligations for research or public interest purposes and/or by certified bodies could lessen current information asymmetries, improve public trust, and lead to more effective enforcement'.

But the discussion paper also acknowledged that it would be important to carefully consider the costs and benefits of any mandated access to organisations' systems, and canvassed three approaches:

The discussion paper also canvassed approaches which, in effect, crowd-source AI auditing:

The public may also benefit from a way of reporting suspected harms from algorithmic systems, alongside the journalists, academics and civil society actors that already make their concerns known. This reporting could include an incident reporting database that would allow regulators to prioritise audits. It could also comprise some form of popular petition or super complaint mechanism through which the public could trigger a review by a regulator, subject to sensible constraints.

The risk of AI audits that lead nowhere

Audits are only of benefit if there is a broader governance system that can take up the problems an audit discovers and retool the AI system to address them.

The discussion paper canvasses enhanced powers for regulators:

The discussion paper also canvasses self-help remedies for consumers. It notes that, unlike in other areas such as privacy, individuals harmed by poorly performing AI do not necessarily have remedies:

Auditing can indicate to individuals that they have been harmed, for example from a biased CV screening algorithm. It can provide them with evidence that they could use to seek redress. However, there is an apparent lack of clear mechanisms for the public or civil society to challenge outputs or decisions made with algorithms or to seek redress.

So, what specific roles for regulators?

Given the above problems in growing a credible AI audit market, the discussion paper seeks views on six hypotheses on the appropriate roles for regulators:
