EU struggles to go from talk to action on artificial intelligence

The EU is moving tentatively towards first-of-its-kind rules on the ways that companies can use artificial intelligence (AI), amid fears that the technology is galloping beyond regulators' grasp.

Supporters of regulation say proper human oversight is needed for a rapidly developing technology that presents new risks to individual privacy and livelihoods. Others warn that the new rules could stifle innovation with lasting economic consequences.

"We aren't Big Brother China or Big Data US. We have to find our own way," said German MEP Axel Voss, who is about to take his seat on the European Parliament's new special committee on AI.

"Having in mind that the AI tech is now of global strategic relevance, we have to be careful about over-regulating. There's competition around the world. If we would like to play a role in the future, we need to do something that's not going to the extreme," said Voss, a member of the centre-right European People's Party.

In February, the European Commission presented its AI white paper, which states that new technologies in critical sectors should be subject to legislation. It likened the current situation to "the Wild West" and said it would focus on "high-risk" cases. The debate over the paper's many elements will last through 2020 and into next year, when the EU executive will present its legislative proposal.

Researchers and industry are battling for influence over the AI policy.

"There's an incredible opportunity here to begin to tackle high-risk applications of AI. There's also this real chance to set standards for the entire world," said Haydn Belfield, research associate and academic project manager at Cambridge University's Centre for the Study of Existential Risk.

Policymakers and the public are concerned about applications such as autonomous weapons and government social scoring systems similar to those under development in China. Facial scanning software is already creeping into use in Europe, operating with little oversight.

"You don't have to be an expert in AI to see there's a really high risk to people's life and liberty from some of these new applications," said Belfield.

Big tech companies, which have made large investments in new AI applications, are wary of the EU's plans to regulate.

Google has criticised measures in the commission's AI white paper, which it says could harm the sector. Last year, the company issued its own guidance on the technology, arguing that although it comes with hazards, existing rules and self-regulation "will be sufficient" in the vast majority of instances.

In its response to the commission's proposal, Microsoft similarly urged the EU to rely on existing laws and regulatory frameworks as much as possible. However, the US tech company added that developers should be transparent about limitations and risks inherent in the use of any AI system. "If this is not done voluntarily, it should be mandated by law, at least for high-risk use cases."

Thomas Metzinger, professor of theoretical philosophy at the University of Mainz and a member of the commission's 52-strong AI expert group, says he's close to despondency because of how long it's taking to regulate the field.

"We can have clever discussions but what is actually being done? I have long given up on having an overview of the 160 or so ethics guidelines for AI out there in the world," he said.

Vague and non-committal guidelines

Metzinger has been strongly critical of the make-up of the commission's AI advisory group, which he says is tilted towards industry interests. "I'm disappointed by what we've produced. The guidelines are completely vague and non-committal. But it's all relative. Compared to what China and US have produced, Europe has done better," he said.

Setting clear limits for AI is in step with Brussels' more hands-on approach to the digital world in recent years. The commission is also setting red lines on privacy, antitrust and harmful internet content, an approach that has inspired tougher rules elsewhere in the world.

Some argue that this prioritising of data protection, through the EU's flagship General Data Protection Regulation (GDPR), has harmed AI growth in Europe.

The US and China account for almost all private AI investment in the world, according to Stanford University's AI Index report. The European country with the most meaningful presence in AI is the UK, which has left the bloc and has hinted that it may detach itself from EU data protection laws in the future.

"GDPR has slowed down AI development in Europe and potentially harmed it," says Sennay Ghebreab, associate professor of socially intelligent systems at the University of Amsterdam.

"If you look at medical applications of AI, doctors are not able to use this technology yet [to the fullest]. This is an opportunity missed," he said. "The dominating topics are ethics and privacy and this could lead us away from discussing the benefits that AI can bring."

"GDPR is a very good piece of legislation," said Voss. But he agrees that it hasn't found the best balance between innovation and privacy. "Because of its complexity, people are sometimes giving up, saying it's easier to go abroad. We are finding our own way on digitisation in Europe but we shouldn't put up more bureaucratic obstacles."

Catching up

Supporters of AI legislation are concerned that it will take too long to regulate the sectors where the technology is deployed.

"One highly-decorated legal expert told me it would be around nine years before a law was enforceable. Can you imagine where Google DeepMind will be in five years?" said Metzinger, referring to the London lab owned by Google that is at the forefront of bringing AI to sectors like healthcare.

MEPs too are mindful of the need for speed, said Voss. "It's very clear that we can't take the time we took with the GDPR. We won't catch up with the competition if it takes such a long time," he said. From initial consultation to implementation, GDPR took the best part of a decade to put together.

"Regulation could be a fake, misleading solution," Ghebreab warned. It's the companies that use AI, rather than the technology itself, that need to be regulated. In general, top-down regulation is unlikely to lead to community-minded AI solutions. "AI is in the hands of big companies in the US, in the hands of the government in China, and it should be in the hands of the people in Europe," Ghebreab said.

Ghebreab has been working on AI since the 1990s and has recently started a lab exploring socially minded applications, with backing from the city of Amsterdam.

As an example of how AI can help people, he points to an algorithm developed by the Swiss government and a team of researchers in the US that helps with the relocation of refugees. It aims to match refugees with regions that need their skills. "Relocation today is based on capacity rather than taking into account refugees' education or background," he said.
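The matching at the heart of such a system can be framed as a classic assignment problem. The sketch below is a minimal illustration of that idea, not the actual algorithm the article refers to: the names, the score matrix and the solver choice are invented assumptions, and a real system would estimate the scores from historical employment outcomes.

```python
# Minimal sketch: skills-based refugee-to-region matching framed as an
# assignment problem. All names and scores here are illustrative
# assumptions, not the system described in the article.
import numpy as np
from scipy.optimize import linear_sum_assignment

refugees = ["family_A", "family_B", "family_C"]
regions = ["region_1", "region_2", "region_3"]

# match_score[i][j]: predicted chance that refugee i finds work in
# region j (a real system would estimate this from outcome data).
match_score = np.array([
    [0.60, 0.20, 0.35],
    [0.15, 0.55, 0.40],
    [0.30, 0.45, 0.70],
])

# Maximise total expected employment by minimising the negated scores.
rows, cols = linear_sum_assignment(-match_score)
for i, j in zip(rows, cols):
    print(f"{refugees[i]} -> {regions[j]} (score {match_score[i, j]:.2f})")
```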

Interim solutions for AI oversight are not to everyone's taste.

"Self-regulation is fake and full of superficial promises that are hard to implement," said Metzinger.

"The number one lesson I've learned in Brussels is how contaminated the whole process is by industrial lobbying. There's a lot of ethics-washing that is slowing down the path to regulation," he said.

Metzinger is aggrieved that, of the 52 experts picked to advise the commission on AI, only four were ethicists. "Twenty-six are direct industry representatives," he said. "There were conflicts, and people including myself did not sign off on all our work packages." Workshops organised with industry lacked transparency, said Metzinger.

In response, commission spokesman Charles Manoury said the expert panel was formed on the basis of an open selection process, following an "open call for expressions of interest".

Digital Europe, which represents tech companies such as Huawei, Google, Facebook and Amazon, was also contacted for comment.

Adhering to AI standards is ultimately in companies' interests, argues Belfield. "After the techlash we've been seeing, it will help to make companies seem more trustworthy again," he said.

Developing trustworthy AI is where the EU can find its niche, according to a recent report from the Carnegie Endowment for International Peace. "Designed to alleviate potential harm as well as to permit accountability and oversight, this vision for AI-enabled technologies could set Europe apart from its global competitors," the report says.

The idea has particular traction in France, whose government, alongside Canada's, pushed for the creation of a new global forum on ethical AI development.

Public distrust is the fundamental brake on AI development, according to the UK government's Centre for Data Ethics and Innovation. "In the absence of trust, consumers are unlikely to use new technologies or share the data needed to build them, while industry will be unwilling to engage in new innovation programmes for fear of meeting opposition and experiencing reputational damage," its AI Barometer report says.

Banning AI

One idea floated by the commission earlier this year was a temporary ban on the use of facial recognition in public areas for up to five years.

There are grave concerns about the technology, which uses surveillance cameras, computer vision, and predictive imaging to keep tabs on large groups of people.

"Facial recognition is a genius technology for finding missing children but a heinous technology for profiling, propagating racism, or violating privacy," said Oren Etzioni, professor of computer science and CEO of the Allen Institute for Artificial Intelligence in Seattle.

Several state and local governments in the US have stopped law enforcement officers from using facial recognition databases. Trials of the technology in Europe have provoked a public backlash.

Privacy activists argue the technology is potentially authoritarian because it captures images without consent. It can also have a racial bias: if a system is trained primarily on white male faces, with far fewer examples of women and people of colour, it will be less accurate for those groups.
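One way such a bias shows up in practice is in a per-group accuracy audit. The snippet below is a minimal, synthetic sketch of that kind of check; the group names and error rates are invented for illustration, while a real audit would evaluate a trained model on a demographically annotated test set.

```python
# Minimal, synthetic sketch of a per-group accuracy audit for a
# recognition system; group names and error rates are invented.
import numpy as np

rng = np.random.default_rng(0)
groups = np.array(["well_represented"] * 800 + ["under_represented"] * 200)

# Simulate a model that is right 95% of the time on the group it saw
# most of in training, but only 80% on the under-represented group.
correct = np.where(
    groups == "well_represented",
    rng.random(groups.size) < 0.95,
    rng.random(groups.size) < 0.80,
)

for g in np.unique(groups):
    mask = groups == g
    print(f"{g}: accuracy {correct[mask].mean():.2%} (n={mask.sum()})")
```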

Despite its flaws, facial recognition has potential for good, said Ghebreab, who doesn't support a moratorium. "We have to be able to show how people can benefit from it; now the narrative is how people suffer from it," he said.

Voss doesn't back a ban for particular AI applications either. "We should have some points in the law saying what you can and can't do with AI, otherwise you'll face a ban. We should not think about an [outright] ban," he said.

Metzinger favours limiting facial recognition in some contexts, but admits it's "very difficult to tease this apart". "You would still want to be able, for counter-terrorism measures, to use the technology in public spaces," he said.

The Chinese government has controversially used the tool to identify pro-democracy protesters in Hong Kong, and for racial profiling and control of Uighur Muslims. Face scans in China are used to pick out and fine jaywalkers, and citizens in Shanghai will soon have to verify their identity in pharmacies by scanning their faces.

"It comes back to whom you trust with your data," Metzinger said. "I would basically still trust the German government; I would never want to be in the hands of the Hungarian government, though."

Defence is the other big, controversial area for AI applications. The EU's white paper mentions military AI just once, in a footnote.

Some would prefer if the EU banned the development of lethal autonomous weapons altogether, though few expect this to happen.

"There is a lot we don't know. A lot is classified. But you can deduce from investment levels that there's much less happening in Europe [on military AI] than in the US and China," said Amy Ertan, cyber security researcher at the University of London.

Europe is not a player in military AI, but it is taking steps to change this. The European Defence Agency is running 30 projects that include AI aspects, with more in planning, said the agency's spokeswoman Elisabeth Schoeffmann.

The case for regulation

Author and programmer Brian Christian says regulating AI is a cat-and-mouse game.

"It reminds me of financial regulation, which is very difficult to write because the techniques change so quickly. By the time you pass the law, the field has moved on," he said.

Christian's new book looks at the urgent "alignment problem", where AI systems don't do what we want or what we expect. "A string of jaw-dropping breakthroughs have alternated with equally jaw-dropping disasters," he said.

Recent examples include Amazon's AI-powered recruiting system, which filtered out applications that mentioned women's colleges and showed preference for CVs with linguistic habits more typical of men, such as use of the words "executed" and "captured", said Christian. After several repairs failed, engineers quietly scuttled it entirely in 2018.
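The failure mode is easy to reproduce in miniature: a text model trained on historically biased hiring decisions learns gendered proxy terms rather than job-relevant signals. The sketch below is an invented toy example of this effect; the four-line corpus and its labels are synthetic, not Amazon's actual data or system.

```python
# Invented toy example: a CV classifier trained on biased hiring labels
# learns gendered proxy words. The corpus and labels are synthetic and
# are not Amazon's data or system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "executed migration plan captured requirements",  # hired in biased data
    "executed roadmap captured market share",         # hired
    "led outreach at womens college",                 # rejected
    "organised womens college alumni network",        # rejected
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Words like "executed"/"captured" get positive weight and "womens"/
# "college" negative weight -- proxies, not job-relevant signals.
for word, coef in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{word:12s} {coef:+.2f}")
```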

Then there was the recurring issue with Google Photos labelling pictures of black people as gorillas; after a series of fixes didn't work, engineers resorted to manually deleting the "gorilla" label altogether.

Stories like these illustrate why discussions on ethical responsibility have only grown more urgent, Christian said.

"If you went to one of the major AI conferences, ethics and safety are now the most rapidly growing and dynamic subsets of the field. That's either reassuring or worrying, depending on how you view these things."

Europe's data privacy rules have helped ethics and safety move in from the fringes of AI, said Christian. "One of the big questions for AI is transparency and explainability," he said. The GDPR introduces a right to know why an AI system denied you a mortgage or a credit card, for example.
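For a simple, interpretable model, the kind of explanation this points towards can be read straight off the per-feature contributions to a decision. The sketch below is a minimal illustration under that assumption; the feature names, weights and applicant values are all invented, and real credit models, like the legal meaning of GDPR's transparency provisions, are considerably more complicated.

```python
# Minimal sketch of a feature-contribution "explanation" for a linear
# credit model; feature names, weights and applicant values are invented.
import numpy as np

features = ["income_norm", "debt_ratio", "missed_payments", "years_employed"]
weights = np.array([1.8, -2.5, -1.2, 0.6])  # hypothetical trained weights
bias = -0.3

applicant = np.array([0.4, 0.7, 2.0, 1.5])
score = weights @ applicant + bias
print("decision:", "approved" if score >= 0 else "denied", f"(score {score:+.2f})")

# Rank features by how strongly each pushed the decision up or down.
contributions = weights * applicant
for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
    print(f"{name:16s} contribution {c:+.2f}")
```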

The problem, however, is that AI decisions are not always intelligible to those who create these systems, let alone to ordinary people.

"I heard about lawyers at AI companies who were complaining about the GDPR and how it demanded something that wasn't scientifically possible. Lawyers pleaded with regulators. The EU gave them two years' notice on a major research problem," Christian said.

"We're familiar with the idea that regulation can constrain, but here is a case where a lot of our interest in transparency and explanation was driven by a legal requirement no one knew how to meet."
