The EU’s Ambitious AI Regulations: Increasing Trust or Stifling Progress?

Posted: May 4, 2021 at 8:10 pm

European Union (EU) officials have proposed new rules that could restrict and even ban some uses of artificial intelligence (AI) within its borders. That could include some technology developed by U.S.- and China-based tech giants. The rules would be the most significant international effort to regulate the use of AI to date.

The Coordinated Plan on Artificial Intelligence 2021 Review, put forth by the 27-nation bloc, could set a new standard for technology regulation.

If passed, the rules could affect how facial recognition, autonomous vehicles, and even the algorithms employed in online advertising are used across the EU. They could also limit the use of AI and machine learning in automated hiring, school applications, and credit scoring, and would ban AI outright in situations deemed too risky, including government social scoring systems in which individuals are judged on their behavior.

This could be the first-ever legal framework on AI, and the EU has said the new Coordinated Plan with Member States would guarantee the safety and fundamental rights of people and businesses, while also strengthening AI uptake, investment, and innovation across the EU.

"With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted," Margrethe Vestager, the European Commission's executive vice president for the digital age, said in a statement. "By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way."

The EU has maintained that the new AI regulations could ensure that Europeans can trust what AI has to offer, and would create flexible rules that could address the specific risks posed by AI-based systems. AI systems considered a clear threat to the safety, livelihoods, and rights of people would be banned.

High-risk uses of AI would include critical infrastructure, including transport, which could put the lives and health of citizens at risk; educational and vocational training, such as the scoring of exams; and law enforcement, where it could interfere with people's fundamental rights. In those cases, the high-risk AI systems would be subject to strict obligations before they could be put on the market, and would require logging of activity to ensure traceability of results, high-quality datasets, a high level of security and accuracy, and appropriate human oversight measures to minimize risk.

"AI is a means, not an end," explained EU Commissioner for Internal Market Thierry Breton.

"It has been around for decades but has reached new capacities fueled by computing power," added Breton. "This offers immense potential in areas as diverse as health, transport, energy, agriculture, tourism or cyber security. It also presents a number of risks. Today's proposals aim to strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use."

This isn't the first time the EU has attempted to regulate new technology more aggressively than anywhere else in the world. Those efforts have mainly focused on privacy, including search results and how personal information can be used by tech firms.

Now it is addressing the developing technologies of AI and machine learning, but the question is whether such a hard line could limit the efforts of the tech giants. Or is this the best course of action to ensure privacy and security and to maintain a fair and level playing field for all involved?

"First and foremost, allowing technologies to be developed unilaterally without any oversight is an effective vote for market-dominating behavior," technology industry analyst Charles King of Pund-IT told ClearanceJobs.

"We've seen it happen among tech industry behemoths, including Facebook and Google, and in countries like China and government agencies in the U.S. and elsewhere," King added.

The EU holds all the cards right now, and the tech firms will have to play by its rules or ignore the European market. The latter really isn't an option.

"Along with trying to place some restraints on potentially damaging behavior, the EU is also acting from a position of historic strength," explained King. "The organization has aggressively pursued businesses that it believes are acting against the interests of consumers and markets, and has passed regulations, including GDPR, that have successfully influenced global businesses and markets."

One of the biggest concerns is whether such strict regulations will simply make it too hard for businesses to play by the EU's rules. AI could be one of those areas where regulation seems stifling at first but in the end ensures that control over the technology is, in fact, maintained.

"The writer and futurist Arthur Clarke famously asserted, 'Any sufficiently advanced technology is indistinguishable from magic,'" said Jim Purtilo, associate professor of computer science at the University of Maryland. "If that's the definition, then today's decision systems based on artificial intelligence technologies surely qualify as magic. They are subtle, tremendously complex methods which defy full explanation as to how a given result was computed."

Part of the issue is in understanding exactly what is entailed by AI. While it is easy to think of the science-fiction version of a thinking computer or android, in reality it is just ever-more complicated algorithms.

"There has always been some aura of mystery to it. In some sense, AI is the area that stops being AI once we understand it," added Purtilo.

"Many fairly ordinary forms of computing, for example logic systems and computer memory, once fell under the heading of AI research," he noted. "What differs today is the scale of decisions that people will leave to machines. It used to be that at least programmers had an understanding of how their programs computed a result, but with AI the programmers generally can't work back to justify how an outcome was reached. That program's accuracy is not a big deal when all it is doing is tagging photos of your friends in an album on your phone, but it becomes a very big deal for individuals who fall under suspicion of police based on wide deployment of facial recognition technology."

Given that understanding of what is, and to some extent what is not, AI, the question becomes whether the EU is taking its regulation of the technology too far.

"I'm not particularly afraid of computer programs, but I'm terrified of the bureaucrats who use them thinking they escape responsibility by pretending to be mere servants of science, which, by virtue of AI methods, is somehow settled even if not explained," warned Purtilo.

"I thus see the EU's move as being less about AI than it is about policy," Purtilo told ClearanceJobs. "Government practices should offer transparency and accountability, but as AI methods offer neither, these proposed regulations represent a first attempt to push back at opaque technologies that cloak the basis for impactful decisions."

That could stifle AI's development, or perhaps simply allow it to be better controlled and managed.

"Whether or not this latest effort will succeed is anyone's guess," said Pund-IT's King. "But the EU is taking action because it believes it should and because it can."
