Urgent action needed over artificial intelligence risks to human rights – UN News

Posted: September 16, 2021 at 5:48 am

Urgent action is needed as it can take time to assess and address the serious risks this technology poses to human rights, warned the High Commissioner: "The higher the risk for human rights, the stricter the legal requirements for the use of AI technology should be."

Ms. Bachelet also called for AI applications that cannot be used in compliance with international human rights law to be banned. "Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people's human rights."

On Tuesday, the UN rights chief expressed concern about the "unprecedented level of surveillance across the globe by state and private actors", which she insisted was "incompatible" with human rights.

She was speaking at a Council of Europe hearing on the implications stemming from July's controversy over Pegasus spyware.

The Pegasus revelations were no surprise to many people, Ms. Bachelet told the Council of Europe's Committee on Legal Affairs and Human Rights, in reference to the widespread use of spyware commercialized by the NSO group, which affected thousands of people in 45 countries across four continents.

The High Commissioner's call came as her office, OHCHR, published a report that analyses how AI affects people's right to privacy and other rights, including the rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression.

The document includes an assessment of profiling, automated decision-making and other machine-learning technologies.

"The situation is dire," said Tim Engelhardt, Human Rights Officer, Rule of Law and Democracy Section, who was speaking at the launch of the report in Geneva on Wednesday.

"The situation has not improved over the years but has become worse," he said.

Whilst welcoming the European Union's agreement to strengthen the rules on control, and the growth of international voluntary commitments and accountability mechanisms, he warned: "We don't think we will have a solution in the coming year, but the first steps need to be taken now or many people in the world will pay a high price."

OHCHR Director of Thematic Engagement, Peggy Hicks, added to Mr. Engelhardt's warning, stating "it's not about the risks in future, but the reality today. Without far-reaching shifts, the harms will multiply with scale and speed and we won't know the extent of the problem."

According to the report, States and businesses have often rushed to incorporate AI applications, failing to carry out due diligence. It states that there have been numerous cases of people being treated unjustly due to AI misuse, such as being denied social security benefits because of faulty AI tools, or arrested because of flawed facial recognition software.

The document details how AI systems rely on large data sets, with information about individuals collected, shared, merged and analysed in multiple and often opaque ways.

The data used to inform and guide AI systems can be faulty, discriminatory, out of date or irrelevant, it argues, adding that long-term storage of data also poses particular risks, as data could in the future be exploited in as yet unknown ways.

"Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face," Ms. Bachelet said.

The report also stated that serious questions should be raised about the inferences, predictions and monitoring by AI tools, including seeking insights into patterns of human behaviour.

It found that the biased datasets relied on by AI systems can lead to discriminatory decisions, which pose acute risks for already marginalized groups. "This is why there needs to be systematic assessment and monitoring of the effects of AI systems to identify and mitigate human rights risks," she added.

Biometric technologies are an increasingly go-to solution for States, international organizations and technology companies, and the report identifies them as an area where more human rights guidance is urgently needed.

These technologies, which include facial recognition, are increasingly used to identify people in real-time and from a distance, potentially allowing unlimited tracking of individuals.

The report reiterates calls for a moratorium on their use in public spaces, at least until authorities can demonstrate that there are no significant issues with accuracy or discriminatory impacts and that these AI systems comply with robust privacy and data protection standards.

The document also highlights a need for much greater transparency by companies and States in how they are developing and using AI.

The complexity of the data environment, algorithms and models underlying the development and operation of AI systems, as well as the intentional secrecy of government and private actors, are factors undermining meaningful ways for the public to understand the effects of AI systems on human rights and society, the report says.

"We cannot afford to continue playing catch-up regarding AI, allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact."

"The power of AI to serve people is undeniable, but so is AI's ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us," Ms. Bachelet stressed.
