Commentary: Self-driving buses and delivery robots welcomed but who do we blame if AI goes rogue in Singapore?

Posted: April 21, 2021 at 9:59 am

SINGAPORE: Earlier this year, Luda Lee, an AI (artificial intelligence)-powered chatbot, went rogue.

Created by Korean start-up Scatter Lab, Luda was designed to chat naturally with South Korean Facebook users (attracting more than 750,000 users over just 20 days), and to improve based on user data.

Soon after however, Luda Lee began making bigoted and offensive comments against women, minorities, foreigners and people with disabilities.

She even randomly shared the personal data of her users. Her creators apologised but now face lawsuits over the data leaks.

While some consider this a relatively innocuous example, there are cases where more serious harms were caused by AI-made decisions.

In one high-profile example in 2018, a pedestrian was hit and killed by a self-driving Uber car whose sensors had failed to see and avoid her.

Concerns over algorithmic high-frequency trading triggering widespread financial market crashes, such as the flash crash of 2010, have also been raised.

These come as Singapore is ramping up its use of AI in all areas of life. In November 2019, the Government announced its National AI Strategy, which spells out plans to deepen Singapore's use of AI to transform our economy and society.

Just this year, commuters could begin taking driverless buses at the Singapore Science Park and on Jurong Island, while on-demand delivery robots are being trialled in Punggol. Even robot dogs have been patrolling Bishan-Ang Mo Kio Park to ensure safe distancing among park-goers.

While extensive pre-trials would have been conducted, alongside safety precautions taken during their roll-out, Murphy's Law dictates that we ask: In the unlikely event that serious harms, whether physical, emotional or financial, occur, on whom (or what) does legal blame lie, and on what basis?

FINDING THE SMOKING GUN

These questions were the focus of a recent law reform report published by the Singapore Academy of Law's Law Reform Committee, as part of a series looking at the impact of robotics and AI on the law.

As this report notes, these questions sometimes have relatively straightforward answers. For instance, if a malicious individual deliberately programmes a delivery robot to break into someone's house, or disrupts the signals to a driverless bus, causing it to veer off the road and crash, most would agree that individual should be held liable.

In addition, if somebody sustains serious injury or dies from the individual's deliberate actions, some form of criminal punishment would not seem unfair. Indeed, criminal laws already exist to deal with such issues.

Save for some tweaks, present laws could still tackle cases of intentional harm, even in an AI-powered world.

WHAT HAPPENS WHEN NO HUMAN INTENDED THE HARM?

Things get trickier, however, in situations where a human did not intend the harm that arose. This is particularly so as AI systems become more autonomous, and humans' roles in their operation and supervision diminish.

Already, driverless vehicles present such a conundrum today, given that they operate at speeds that may not leave users time to take control and prevent the harm. What then?

An instinctive response might be to say the entities responsible for the system should be punished. But which entities? There are usually multiple parties involved in the development and deployment of AI systems.

Should the liable party be the one that built the system, programmed the system's code, trained the system, put it on the market, or deployed the system?

Putting aside challenges of identifying harmful intent, it can be tricky to pinpoint the blameworthy party in this chain. Pinning criminal liability on all of them would also likely have the counterproductive effect of discouraging innovation: the legal equivalent of using a sledgehammer to crack a nut.

This same challenge besets the use of criminal negligence laws in situations where an offender (an individual or company) carelessly causes harm, even though no harm was intended.

Does this mean that serious harms could be inflicted by robots and AI systems, with no one being held criminally liable?

Some take this view, preferring the use of regulatory penalties such as censures, improvement notices and fines to promote the safe and responsible use of AI systems. The argument goes that threatening criminal liability on those who create AI systems for harms they didn't foresee deters the development of new, potentially game-changing technologies.

Others counter that some cases of harm are so serious that criminal laws are needed to ensure that someone or something is held accountable for the damage done, to reflect society's abhorrence of such harmful conduct and to set a strong deterrent.

These questions and trade-offs are matters that policymakers and society need to deeply consider.

WEIGHING THE ALTERNATIVES

One possible approach could be to impose specific duties on designated entities to take all reasonable measures necessary to ensure the AI system's safety.

These entities would risk criminal penalties if they fail, much as worksite operators are required to ensure the safety of workers on those sites.

Another more radical solution could be to impose criminal liability on and punish the AI system itself, particularly with highly-advanced autonomous systems. After all, is it not the system that took the decision to act in a harmful way?

This approach is not unheard of: The European Parliament has suggested it be considered further, and Saudi Arabia even recognised the robot Sophia as the world's first robot citizen just a few years ago.

However, such a solution does appear impractical in today's legal systems, which are shaped primarily to regulate human behaviour, and given the present state of technology. After all, what purpose would it serve today if a driverless bus on Jurong Island or a delivery robot in Punggol were charged, convicted and sentenced to prison, fined, or even put to death?

But as AI systems become more and more sophisticated, these questions are no longer the preserve of science fiction.

A silver bullet or one-size-fits-all solution is unlikely. Different technologies and different contexts will likely require different approaches to whether, how and against whom to apply criminal law.

Policy and ethical balances will need to be struck. Whether from a legal, policy or broader societal perspective, these are not issues for tomorrow but today.

Josh Lee Kok Thong is a member of the Singapore Academy of Law's Law Reform Committee's Subcommittee on Robotics and AI. He is also the co-founder of LawTech.Asia and the founding chairperson of the Asia-Pacific Legal Innovation and Technology Association. Simon Constantine was formerly Deputy Research Director, Law Reform at the SAL.
