Fujitsu forms AI ethics and governance office to challenge the use of AI and ML – Electropages

Posted: February 9, 2022 at 2:08 am

Recently, Fujitsu announced that it will be opening an AI ethics and governance office to address the growing concerns of AI and its use in everyday life. What challenges does AI present, what will Fujitsu do, and why is AI regulation potentially futile?

While practical AI systems have only been around for a decade, it is astonishing how far they have come. What used to be algorithms that could learn to recognise basic shapes can now collect terabytes of data, correlate data sets that have no apparent relationship, and use predictive analysis to make decisions. For example, AI is quickly being paired with sensors in industrial environments to predict when machinery requires repair before it breaks down. This allows production cycles to be better organised, prevents sudden breakdowns, and gives operators time to prepare.
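The predictive-maintenance idea above can be sketched very simply: watch a stream of sensor readings and flag the machine when a rolling average drifts out of its healthy range. The function name, the vibration units, and the threshold below are illustrative assumptions, not any vendor's actual system.

```python
# Minimal predictive-maintenance sketch: flag a machine for service when
# the rolling mean of its vibration readings exceeds a threshold.
# Names, units (mm/s), and the threshold are illustrative assumptions.
from collections import deque

def needs_service(readings, window=5, threshold=2.5):
    """Return True once the rolling mean of the last `window`
    vibration readings exceeds `threshold`."""
    recent = deque(maxlen=window)
    for r in readings:
        recent.append(r)
        if len(recent) == window and sum(recent) / window > threshold:
            return True
    return False

# A healthy machine stays below the threshold; a worn bearing drifts above it.
healthy = [1.0, 1.1, 0.9, 1.0, 1.2, 1.1, 1.0]
worn = [1.0, 1.5, 2.0, 2.6, 3.1, 3.4, 3.8]
print(needs_service(healthy))  # False
print(needs_service(worn))     # True
```

Real deployments replace the fixed threshold with a model trained on historical failure data, but the principle is the same: detect drift early enough to schedule a repair.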

But as with any other technology, AI also presents challenges that must be addressed before it becomes widespread; otherwise, it may prove impossible to draw lines that should not be crossed once those lines have already been crossed.

One of the biggest challenges AI faces is its ability to make decisions autonomously. While a machine that can think for itself is great when used to predict machine failure, it is not so great when its choices affect the lives of individuals. For example, an AI could be created to automatically determine the cost of insurance for individuals. But if this AI is allowed to take protected traits (such as race, gender, and orientation) into account, it may encode stereotypes and unfairly characterise individuals. This is made worse by the fact that such a system would not comprehend the moral impact of its decisions, and its inability to recognise that people have emotions means its choices would be fundamentally uncaring, cold, and logical.
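A naive first step toward the safeguard described above is simply stripping protected traits from applicant records before they ever reach a pricing model. The field names below are hypothetical, and it is worth noting that this alone is not sufficient: proxy features such as a postcode can still correlate with the removed traits.

```python
# Illustrative sketch: remove protected traits from an applicant record
# before it is passed to a pricing model. Field names are hypothetical.
PROTECTED = {"race", "gender", "orientation"}

def scrub(applicant):
    """Return a copy of the applicant record without protected traits."""
    return {k: v for k, v in applicant.items() if k not in PROTECTED}

applicant = {"age": 34, "postcode": "SW1", "gender": "F", "claims": 0}
print(scrub(applicant))  # {'age': 34, 'postcode': 'SW1', 'claims': 0}
```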

Another challenge is that AI's ability to think allows it to replace humans in many jobs, which could quickly give rise to mass unemployment. While many economic analysts repeatedly state that automation and AI will replace old jobs with new ones, they often misunderstand the nature of automation and AI. If AI systems can replace service-sector jobs such as office workers, receptionists, and shop assistants, then there are virtually no alternative jobs available, and those jobs that are created will be for high-end engineers and robotics specialists.

AI also poses challenges concerning data privacy. For AI to improve itself (i.e., learn), it needs new data that it hasn't seen before, and such data is generated en masse by the public every day, whether through phone calls, social media, or browsing the internet. This data is extremely valuable, but it is also highly personal, and it must be handled with extreme care. If such data were leaked or hacked, it could expose people's conversations, search histories, and preferences to the world, and such data can be abused in an alarming number of ways, including blackmail.

Recently, Fujitsu announced that it will be opening a new office to address the rising concerns around AI, including its development and deployment. The new office is headed by Junichi Arahori, and its initial goal will be to ensure that leading-edge AI technologies are deployed ethically and safely.

The ethics and practices the office follows will be based on existing practices and legal frameworks already taking shape worldwide, including the EU's proposed AI legislation that would ban the technology's use in key applications. Taking action on AI ethics early makes it easier to prevent misuse in the future, and limiting AI applications now avoids the large cost of having to strip AI systems out of future designs.

Until now, ethics in AI has been little more than a talking point, and just about anyone could integrate AI into any application without repercussions or much thought. The formation of the Fujitsu AI Ethics and Governance Office marks the start of taking AI ethics seriously.

As much as we may want to regulate AI, a fundamental problem that plagues AI development can never be ignored: if we don't do it, someone else will. Simply put, while the west introduces legislation to stop research and development into unethical AI, other nations (such as China) can use that to their advantage and develop vastly superior AI.

As this scenario is unacceptable for the west, there is no option left but to develop AI that is at least on par, which means developing unethical AI systems. The general public would not like the idea of AI built for the sole purpose of identifying targets via drones, or AI that predicts future crimes and then uses investigative powers to spy on individuals who have not actually committed any crime.

Thus, we conclude that western governments will have to resort to developing such AI in secret, despite those very same governments outlining regulations that prevent the development and use of unethical AI. Secret development of AI defeats the purpose of such legislation, and we are therefore left with an unsettling choice between two options: either accept that we cannot regulate AI and allow government resources to create potentially unethical AI, or create a society that outright rejects the use of AI and fights against nations that do use it unethically, in a scenario not too far from the plot of Terminator, with the unethical-AI nations playing the role of Skynet.
