How the EU AI Act regulates artificial intelligence: What it means for cybersecurity – CSO Online

According to van der Veer, organizations that fall into the categories above need to do a cybersecurity risk assessment. They must then adhere to the standards set by either the AI Act or the Cyber Resilience Act, the latter being more focused on products in general. That either-or situation could backfire. "People will, of course, choose the act with fewer requirements, and I think that's weird," he says. "I think it's problematic."

When it comes to high-risk systems, the document stresses the need for robust cybersecurity measures. It advocates for the implementation of sophisticated security features to safeguard against potential attacks.

"Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behavior, performance or compromise their security properties by malicious third parties exploiting the system's vulnerabilities," the document reads. "Cyberattacks against AI systems can leverage AI-specific assets, such as training data sets (e.g., data poisoning) or trained models (e.g., adversarial attacks), or exploit vulnerabilities in the AI system's digital assets or the underlying ICT infrastructure. In this context, suitable measures should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure."
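To make the first of those attack classes concrete, here is a minimal, self-contained sketch of training-data poisoning. It is an invented toy, not anything drawn from the AI Act or its guidance; the data, model, and numbers are all illustrative assumptions. An attacker injects mislabeled points into the training set, and a simple classifier's accuracy on clean test data degrades.

```python
# Toy illustration of training-data poisoning (all values invented).
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    """Two Gaussian blobs: class 0 around (-1, -1), class 1 around (+1, +1)."""
    x0 = rng.normal(loc=-1.0, scale=0.7, size=(n // 2, 2))
    x1 = rng.normal(loc=+1.0, scale=0.7, size=(n // 2, 2))
    return np.vstack([x0, x1]), np.array([0] * (n // 2) + [1] * (n // 2))

def train_logreg(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return float(((p > 0.5) == y).mean())

X_train, y_train = make_data(400)
X_test, y_test = make_data(400)

# Baseline: train on clean data.
w, b = train_logreg(X_train, y_train)
print(f"clean accuracy:    {accuracy(w, b, X_test, y_test):.2%}")

# Poisoning: inject 120 points that sit deep inside class-1 territory
# but carry the label 0, dragging the decision boundary toward class 1.
X_pois = rng.normal(loc=+1.0, scale=0.3, size=(120, 2))
y_pois = np.zeros(120, dtype=int)
w_p, b_p = train_logreg(np.vstack([X_train, X_pois]),
                        np.concatenate([y_train, y_pois]))
print(f"poisoned accuracy: {accuracy(w_p, b_p, X_test, y_test):.2%}")
```

Defenses against this kind of manipulation, such as provenance checks and outlier filtering on training data, are the sort of "suitable measures" the document asks providers of high-risk systems to weigh.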

The AI Act has a few other paragraphs that zoom in on cybersecurity, the most important being those in Article 15. This article states that high-risk AI systems must adhere to the "security by design and by default" principle and should perform consistently throughout their lifecycle. The document also adds that compliance with these requirements shall include "implementation of state-of-the-art measures, according to the specific market segment or scope of application."

The same article describes measures that could be taken to protect against attacks. It says that the technical solutions to address AI-specific vulnerabilities shall include, where appropriate, "measures to prevent, detect, respond to, resolve, and control for attacks trying to manipulate the training dataset (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws, which could lead to harmful decision-making."
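The "inputs designed to cause the model to make a mistake" item can be illustrated just as briefly. The sketch below is again a toy with invented weights, not something the Act prescribes; it applies the well-known fast gradient sign method (FGSM) to a small logistic-regression model, where a bounded, deliberately crafted perturbation flips a confident prediction.

```python
# Toy illustration of model evasion via the fast gradient sign method (FGSM).
# The weights, input, and epsilon are fixed, invented values for determinism.
import numpy as np

# A "trained" binary classifier: p(y=1 | x) = sigmoid(w.x + b).
w = np.array([1.0, -2.0, 1.5, -0.5])
b = 0.1

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A benign input the model confidently scores as class 1.
x = np.array([0.5, -0.5, 0.5, -0.5])
y_true = 1.0

# FGSM: for logistic loss, the gradient of the loss w.r.t. the input is
# (p - y) * w, so one step along its sign maximally increases the loss
# per unit of L-infinity perturbation.
p = predict(x)
grad_x = (p - y_true) * w
eps = 0.8                      # attacker's per-feature perturbation budget
x_adv = x + eps * np.sign(grad_x)

print(f"clean score:       {predict(x):.3f}")      # ~0.93 -> class 1
print(f"adversarial score: {predict(x_adv):.3f}")  # ~0.20 -> flips to class 0
```

The point of the exercise is that such attacks are cheap to mount, which is why Article 15 expects providers to detect and respond to them rather than assume inputs are benign.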

"What the AI Act is saying is that if you're building a high-risk system of any kind, you need to take into account the cybersecurity implications, some of which might have to be dealt with as part of our AI system design," says Dr. Shrishak. "Others could actually be tackled more from a holistic system point of view."

According to Dr. Shrishak, the AI Act does not create new obligations for organizations that are already taking security seriously and are compliant.

Organizations need to be aware of the risk category they fall into and the tools they use. They must have a thorough knowledge of the applications they work with and the AI tools they develop in-house. "A lot of times, leadership or the legal side of the house doesn't even know what the developers are building," Thacker says. "I think for small and medium enterprises, it's going to be pretty tough."

Thacker advises startups that create products in the high-risk category to recruit experts to manage regulatory compliance as soon as possible. Having the right people on board could prevent situations in which an organization believes regulations apply to it when they don't, or the other way around.

If a company is new to the AI field and has no experience with security, it might have the false impression that checking for things like data poisoning or adversarial examples satisfies all the security requirements. "That's probably one thing where perhaps the legal text could have done a bit better," says Dr. Shrishak. "It should have made it more clear that these are just basic requirements and that companies should think about compliance in a much broader way."

The AI Act can be a step in the right direction, but having rules for AI is one thing; properly enforcing them is another. "If a regulator cannot enforce them, then as a company, I don't really need to follow anything. It's just a piece of paper," says Dr. Shrishak.

In the EU, the situation is complex. A research paper published in 2021 by members of the Robotics and AI Law Society suggested that the enforcement mechanisms considered for the AI Act might not be sufficient. "The experience with the GDPR shows that overreliance on enforcement by national authorities leads to very different levels of protection across the EU due to different resources of authorities, but also due to different views as to when and how (often) to take actions," the paper reads.

Thacker also believes that enforcement is likely to lag far behind, for multiple reasons. First, there could be miscommunication between different governmental bodies. Second, there might not be enough people who understand both AI and legislation. Despite these challenges, proactive efforts and cross-disciplinary education could bridge these gaps, not just in Europe but in other places that aim to set rules for AI.

Striking a balance between regulating AI and promoting innovation is a delicate task. In the EU, there have been intense conversations about how far to push these rules. French President Emmanuel Macron, for instance, argued that European tech companies might be at a disadvantage compared with their competitors in the US or China.

Traditionally, the EU has regulated technology proactively, while the US has encouraged creativity, thinking that rules could be set a bit later. "I think there are arguments on both sides in terms of which one's right or wrong," says Derek Holt, CEO of Digital.ai. "We need to foster innovation, but to do it in a way that is secure and safe."

In the years ahead, governments will tend to favor one approach or the other, learn from each other, make mistakes, and correct course. "Not regulating AI is not an option," says Dr. Shrishak. He argues that doing so would harm both citizens and the tech world.

The AI Act, along with initiatives like US President Biden's executive order on artificial intelligence, is igniting a crucial debate for our generation. Regulating AI is not only about shaping a technology; it is about making sure this technology aligns with the values that underpin our society.

Link:

How the EU AI Act regulates artificial intelligence: What it means for cybersecurity - CSO Online
