The risk of unreliable standards: Cybersecurity and the Artificial Intelligence Act

Federica Casarosa, Scuola Superiore Sant’Anna, Pisa, Italy

PUBLISHED ON: 29 Feb 2024

Declaration

This work was based on the research developed in the framework of PNRR/NextGenerationEU project "Biorobotics Research and Innovation Engineering Facilities “IR0000036” – CUP J13C22000400007".

Why should we bother about cybersecurity in AI systems?

According to recent statistics, a cyberattack occurs roughly every 39 seconds, and attacks are expected to grow both in frequency and in how precisely they target individual victims. Malware attacks have blocked the activities of companies, universities, hospitals, and individuals. The EU and its Member States have started to adopt strategies and regulations aimed at mitigating and responding to such attacks: the Directive on Measures for a High Common Level of Cybersecurity across the Union (NIS Directive), adopted in 2016, was the first piece of legislation to address this topic at European level (updated in 2022 as NIS 2), and it was followed by the Cybersecurity Act in 2019, which created a framework for ICT cybersecurity certification.

If, on the one hand, awareness of security risks has increased from both the institutional and the individual perspective, on the other, cyberattacks have also multiplied, including supply chain attacks aimed at cyberespionage, ransomware, or the complete disruption of services. ENISA, the EU Agency for Cybersecurity, which is tasked with monitoring and assessing cybersecurity risks, has observed that cyberattacks increasingly target critical emerging areas such as the Internet of Things (IoT), 5G communications, Machine Learning (ML) and Artificial Intelligence (AI). Given the growing importance and pervasiveness of AI systems in the market, ENISA devoted a dedicated study to the cybersecurity risks of this technology. According to that study, the main threat categories are the following:

  • Nefarious activity/abuse
  • Eavesdropping/interception/hijacking
  • Physical attacks
  • Unintentional damage
  • Failures or malfunctions
  • Outages
  • Disasters
  • Legal

One might expect that this awareness of the risks, and the expertise collected by ENISA, would have been drawn upon by the drafters of the recently adopted AI Act to address the most evident cybersecurity threats and to adopt preventive and mitigation measures capable of enhancing the resilience of AI systems. Yet, looking at the text of the AI Act, it seems that this flow of information either did not occur or was severely limited. The AI Act relies on standardisation and on certification based on such standards, but for AI systems to be resilient to cyberattacks, both the content of the standards and the procedure for adopting them should be up to date, comprehensive, transparent, and trustworthy. Looking at the adopted text, none of these adjectives fits well: the (cybersecurity) requirements are not sufficiently detailed, experts such as ENISA are confined to an advisory role, standards are not discussed with relevant stakeholders, and no update of the standards is envisaged. If this is how standards are adopted, AI systems certified against them will never serve as trust- and transparency-enhancing instruments for users and deployers.

Certifying AI systems with the CE label

The AI Act sets up a detailed organisational structure, requiring Member States to establish a certification network that includes notifying authorities and conformity assessment bodies. Both are part of the process of issuing CE labels to high-risk AI systems that have passed the conformity assessment. The requirements are defined in Articles 8-15 AI Act and are relevant to any AI system developer and manufacturer. This certification process, however, needs more detail and more stakeholder involvement, and improvements are required if certification is to fulfil its goal as a trust- and transparency-enhancing instrument for users and deployers. These improvements matter from the cybersecurity perspective and, more generally, for the overall effectiveness of the certification mechanism.

Examining the AI Act through the lens of cybersecurity, references to the issue are scattered across the text, with only two main articles addressing it: the technical requirements to be adopted to ensure the resilience of AI systems (Article 15) and the obligations for providers of general purpose AI models with systemic risks (Article 52d). Even with these two articles, the AI Act falls short of adequately accounting for cybersecurity. Article 15 focuses only on the first step of AI design, namely model training. It fails to account for the fact that AI systems are also subject to attacks during deployment. For instance, AI systems that continue to learn after being placed on the market run the risk of model extraction attacks, in which the attacker infers the model's architecture and parameters (by observing the model's predictions and/or execution time) in order to reproduce a (near-)equivalent machine learning model. In this case, the harm is not that the output is ‘biased’; rather, the model itself is replicated and can be substituted by the attacker's copy.
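To illustrate the mechanism, the following minimal sketch (a toy example, not drawn from the AI Act or from ENISA's study; the model names, query budget, and dataset are purely hypothetical) shows how an attacker who can only observe a deployed model's predictions may train a surrogate that closely reproduces its behaviour:

```python
# Illustrative sketch of a model extraction attack on a deployed classifier.
# Everything here (victim, surrogate, query budget) is a hypothetical toy setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# The provider's deployed model: the attacker never sees its parameters,
# only the predictions returned through its public interface.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = RandomForestClassifier(random_state=0).fit(X, y)

# Attacker side: generate queries and record only the victim's answers.
rng = np.random.default_rng(1)
queries = rng.normal(size=(5000, 10))      # hypothetical query budget
stolen_labels = victim.predict(queries)    # observed outputs only

# Train a surrogate ("stolen") model on the query/response pairs.
surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)

# Agreement on fresh inputs indicates how closely the surrogate
# reproduces the behaviour of the original model.
test = rng.normal(size=(1000, 10))
agreement = (surrogate.predict(test) == victim.predict(test)).mean()
print(f"Surrogate agrees with victim on {agreement:.0%} of fresh queries")
```

The point of the sketch is that such an attack requires no access to the model's internals at all, only to its prediction interface, which is precisely the situation of an AI system in deployment and the scenario that Article 15's focus on the design phase leaves unaddressed.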

The obligations of providers of general-purpose AI models with systemic risks should be more detailed. Unfortunately, this is not the case: Article 52d only briefly mentions that an adequate level of cybersecurity should be ensured, leaving the task of defining this ‘adequate level’ to codes of practice and, where available, harmonised standards. Here, more guidance comes from the recitals, namely Recital (60r), which not only refers to the entire model lifecycle but also includes a long list of risks that may emerge, ranging from accidental model leakage and unsanctioned releases to the circumvention of safety measures, defence against cyberattacks, unauthorised access, and model theft. It is still premature to evaluate if and how the drafters of the codes of practice or of the harmonised standards will take this guidance into account.

ENISA’s in, but is it really?

The strategy adopted by the AI Act to improve the participation of stakeholders and to collect relevant expertise is the creation of an Advisory Forum to advise and provide technical expertise to the Board and the Commission (AI Act, Art. 58a). Among its permanent members we find the most relevant authorities, including ENISA along with the Fundamental Rights Agency, the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC), and the European Telecommunications Standards Institute (ETSI). Here comes the tricky part: CEN and CENELEC, which as part of the Advisory Forum are consulted on the standardisation request, are simultaneously the recipients of that request (AI Act, Art. 40(2)). Can this situation affect the leverage of CEN and CENELEC in defining how the standards should be drafted? How can the other authorities (and other stakeholders) involved in the Advisory Forum steer the choices in their respective fields of expertise (cybersecurity and fundamental rights)? Can the Commission disregard the Forum's opinion once CEN and CENELEC draft the harmonised standards? These remain open questions.

It is a pity that the amendments suggested by the European Parliament were not adopted in the final text. There, a more prominent role was allocated to ENISA, which was supposed to engage in a direct dialogue with the AI Board to address any emerging cybersecurity issues across the internal market. Admittedly, no specific guideline was provided as regards ENISA's role, the forms of communication, or the modes of collaboration; still, the provision would have been crucial, as it would have allowed the AI Board to establish a liaison with the European agency devoted to studying and analysing cybersecurity issues and challenges on a broader scale.

Is there stakeholder participation?

As mentioned above, the AI Act relies on standards, which can take the form of harmonised technical standards (AI Act, Art. 40) or, in their absence, of common (technical) specifications adopted by the Commission (AI Act, Art. 41).

Although common specifications are ‘an exceptional fall back solution’ (Recital 61), this does not mean that the process of producing them should be any less trustworthy. The final version of Art. 41 is far from satisfactory. The procedure includes only a consultation with the Advisory Forum; thereafter, the Commission is free to decide the content of the common specifications entirely autonomously. No involvement of expert groups, no participation of other stakeholders, and no communication with the AI Board or the public is envisaged at any phase of the drafting. Compared to the process for adopting European standards, this is entirely non-transparent and non-participatory. The absence of means and occasions for the effective participation of relevant stakeholders is a problem in itself, but it becomes even more questionable when it is simply taken for granted that the Commission's internal committees possess the expertise and knowledge needed to address specific technical and organisational issues related to cybersecurity threats.

Moreover, it is evident that technology develops at a sprint: what is up to date today may already be outdated tomorrow. Shouldn't the same hold for cybersecurity standards, where a missed software update can lead to exploitable vulnerabilities? Yet a timeline for revising the common specifications is missing. A deadline for reviewing the standards and common specifications in light of technical developments and emerging cybersecurity threats is crucial to avoid the risk of rapidly outdated standards.

References

Artificial Intelligence Act Amendments P9_TA(2023)0236. (2023). Artificial Intelligence Act. Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)). European Parliament and Council. https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html

Casarosa, F. (2022). Cybersecurity certification of artificial intelligence: A missed opportunity to coordinate between the Artificial Intelligence Act and the Cybersecurity Act. International Cybersecurity Law Review, 3(1), 115–130. https://doi.org/10.1365/s43439-021-00043-6

Proposal COM/2021/206. (2021). Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts. European Parliament and Council. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206

The European Union Agency for Cybersecurity (ENISA). (2020). AI cybersecurity challenges – Threat landscape for artificial intelligence [Report]. https://www.enisa.europa.eu/publications/artificial-intelligence-cybersecurity-challenges
