The first European Union Artificial Intelligence Law, known as the AI Act, has been a long time coming, but it is now a reality and, despite its flaws, we can categorically say that it is better to have it than to continue without (almost) any regulation of artificial intelligence. From its legislative proposal by the European Commission in April 2021, to its adoption by Parliament in March 2024 and its approval by the Council in May 2024, much has happened, not only in the negotiation of its content, but also at the technological level and, even more so, in public awareness of the social and environmental impact of AI.

In fact, we have come to realize, individually and collectively, that the real concerns lie far beyond improbable future apocalyptic scenarios: an acceleration of AI without new AI regulation, and without other regulations on rights, puts at stake fundamental issues of the social contract. These include labor substitution, the indiscriminate use of information without compensation, and the need to revisit the 2030 Agenda on sustainable development, given the large amount of resources consumed by AI (especially generative AI) through supercomputing and data centers, which entail high energy consumption, greenhouse gas emissions, water consumption and excessive use of scarce materials across the production, use and lifespan of IT devices and infrastructure.

Aside from these elements of context, it can be said that the AI Act is primarily a regulation of market access and products, especially for AI technologies considered higher risk (e.g. AI used in the healthcare field). However, it also defines a few general principles of conduct for the use of AI and, specifically, grants affected individuals a particular "right" to information. This contrasts with the GDPR, which primarily regulates behavior (the processing of personal data) and the rights of data subjects, not the technology that uses the data. The AI Act, by contrast, regulates the accompanying measures that must be implemented when AI technologies are considered risky. Hence the AI Act is known as risk-based legislation, where risk is understood as the combination of the probability of harm occurring and the severity of that harm. On this basis it establishes four categories. First, a limited set of AI uses considered unacceptable, which end up being banned due to their harmful impact. Second, a limited number of use cases in which AI systems may have an adverse impact on the health, safety or fundamental rights of individuals and, for this reason, are classified as high-risk. Third, a number of AI systems that present limited risks related to their lack of transparency (e.g. deepfakes and other synthetic content) and that will be subject to information and transparency requirements, particularly to foster trust. Finally, systems that pose minimal risk to people (e.g. AI-assisted spam filters) may be used without additional legal obligations, provided they comply with applicable legislation (e.g. the GDPR); organizations are simply encouraged to apply ethical principles on a voluntary basis.
In addition to these risk categories, the AI Act provides specific rules for general-purpose AI (GPAI) models and for GPAI models with "high-impact capabilities" that could pose a systemic risk and have a significant impact on the internal market. Exceptions, however, apply to free and open-source GPAI models.

Given the complexity of implementing the new European Union AI Act, the Observatory of Ethics in Artificial Intelligence of Catalonia (OEIAC), a fundamental instrument of the AI strategy of Catalonia (Catalonia.AI) promoted by the Government of the Generalitat of Catalonia through the Secretariat of Digital Policies, has developed the new PIO Model (Principles, Indicators and Observables), which replaces a previous beta version that provided recommendations based on seven fundamental ethical principles (transparency, justice, responsibility, security, privacy, autonomy and sustainability). The new assessment tool, an open resource available to everyone, builds on the previous model but has the advantage of being harmonized with the legislative requirements of the AI Act and other legislation on data and rights, as well as with current ethical standards and recommendations. In this sense, the new PIO Model is a tool that promotes compliance with current rules and regulations on AI-related risks through an exhaustive verification process; it makes it possible to identify appropriate or inappropriate actions and to raise awareness across the quadruple helix of the ethical and responsible use of AI data and systems; and it aligns with the growing international adoption of ethical principles and high-level standards in the design, implementation and use of artificial intelligence data and systems. But the new PIO Model goes well beyond the application of checklists on ethical use.

Along with verification, the new PIO Model incorporates several resources, such as catalogs and recommendations. The catalogs serve as a tool for locating and learning about the specific content the PIO Model includes. We have developed two catalogs: the first covers the legal regulations and existing ethical recommendations that have shaped the questions raised in the model; the second collects examples related to the model's more than one hundred and thirty questions, allowing searches to be filtered through real examples or specific scenarios. This structured approach ensures that ethical considerations, similar to those highlighted by frameworks such as the EU AI Act and academic guidelines on AI ethics, are integrated into the understanding and application of the content, reinforcing the importance of legal compliance and ethical innovation in the development and implementation of AI systems. In addition, several related pieces of legislation are also incorporated, since the AI Act repeatedly states that it applies alongside existing laws and regulations; that is, it does not limit them in any way. Beyond the catalogs, we also provide ethical recommendation clauses for the purchase and acquisition of AI systems. These clauses, which incorporate legal requirements as well as ethical standards and recommendations, serve as guidelines for public and private organizations to incorporate into their contracting processes, and they revolve around the seven basic principles of the PIO Model.

Finally, we also provide information on the main characteristics of a conformity assessment, which applies to high-risk AI systems and serves to demonstrate that a system meets the mandatory requirements for trustworthy AI (e.g. data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity and robustness); on the main characteristics of a fundamental rights impact assessment, which applies to high-risk AI systems and serves to determine whether there is an impact on fundamental rights and to notify the results to the national authority; and on the main characteristics of the transparency obligations, which apply to all AI systems.

If you want more information about the AI Act and its application, as well as about the new PIO Model that we have designed and developed at the Observatory of Ethics in Artificial Intelligence of Catalonia, you can visit our website [https://oeiac.cat]. Promoting the development of artificial intelligence that is ethical, complies with current law, is compatible with our social and cultural norms, and focuses on people is a priority of the Government of the Generalitat of Catalonia, which, through the Catalonia.AI strategy, promotes the consolidation of Catalonia as an international reference center for research, innovation, and the generation and attraction of talent, companies and investors related to AI.

Albert Sabater Coll
Director of the Observatory of Ethics in Artificial Intelligence of Catalonia (OEIAC)
