The European Union’s first artificial intelligence law, known as the AI Act, has been long overdue, but it is now a reality, and despite its flaws we can categorically say that it is better to have it than to continue without (almost) any regulation in the field of artificial intelligence. From its legislative proposal by the European Commission in April 2021, to its adoption by Parliament in March 2024 and its approval by the Council in May 2024, much has happened, not only in terms of content negotiation, but also at a technological level and, even more so, in terms of public awareness of the social and environmental impact of AI.
Indeed, we have realized, individually and collectively, that far beyond improbable future apocalyptic scenarios, the acceleration of AI without new AI regulation, and without other regulations on rights, puts into play fundamental issues of the social contract: the replacement of work, the indiscriminate use of information without compensation, and the revision of the 2030 Agenda on sustainable development owing to the large amount of resources consumed by AI (especially generative AI) through supercomputing and data centers, which involve high energy consumption, greenhouse gas emissions, water consumption, and excessive use of scarce materials across the production, use, and lifetime of devices and IT infrastructure.
Apart from these elements of context, it can be said that the AI Act is primarily a market-access and product regulation, especially for AI technologies considered to be of higher risk (e.g., AI technology used in the field of health). However, it also defines a few general principles of conduct for using AI and, in particular, grants affected individuals a specific “right” to information. This contrasts with the GDPR, which mainly regulates behavior (the processing of personal data) and the rights of data subjects rather than the technology that uses the data. Instead, the AI Act regulates the accompanying measures that must be implemented when AI technologies are considered risky. Hence the AI Act is known as risk-based legislation, understanding risk as the combination of the likelihood of harm occurring and the severity of that harm. From there, it establishes four categories. First, uses considered unacceptable: a limited set of AI uses that are banned outright due to their harmful impact. Second, a limited number of use cases where AI systems are classified as high risk because they may adversely affect people’s health, safety or fundamental rights. Third, AI systems that present limited risks due to their lack of transparency (e.g., deep fakes and other synthetic content), which will be subject to reporting and transparency requirements, particularly to encourage confidence. Finally, systems that pose minimal risk to people (e.g., AI-assisted spam filters) and comply with applicable legislation (e.g., the GDPR) are permitted without additional legal obligations, although providers are encouraged to apply ethical principles voluntarily. Apart from these risk categories, the AI Act provides specific rules for general-purpose AI models (GPAIs) and for GPAI models with “high impact capabilities” that could pose a systemic risk and have a significant impact on the internal market. However, exceptions apply to free and open-source GPAI models.
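To make the tiered logic concrete, here is a minimal Python sketch of the four risk categories and their simplified regulatory consequences. The tier names, the example mappings, and the reading of “combination” as a simple product are illustrative assumptions for this post, not a legal classification tool.

```python
from enum import Enum

# Hypothetical, simplified model of the AI Act's four risk tiers;
# the consequence strings and example mappings are illustrative, not legal advice.
class RiskTier(Enum):
    UNACCEPTABLE = "banned outright due to harmful impact"
    HIGH = "permitted only with mandatory accompanying measures"
    LIMITED = "permitted with reporting and transparency duties"
    MINIMAL = "permitted without additional legal obligations"

def risk(likelihood: float, severity: float) -> float:
    """Risk as a combination of likelihood and severity of harm
    (modeled here as a simple product, one possible reading)."""
    return likelihood * severity

# Illustrative use cases drawn from the examples mentioned in the text above.
EXAMPLES = {
    "AI system used in the field of health": RiskTier.HIGH,
    "deep fakes / synthetic content": RiskTier.LIMITED,
    "AI-assisted spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```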
Faced with the complexity of applying the European Union’s new AI law, the AI Act, the Observatory of Ethics in Artificial Intelligence of Catalonia (OEIAC), a fundamental instrument of the AI strategy of Catalonia (Catalonia.AI) promoted by the Government of the Generalitat of Catalonia through the Secretariat of Digital Policies, has developed the new PIO Model (Principles, Indicators and Observables), which replaces a previous beta version that provided recommendations based on seven fundamental ethical principles (transparency, justice, responsibility, security, privacy, autonomy and sustainability). The new evaluation tool, an open resource available to everyone, builds on the previous model but has the advantage of being harmonized with the legislative requirements of the AI Act and other legislation on data and rights, as well as with current ethical standards and recommendations. In this sense, the new PIO Model is a tool that favors compliance with current rules and regulations on risks associated with AI through an exhaustive verification process; it helps identify appropriate or inappropriate actions and raises awareness across the quadruple helix (government, industry, academia and civil society) of ethical and responsible uses of AI data and systems; and it aligns with the growing international adoption of ethical principles and high-level standards in the design, implementation and use of data and artificial intelligence systems. But the new PIO Model goes much further than applying checklists on ethical uses.
Along with verification, the new PIO Model incorporates various resources such as catalogs and recommendations. The catalogs serve as a tool to locate and learn about the specific content included in the PIO Model. We have developed two: the first covers the legal regulations and existing ethical recommendations that have shaped the issues raised in the model; the second collects examples related to the model’s more than one hundred and thirty questions, allowing searches to be filtered through real examples or concrete hypothetical cases. This structured approach ensures that ethical considerations, similar to those highlighted by frameworks such as the EU AI Act and the Academic Guidelines on AI Ethics, are integrated into the understanding and application of the content, reinforcing the importance of legal compliance and ethical innovation in the development and implementation of AI systems. Several pieces of legislation related to the AI Act are also incorporated, since the Act repeatedly states that it applies in addition to existing laws and regulations, that is, it does not limit them in any way. Beyond the catalogs, we also provide ethical recommendation clauses for the purchase and acquisition of AI systems. These clauses, which incorporate legal requirements as well as ethical standards and recommendations, serve as guidelines for public and private organizations to incorporate into their procurement processes, and they revolve around the seven core principles of the PIO Model.
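As an illustration of how a verification tool of this kind might organize its content, the sketch below models questions grouped under the seven PIO principles, each with attached catalog examples, and a filter like the one the example catalog provides. The data structure and helper are assumptions for illustration and do not reflect the PIO Model’s actual implementation.

```python
from dataclasses import dataclass, field

# The seven principle names come from the text; everything else is assumed.
PRINCIPLES = [
    "transparency", "justice", "responsibility", "security",
    "privacy", "autonomy", "sustainability",
]

@dataclass
class Question:
    principle: str          # one of the seven PIO principles
    text: str               # the verification question itself
    examples: list[str] = field(default_factory=list)  # linked catalog entries

def filter_by_principle(questions: list[Question], principle: str) -> list[Question]:
    """Filter the question catalog by principle, as the example catalog allows."""
    return [q for q in questions if q.principle == principle]

# Minimal usage with two invented questions.
qs = [
    Question("privacy", "Does the system minimize the personal data it processes?",
             ["GDPR data-minimization example"]),
    Question("sustainability", "Is the energy use of training and inference tracked?"),
]
print([q.text for q in filter_by_principle(qs, "privacy")])
```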
Finally, we also provide information on three further elements. First, the main characteristics that a conformity assessment should have; this relates to high-risk AI systems and serves to demonstrate that a system meets the mandatory requirements for reliable AI (e.g., data quality, documentation and traceability, transparency, human supervision, accuracy, cybersecurity and robustness). Second, the main features that a fundamental rights impact assessment should have; this also relates to high-risk AI systems and serves to demonstrate whether or not there is an impact on fundamental rights, with the results reported to the national authority. Third, the main characteristics that transparency obligations should have, bearing in mind that these relate to all AI systems.
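To show what such a verification could look like in practice, here is a minimal, hypothetical checklist sketch over the reliable-AI requirements just listed. The requirement names mirror the text, but the pass/fail evidence logic is an assumption, not the Act’s actual conformity procedure.

```python
# Requirement names taken from the text above; the checking logic is assumed.
REQUIREMENTS = [
    "data quality",
    "documentation and traceability",
    "transparency",
    "human supervision",
    "accuracy",
    "cybersecurity",
    "robustness",
]

def conformity_gaps(evidence: dict[str, bool]) -> list[str]:
    """Return the requirements that still lack documented evidence;
    in this sketch, all of them must pass for the system to conform."""
    return [r for r in REQUIREMENTS if not evidence.get(r, False)]

# Minimal usage: one requirement not yet evidenced.
evidence = {r: True for r in REQUIREMENTS}
evidence["human supervision"] = False  # e.g., no oversight procedure documented yet
print(conformity_gaps(evidence))  # ['human supervision']
```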
If you want more information about the AI Act and its application, as well as about the new PIO Model that we have designed and developed at the Observatory of Ethics in Artificial Intelligence of Catalonia, you can visit our website [https://oeiac.cat]. Promoting the development of ethical artificial intelligence that complies with current law, is compatible with our social and cultural norms, and focuses on people is a priority of the Government of the Generalitat of Catalonia which, with the Catalonia.AI strategy, promotes the consolidation of Catalonia as an international point of reference in research, innovation, and the generation and attraction of AI-related talent, companies and investors.
