In recent years, the ethical considerations of artificial intelligence (AI) have moved from being a high-level philosophical issue to a tangible application requirement aimed at minimizing its risks and maximizing its opportunities. At the same time, the proliferation of smartphones and of the AI applications we use daily, the impact of the technology on all sectors (including industry, healthcare, justice, transportation, finance and entertainment) and the increase in data-processing capacity have given rise to an intense and necessary debate on the ethical use of AI, a debate that has centered, in the first instance, on data biases and the so-called black boxes of AI.
Regarding the first, the initial focus has been on the fact that AI systems can maintain and even amplify negative biases against different groups of people, such as women, the elderly, people with disabilities, minority ethnic groups, racialized groups and other vulnerable groups. As a result, one of the most recurring questions, especially in the context of machine learning, has been how to prevent AI systems from discriminating. Considering that one of the main objectives of AI systems is to achieve greater efficiency, precision, scale and speed in making decisions and finding the best answers, the existence of these biases can not only undermine these apparently positive characteristics in various ways but also generate a significant lack of trust, especially among the people most affected.
When we talk about biases, the first problem is that using data containing implicit or explicit imbalances not only reinforces a distortion in the data but also affects any decision-making, making the bias systematic. The second problem is that an AI system can suffer from algorithmic bias introduced by the developer's own implicit or explicit biases, largely because the design of a program is based on the developer's understanding of other people's normative and non-normative values; it is therefore important to include users and affected stakeholders in the development process. The third problem concerns the outcome or selection bias frequently associated with the use of historical records, but which also stems from systematically selecting groups of people and places that become linked to certain outcomes. For example, in predicting criminal activity, an AI system may assign more police officers to a particular urban area based on historical records and on a prior decision by the police command to monitor some areas much more than others. This logic results in more criminal cases being reported in that area, which in turn leads to even more officers being assigned there by the biased results of the AI system.
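The feedback loop described above can be made concrete with a toy simulation. This is only an illustrative sketch under strong simplifying assumptions: two neighborhoods with an identical underlying crime rate, a historically unequal patrol allocation, reports that depend on how many officers are present, and a "data-driven" reallocation proportional to reports. All names and numbers are invented for the example.

```python
import random

random.seed(0)

# Assumption of the sketch: both areas have the SAME underlying crime rate,
# so any persistent difference in patrols comes from the feedback loop,
# not from the ground truth.
TRUE_CRIME_RATE = 0.1
TOTAL_OFFICERS = 100

# Historical, biased starting point.
patrols = {"area_A": 70, "area_B": 30}

for step in range(10):
    reports = {}
    for area, officers in patrols.items():
        # More officers observe (and report) more incidents, even though the
        # underlying rate is identical in both areas.
        reports[area] = sum(
            random.random() < TRUE_CRIME_RATE for _ in range(officers * 10)
        )
    total = sum(reports.values()) or 1
    # Reallocation proportional to reported crime: the biased history is
    # fed back in as if it were neutral evidence.
    patrols = {
        area: round(TOTAL_OFFICERS * r / total) for area, r in reports.items()
    }

print(patrols)  # the initial 70/30 imbalance tends to persist across rounds
```

Because the reallocation is proportional to reports, and reports are proportional to patrols, the historical imbalance is self-confirming in expectation: the system "learns" the bias it was seeded with, which is exactly what makes the bias systematic rather than incidental.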
Therefore, if we are not careful, we may find that some AI systems amplify and entrench inequalities between social groups. To prevent this, a better social understanding of the data used in AI systems is required. For example, some AI practitioners may not be aware that data about X (such as zip codes, health records or road locations) is also data about Y (such as sex or gender, ethnic group or socioeconomic status), and may treat data about X as neutral information that applies to all people equally rather than understanding that zip codes very often encode discrimination, inequality and social segregation. Indirect discrimination, where variables that we do not consider sensitive act as proxies for sensitive attributes such as sex or gender or ethnic group, thus poses a great challenge. It is therefore necessary first to take these basic issues of social structure into consideration, identifying as a priority the correlations between vulnerable groups, contexts and life opportunities captured in the data.
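One basic safeguard against such proxies is to cross-tabulate candidate features against protected attributes before training. The following sketch uses synthetic data in which residential segregation is assumed (for illustration only), so that a "neutral" zip code ends up predicting group membership almost perfectly; the zip codes and group labels are invented.

```python
import random
from collections import Counter

random.seed(1)

rows = []
for _ in range(1000):
    group = random.choice(["majority", "minority"])
    # Segregation assumption of the sketch: each group is concentrated
    # in a different zip code.
    if group == "majority":
        zip_code = random.choices(["10001", "10002"], weights=[0.9, 0.1])[0]
    else:
        zip_code = random.choices(["10001", "10002"], weights=[0.1, 0.9])[0]
    rows.append((zip_code, group))

# How well does zip code alone "predict" the protected attribute?
by_zip = {}
for zip_code, group in rows:
    by_zip.setdefault(zip_code, Counter())[group] += 1

for zip_code, counts in sorted(by_zip.items()):
    share = counts.most_common(1)[0][1] / sum(counts.values())
    print(zip_code, dict(counts), f"dominant-group share: {share:.0%}")
```

A simple audit like this, run over every feature before model building, makes the correlations between supposedly neutral variables and vulnerable groups visible, which is precisely the prioritization the text calls for.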
Regarding the second point, many current AI systems are based on a connectionist approach, whether in computer vision, natural language processing or operations research, among other fields. This approach, which is very successful at learning from statistically correlated data, has a problem: it relies more on our intuition than on our understanding when it comes to explaining, even in general terms, why these systems work. It is for this reason that we perceive them as black-box systems, which poses a problem of transparency, explainability and, ultimately, opacity. As we have already pointed out, this black-box character is very different from what we had when formal logical frameworks were the norm in symbolic AI, that is, when the learned rules were usually presented in a human-readable format. With connectionist AI, it is often difficult to understand the process by which a system reaches a given solution or prediction. For this reason, explainability is proposed as a fundamental ethical criterion.
The normalization of this situation is problematic, especially in democratic systems where transparency is a fundamental principle; the implications of this inability to understand, or rather to explain, the decision-making process are therefore profound at both the individual and collective level. This opacity is an affront to a person's dignity and autonomy when decisions about important aspects of their life are made by AI systems whose reasoning we cannot explain. In this sense, well-documented and accessible algorithms can provide information on how decisions or solutions are reached, increasing the transparency and accountability of the algorithmic process.
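What "explainable by construction" can look like in practice is sketched below: a linear scoring model that reports the signed contribution of every feature alongside each decision, so the reason for an outcome can always be stated. The feature names and weights are purely illustrative, not taken from any real decision system.

```python
# Illustrative weights: positive features push toward approval,
# negative ones push against it. All values are invented.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
BIAS = -0.2

def decide(applicant: dict) -> tuple[bool, dict]:
    """Return the decision and a per-feature breakdown of the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return score > 0, contributions

approved, why = decide({"income": 1.2, "debt": 0.5, "years_employed": 2.0})
print(approved)  # True: 0.5*1.2 - 0.8*0.5 + 0.3*2.0 - 0.2 = 0.6 > 0
print(why)       # each feature's signed contribution to the score
```

A connectionist model offers no such per-decision breakdown by default, which is exactly the gap between black-box systems and the accountable, human-readable decision records that transparency requires.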
In general, the growing importance of AI systems in numerous aspects of our daily lives calls for greater inclusion of ethical and social considerations. To do this, we must be clear about which aspects to consider so that we can apply them systematically and coherently, exercising a continuous process of accountability over the design and use of AI. Unfortunately, there is very little guidance on how to integrate ethical and social impact considerations into innovation ecosystems. We therefore need to move quickly to foster an AI that is truly trustworthy, where justice and fairness mean that an AI system is deployed fairly and that people are treated fairly and impartially. This includes preventing, monitoring and mitigating unwanted biases and discrimination, as well as providing mechanisms for appealing algorithmic decisions.
Of course, AI systems must also be robust and safe throughout their life cycle so that they do not present safety risks, since the consequences of their results, or of accidents arising from their misuse, can affect individuals, social groups and society in general. Furthermore, when we refer to the security of AI systems, we are not only referring to ensuring that the technology is safe from a technical, operational perspective; we are also referring to ensuring that AI systems do not infringe on human rights, assessing the public security risks that arise from their implementation.
Along with these considerations of transparency, justice and security comes that of responsibility. This refers mainly to moral responsibility, which includes both the responsible management and use of user data and the responsibility of the actors involved in developing and implementing an AI system. Organizations therefore need to be aware of the issues related to using poor data and be held accountable if harmful consequences result. Responsibility has also emerged as a key issue for advancing the ethical use of AI, given the fear that organizations may obfuscate blame or hide from their responsibility in autonomous or semi-autonomous systems.
Protecting user privacy has also become an indispensable condition for protecting individual autonomy, since privacy is seen as a value to defend in relation to data protection and data processing, as well as data personalization, transparency and supervision. A related principle is that of autonomy, which in AI refers to keeping the user central to the functionality of the system, eliminating their dependence on automated decision-making models. Finally, it is important to highlight the principle of sustainability, understood as the ability to generate positive, long-term effects from three perspectives: social, economic and environmental.
With the general objective of demonstrating that it is possible to put these seven ethical principles into practice, the Observatory of Ethics in Artificial Intelligence of Catalonia (OEIAC) has presented a web application and a report on the PIO model (Principles, Indicators, Observables): a proposal for organizational self-assessment of the ethical use of data and artificial intelligence systems, designed and developed by the OEIAC itself. The specific objectives of the PIO model are:
- To raise awareness among the different agents of the quadruple helix (the four key pillars of any innovation process: universities and research centers, public administration, business, and citizens) who use data and artificial intelligence systems, about the importance of adopting fundamental ethical principles to minimize known and unknown risks and maximize opportunities.
- To identify appropriate or inappropriate actions through a self-assessment proposal based on principles, indicators and observables, in order to evaluate, recommend and advance the ethical use of data and artificial intelligence systems.
These objectives are based on an indisputable fact: the proliferation of products and services that use AI systems. Given this, public and private organizations, as well as AI users and citizens in general, must ensure that AI systems are not only safe from a technical point of view but also sustainable from an environmental and social point of view. To foster public trust in AI technologies, growing support is therefore needed for strategies of systematic self-assessment against ethical principles, in order to realize and value a key and transformative element of the welfare economy: the adoption of the highest ethical standards by organizations when designing, developing and implementing technological solutions in general and AI systems in particular.
In this sense, the PIO model is a further step in this direction, as it translates fundamental ethical principles of AI from theory into practice. It is built around simplicity and a key, effective question that anyone who develops, manages or directs a project involving AI data and systems can ask: Did we do it? Starting from a perspective of organizational social responsibility, the model can be used in any phase of the life cycle of AI data and systems. This means it is applicable in design and modeling, including planning, data collection and model building; in development and validation, including model training and testing; and in deployment, monitoring and improvement, including problem solving. The PIO model self-assessment form, as well as the full model report, are available here.