Madrid, May 17th, 2023
We, the undersigned organizations, are writing to draw your attention to a number of serious deficiencies in the Artificial Intelligence Act (AI Act) (COM/2021/206), currently being negotiated under the Swedish Presidency of the Council of the European Union and soon under the Spanish Presidency.
This letter builds on the position of 123 civil society organizations that in 2021 called on the European Union to foreground a fundamental rights-based approach to the AI Act. The AI Act is a fundamental piece of legislation that will have a strong impact on the EU population and, in all likelihood, beyond our borders as well.
Some of our concerns relate to dangerous practices that lead to mass surveillance of the population. By fostering mass surveillance and amplifying some of the deepest social inequalities and power imbalances, AI systems are seriously jeopardizing our fundamental rights and democratic processes and values.
In particular, we draw your attention to essential protections of fundamental rights that can be further enhanced during the so-called trilogue negotiations. Given the role that the Spanish government will play as of 1 July 2023 by assuming the rotating Presidency of the Council, and the interest it has already shown in leading the Artificial Intelligence race in Europe, we believe that the Spanish Presidency should take an assertive role and ensure that this legislation is future-proof and respectful of fundamental rights.
We call on the incoming Spanish Presidency and Member States to ensure that the following features are reflected in the final text of the AI Act:
1- Expand the list of exhaustive prohibitions of those AI systems that pose an «unacceptable risk» to fundamental rights. It is vital that the Article 5 list of «prohibited AI practices» be expanded to cover all systems that are shown to pose an unacceptable risk of violating fundamental rights and freedoms of individuals, also affecting more generally the founding principles of democracy. At the very least, the undersigned organizations believe that the following practices should be banned altogether:
- Remote biometric identification (RBI) in publicly accessible spaces, with the ban applying to all actors, not just law enforcement, and covering both «real-time» and «ex post» uses, with no exceptions.
- The use of AI systems by administrative, law enforcement and judicial authorities to make predictions, build profiles, or conduct risk assessments for the purpose of predicting crime.
- The use of AI-based individual risk assessment and profiling systems in the context of migration; predictive analytics for interdiction, restriction, and migration prevention purposes; and AI polygraphs for migration prevention.
- The use of emotion recognition systems that aim to infer people’s emotions and mental states.
- The use of biometric categorization systems to track, categorize and judge people in publicly accessible spaces, or to categorize people based on protected characteristics.
- The use of AI systems that may manipulate people or exploit contexts or situations of vulnerability, in a way that causes or is likely to cause them harm.
2- Eliminate any discretion in the process of classifying high-risk systems (Article 6). Both the general approach of the Council and the negotiations in the European Parliament tend to add layers to the risk classification process by imposing additional conditions before the systems listed in Annex III are considered high risk. This would seriously complicate the AI Act, give providers excessive discretion to decide whether a system is high risk, compromise legal certainty, and lead to significant fragmentation in the application of the AI Act. Therefore, we request that:
- All AI systems listed in Annex III of the AI Act be considered high risk, without further qualification.
3- Significant accountability and public transparency obligations on public uses of AI systems and on all «deployers» of high-risk AI. To ensure the highest level of protection of fundamental rights, those who deploy high-risk AI systems («users» as they are referred to in the original European Commission proposal and in the general approach of the EU Council, or «deployers» as might be agreed in the European Parliament) should provide public information on the use of such systems. This information is crucial for public accountability, as it allows public interest organizations, researchers and affected individuals to understand the context in which high-risk systems are used. The AI Act should include the following obligations for deployers:
- Obligation of deployers to register all high-risk AI systems in the Article 60 database.
- The obligation of public authorities, or those acting on their behalf, to register all uses of AI systems in the database of Article 60, regardless of the level of risk.
- The obligation of deployers of high-risk AI systems to conduct and publish in the Article 60 database a fundamental rights impact assessment (FRIA) prior to deploying any high-risk AI system.
- The obligation of deployers of high-risk AI systems to involve civil society and other affected parties in the fundamental rights impact assessment that they must carry out.
4- Rights and redress mechanisms to empower people affected by AI systems. While we have seen some positive steps, such as the recognition in the general approach of the Council of the possibility of filing complaints with public authorities in case of non-compliance with the AI Act, we believe it is also necessary to recognize other basic rights that enable people affected by AI systems to understand, challenge and obtain redress. Therefore, we understand that the final text of the AI Act should include:
- The right to receive, upon request, a clear and intelligible explanation of decisions made with the aid of systems within the scope of the AI Act and of how such systems operate, provided in the language requested, accessible to children and to persons with disabilities, together with the right to object to such decisions.
- The right of individuals to lodge a complaint with national authorities or to take legal action in court when an AI system or practice affects them.
- The right to an effective judicial remedy against the national supervisory authority or the deployer to enforce rights recognized under the AI Act that have been violated.
- The right to access to collective redress mechanisms.
- The right of public interest organizations to file a complaint with a control authority for non-compliance with the AI Act or for AI systems that violate fundamental rights or the public interest, and the right of individuals to be represented by such organizations in the protection of their rights.
5- Technical standards should not address issues related to fundamental rights, and their elaboration process should include more civil society voices. Civil society is concerned because a large part of the implementation of the AI Act, whose risk-based approach leaves most AI systems almost unregulated (with the exception of high-risk systems, additional transparency obligations for some AI systems, and the recent debate on generative AI), will depend on the development of technical standards and their implementation by manufacturers. Furthermore, given their complexity, standardisation processes are heavily dominated by industry. The undersigned organizations note that it is not clear how these standards could impact the fundamental rights of individuals (e.g., regarding the absence of bias in AI systems). Therefore, we believe it is necessary:
- That technical standards should not be used to evaluate the possible impact on the fundamental rights of individuals.
- To provide the necessary means and resources to ensure greater participation of civil society in the standardisation processes related to the AI Act, which are already taking place.
We, the undersigned organizations request:
(1) The organization of a high-level meeting with representatives of civil society before the beginning of the Spanish Presidency of the Council of the European Union, to ensure that fundamental rights are adequately strengthened and protected in the trilogues related to the AI Act.
(2) Assurances from the Spanish Government on how it expects the highest levels of fundamental rights protection to be achieved in the final text of the AI Act, as we have outlined in this letter.
Letter promoted by:
Lafede.cat, Algorace, Fundación Eticas, CIVIO, Observatorio Trabajo, Algoritmos y Sociedad; Algorights, Institut de Drets Humans de Catalunya, CECU.
Signed by:
- #Lovetopia
- AAV – Associació Arxivers i Gestors de Documents Valencians
- Access Info Europe
- ACICOM (Associació Ciutadania i Comunicació)
- ACREDITRA
- Acurema, Asociación de Consumidores y Usuarios de Madrid
- AlgoRace
- Algorights
- Amnistía Internacional
- Archiveros Españoles en la Función Pública (AEFP)
- Asociación Científica ICONO 14
- Asociación de Consumidores de Gran Canaria -ACOGRAN-CECU
- ASOCIACIÓN DE USUARIOS DE LA COMUNICACIÓN (AUC)
- Associacions Federades de Famílies d’Alumnes de Catalunya (aFFaC)
- Calala Fondo de Mujeres
- Col·legi de Professionals de la Ciència Política i de la Sociologia de Catalunya
- CooperAcció
- DigitalFems
- Ecos do Sur
- Edualter
- EKA/ACUV ASOCIACION DE PERSONAS CONSUMIDORAS Y USUARIAS VASCA
- Elektronisk Forpost Norge
- enreda.coop
- Eticas
- European Center for Not-for-profit Law (ECNL)
- European Digital Rights (EDRi)
- European Network Against Racism (ENAR)
- Fair Trials
- Federación de Consumidores y Usuarios CECU
- Federación de Sindicatos de Periodistas (FeSP)
- Fundación Ciudadana Civio
- Fundación Hay Derecho (Hay Derecho Foundation)
- FUNDACIÓN PLATONIQ
- Grup Eirene
- Hay Derecho
- Institut de Drets Humans de Catalunya
- Irídia – Centre per la Defensa dels Drets Humans
- Irish Council for Civil Liberties (ICCL)
- Komons
- La Coordinadora de Organizaciones para el Desarrollo
- Lafede.cat – Organitzacions per la Justícia Global
- Mujeres Supervivientes de violencias de género.
- Novact
- Observatori del Treball, Algoritme i Societat (TAS)
- Observatori DESC
- Oxfam Intermon
- Panoptykon Foundation
- PLATAFORMA EN DEFENSA DE LA LIBERTAD DE INFORMACION (PDLI)
- Political Watch
- Reds – Red de solidaridad para la transformación Social (Barcelona – Catalunya)
- Rights International Spain
- SED Catalunya – Solidaritat Educació Desenvolupament
- SETEM Catalunya
- SOS Racisme Catalunya
- Statewatch
- SUDS – Associació Internacional de Solidaritat i Cooperació
- THE SCHOOL OF WE
- Unión de Consumidores de Las Palmas -UCONPA-CECU
- Unión Profesional
- Universidad Nacional de Educación a Distancia (UNED)
- Xnet