Council risks failing human rights in the AI Act

Algorights signs this civil society letter to the Council of the European Union.

Dear Representatives of the Council of the European Union,

We write to you in advance of the E.U. Artificial Intelligence Act (AI Act) trilogue
negotiation on 6 December. In this crucial moment for the AI Act, we urge you to
effectively regulate the use of AI systems by law enforcement, migration control and
national security authorities throughout Europe.

Without meaningful regulation of the use of AI in law enforcement, the AI Act will not fulfil its promise to put people's safety first, and it will fail human rights at large.

Increasingly, in Europe and around the world, AI systems are developed and deployed
for harmful and discriminatory forms of state surveillance. From biometric identification in public spaces to emotion recognition and predictive systems in criminal justice and resource allocation, AI in law enforcement disproportionately targets already marginalised communities, undermines legal and procedural rights, and enables mass surveillance.

The Council risks severely under-regulating — and in some cases even deregulating —
uses of AI in the areas of law enforcement, migration and national security if it does not change its position. Insufficient, partial prohibitions on some of the most unacceptable and dangerous uses of AI, including biometric identification for mass surveillance purposes, emotion recognition and predictive policing, are likely to legitimise rather than limit some of the most dystopian and rights-violating surveillance practices.

Further, a blanket exemption in the regulation for AI in national security will
undermine the legislation, allowing authorities an unjustifiably broad loophole to claim that AI in law enforcement, border management, or national security should not be subject to important protections in the Act.

Lastly, exemptions to public transparency requirements for police and migration
control would leave the most “high-risk” AI systems in these areas shrouded in
secrecy, making oversight of law enforcement and migration authorities practically
impossible and rendering regulation efforts in these areas ultimately meaningless. The
credibility of the E.U.'s AI Act is hanging by a thread.

Civil society, experts and institutions, including the European Data Protection Board,
the European Data Protection Supervisor, and the UN High Commissioner for Human
Rights, have clearly outlined that the use of AI in these areas warrants a greater
degree of regulation, oversight, and protection, not less. The use of AI in these
areas exacerbates an already profound power imbalance between state authorities and
people, and puts fundamental rights at further risk.

To set a high global standard for human rights-centred AI regulation, ensure
public trust in AI and safeguard human rights in the face of this fast-developing
technology, E.U. member states must change course. The Council must implement a
full ban on remote biometric identification in publicly accessible spaces, as well
as on other unacceptable uses of AI in law enforcement. The AI Act must avoid
arbitrary exemptions for national security, migration control and law enforcement,
and it must ensure full accountability to the public for the uses of the most
“high-risk” AI.

Appropriate checks and balances on state and police powers are essential to the
functioning of a rights-based society. Artificial intelligence, particularly in law
enforcement, migration and security, poses unique threats to fundamental rights,
safety and society. E.U. legislation must be up to the challenge.

We call on you, as representatives of the E.U. member states, to make the AI Act
an instrument of protection and not an enabler of mass and discriminatory
surveillance.

Sincerely,

  • Access Now
  • European Digital Rights (EDRi)
  • Border Violence Monitoring Network
  • Algorights
  • AlgorithmWatch
  • Amnesty International
  • Bits of Freedom
  • European Center for Not-for-Profit Law (ECNL)
  • European Network Against Racism (ENAR)
  • Fair Trials
  • Hermes Center for Transparency and Digital Human Rights
  • Irish Council for Civil Liberties (ICCL)
  • Lafede.cat – Organitzacions per la Justícia Global
  • Platform for International Cooperation on Undocumented Migrants (PICUM)
  • Politiscope
  • Privacy International