
The Application of the EU Artificial Intelligence Act to Facial Recognition Technologies used by Law Enforcement Agencies

Posted in Artificial Intelligence on November 19, 2024

As artificial intelligence (AI) continues to make inroads into various facets of our lives, some favourably, others less so, it is not surprising that governments are trying to regulate the use of AI. As part of a project I am working on regarding facial recognition technology (FRT), I have focused on how government regulation will impact the use of FRT by law enforcement agencies and the experts who rely upon this evidence for identification or lead purposes. Chief among these regulations is the EU Artificial Intelligence Act (AI Act). This article provides an overview of how the AI Act is likely to impact the use of FRT by law enforcement agencies and experts.

The AI Act is the first comprehensive attempt by a major jurisdiction to regulate the use of AI across varied contexts. The Act prohibits the deployment of AI systems that pose an “unacceptable risk” and regulates AI systems categorized as “high risk” or “limited risk”. It applies to both providers and deployers of regulated AI systems. The European Parliament adopted the Act’s text on 13 March 2024, the Council approved it in May 2024, and it entered into force on 1 August 2024, making it EU law. The full application of its regulatory framework is staggered, however: most provisions apply two years after entry into force, in 2026, with some applying earlier and others later. The AI Act will likely have significant influence on AI regulation within and beyond the European Union.

Article 3 defines an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”, and defines “risk” as “the combination of the probability of an occurrence of harm and the severity of that harm”. High-risk AI systems include those listed in Annex III (Art. 6(2)), among them remote biometric identification (RBI) systems, excluding systems used solely to verify that a specific person is who they claim to be. An RBI system is defined as “an AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database”.

The Act prohibits the untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases (Art. 5(1)(e)). While the Act also generally prohibits the use of real-time RBI systems in publicly accessible spaces, it carves out a series of safeguards and exceptions for their use in such spaces for law enforcement purposes, subject to prior authorization and limited to specifically listed crimes. Notably, Art. 5(1)(h) prohibits the use of real-time RBI systems in publicly accessible spaces for the purpose of law enforcement unless their use is strictly necessary for (i) the targeted search for specific victims of abduction, trafficking in human beings, or sexual exploitation of human beings, as well as the search for missing persons; (ii) the prevention of a specific, substantial, and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack; or (iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purposes of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II (e.g., terrorism, trafficking, sexual exploitation, ICC crimes) that are punishable by a custodial sentence of at least four years.

Art. 5(2) further restricts these exceptions to confirming the identity of the specifically targeted individual, taking into account (a) the nature of the situation giving rise to the possible use of the system, in particular the seriousness, probability, and scale of the harm that would be caused if the system were not used; and (b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability, and scale of those consequences. Further, the use of real-time RBI systems must comply with “necessary and proportionate safeguards and conditions” (e.g., temporal, geographic, and personal limitations) mandated by applicable national legislation. A fundamental rights impact assessment is required (Art. 5(2), Art. 27), as is prior judicial (or independent administrative) authorization (Art. 5(3)). Finally, “post-remote” RBI is permitted only in the framework of a targeted search of a person suspected or convicted of having committed a criminal offence, upon judicial or administrative authorization (Art. 26(10)).
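Because these conditions are cumulative rather than alternative, it may help to see them laid out schematically. The short Python sketch below models the checks described above as a simple checklist. It is purely illustrative: the class, field names, and function are hypothetical shorthand of my own, not terminology from the Act, and each boolean necessarily flattens a condition that in practice involves considerable legal nuance.

```python
from dataclasses import dataclass

# Illustrative only: a simplified checklist of the cumulative conditions described
# above for law-enforcement use of real-time RBI. The names below are my own
# shorthand, not terminology taken from the AI Act.

PERMITTED_GROUNDS = {
    "search_for_victim_or_missing_person",     # Art. 5(1)(h)(i)
    "imminent_threat_or_terrorist_attack",     # Art. 5(1)(h)(ii)
    "suspect_of_listed_annex_ii_offence",      # Art. 5(1)(h)(iii)
}

@dataclass
class RealTimeRbiDeployment:
    ground: str                    # the claimed exception, one of PERMITTED_GROUNDS
    strictly_necessary: bool       # use is strictly necessary for that ground
    prior_authorisation: bool      # judicial or independent administrative authorisation (Art. 5(3))
    fria_completed: bool           # fundamental rights impact assessment (Art. 27)
    national_safeguards_met: bool  # temporal, geographic, and personal limits under national law

def appears_permissible(d: RealTimeRbiDeployment) -> bool:
    """True only when every condition in the checklist is satisfied; the
    conditions are cumulative, so any single failure rules the use out."""
    return (
        d.ground in PERMITTED_GROUNDS
        and d.strictly_necessary
        and d.prior_authorisation
        and d.fria_completed
        and d.national_safeguards_met
    )

# Example: an authorised search for a missing person with all safeguards in place.
print(appears_permissible(RealTimeRbiDeployment(
    ground="search_for_victim_or_missing_person",
    strictly_necessary=True,
    prior_authorisation=True,
    fria_completed=True,
    national_safeguards_met=True,
)))  # True
```

The only point the sketch makes is that failing any single element, whether the claimed ground, strict necessity, prior authorization, the impact assessment, or the national-law safeguards, takes the deployment outside the exception.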

There can be little debate that facial recognition technologies used to identify individuals fit within the definition of RBI systems. It is therefore reasonable to conclude that the AI Act will treat facial recognition applications used for remote identification (as opposed to mere one-to-one verification) as high-risk AI systems. The significance of this classification is that it triggers many of the Act’s subsequent provisions. Notably, Art. 9(1) requires that a risk management system be established, implemented, documented, and maintained for high-risk AI systems. The risk management system is a continuous, iterative process comprising the following steps, as summarized:

(a) Identification and analysis of the known and reasonably foreseeable risks that the system can pose to health, safety, or fundamental rights when used for its intended purpose.
(b) Estimation and evaluation of the risks that may emerge through use or reasonably foreseeable misuse of the system.
(c) Evaluation of other possibly arising risks based on analysis of post-market data.
(d) Adoption of appropriate and targeted risk management measures.

The risk management measures referred to in (d) must be such that the residual risks of the high-risk AI system are judged to be “acceptable”, with due consideration given to the technical knowledge, experience, education, and training to be expected of the deployer and the presumable context in which the system is intended to be used (Art. 9(5)). Testing is required to identify the most appropriate risk management measures and to ensure compliance with these requirements (Art. 9(6)).
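To make the iterative character of steps (a) through (d) more concrete, the following Python sketch models a minimal risk register of the sort a provider might maintain. It is a hypothetical illustration of the process summarized above, not a structure prescribed by the Act, and all names in it are my own.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the iterative risk management steps summarised above
# (Art. 9). The structure and names are illustrative, not drawn from the Act.

@dataclass
class Risk:
    description: str
    source: str                    # "intended_use", "foreseeable_misuse", or "post_market_data"
    measures: List[str] = field(default_factory=list)     # step (d): targeted risk management measures
    residual_risk_acceptable: bool = False                 # judged after the measures are applied

@dataclass
class RiskManagementSystem:
    system_name: str
    risks: List[Risk] = field(default_factory=list)

    def record(self, risk: Risk) -> None:
        # Steps (a)-(c): risks identified from intended use, foreseeable misuse,
        # and post-market data are all logged in the same continuous process.
        self.risks.append(risk)

    def residual_risks_acceptable(self) -> bool:
        # Step (d): every identified risk needs at least one documented measure,
        # and the residual risk left after those measures must be judged acceptable.
        return all(r.measures and r.residual_risk_acceptable for r in self.risks)

# Example: an RBI provider logging a known false-match risk.
rms = RiskManagementSystem(system_name="example-frt-system")
rms.record(Risk(
    description="Higher false-match rates for some demographic groups",
    source="intended_use",
    measures=["representative test data", "deployer training on candidate-list review"],
    residual_risk_acceptable=True,
))
print(rms.residual_risks_acceptable())  # True
```

The sketch is only meant to convey that the Act frames risk management as a living record, revisited as post-market data arrives, rather than a one-off assessment.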

Article 11 requires that the technical documentation for a high-risk AI system be drawn up before the system is placed on the market or put into service and be kept up to date. Article 14(1) requires that high-risk AI systems be designed and developed such that they can be effectively overseen by natural persons while the system is in use. The purpose of the human oversight is to prevent or minimize the risks to health, safety, or fundamental rights that may emerge when the system is used for its intended purpose or misused in a reasonably foreseeable manner (Art. 14(2)). Art. 15 requires that high-risk AI systems achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle.

We are still in the early stages of understanding the potential reach of AI into law enforcement practices and its potential use as evidence. Given the significant impact that FRT can have on targeting and identifying people, it is desirable that there be some regulation of it. There was clearly sufficient concern about FRT within the EU that it features prominently in the AI Act. It will take time and experience to see whether the AI Act has struck the correct balance. It is also probable that other jurisdictions will study and adopt this method of regulation for their own purposes. The EU experience should therefore attract a rather broad audience.