Facial Recognition and Human Rights: A Comment

Abstract

States have increasingly taken on the role of buyers of new technological solutions from large tech developers (Big Tech). This raises concern in several ways. Put simply, technology risks amplifying existing inequalities unless it is used in a diligent manner. An example of this is the use of facial recognition technology by law enforcement. The technology is based on algorithms that may encode biases against certain demographic groups, particularly minorities, as training sets may represent these groups poorly. Technology is not neutral, and the use of technological tools requires that states ensure in-depth knowledge not only of its possibilities but also of its limitations – and most importantly, of its effects on human rights. Alongside this, states need to ensure effective democratic control of, and public scrutiny over, their cooperation with Big Tech. This article gives a brief overview of the human rights concerns related to facial recognition technology, focusing on the inadequately regulated cooperation between states and Big Tech.

An overview of the technology and its use

The use of facial recognition technology has been debated extensively in many parts of the world over the last couple of years. Facial recognition is based on so-called Artificial Intelligence (AI), a term that remains subject to many different definitions; the technology captures individuals' unique facial features (biometric data) in order to identify them. It has many uses, ranging from verifying an image against a claimed individual (so-called "one-to-one" comparison) to recognising facial images against large databases ("one-to-many" comparison). Facial recognition can scan material on the internet and observe and monitor individuals in public spaces. The technology can be used without a person reviewing the material (fully automated) or with "human control" during or after the automated process.

One-to-one comparison is used for verification (also called authentication). In these cases, the technology compares two facial images; if the likelihood that the two images show the same person is above a certain threshold, the identity is verified. One-to-many comparisons are used for identification, in which an individual's facial image is compared to many other images in a database to find a possible match. Sometimes images are checked against databases where it is known that the reference person is in the database (closed-set identification), and sometimes where this is not known (open-set identification). In addition, categorisation entails matching general characteristics such as sex, age and ethnic origin without necessarily identifying the individual.
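To make the distinction between these comparison modes concrete, the following minimal Python sketch illustrates the threshold logic described above. It is an illustration only: the feature vectors are assumed to come from some face-embedding model (not shown), and the similarity measure, threshold value and function names are all hypothetical rather than drawn from any particular system.

```python
# Sketch of the two comparison modes. Face images are assumed to have
# already been converted into fixed-length feature vectors by a
# (hypothetical) embedding model; real systems use trained neural networks.
import numpy as np

THRESHOLD = 0.8  # illustrative value only; real systems calibrate this empirically


def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify(probe: np.ndarray, reference: np.ndarray) -> bool:
    """One-to-one comparison: does the probe image match the claimed identity?"""
    return similarity(probe, reference) >= THRESHOLD


def identify(probe: np.ndarray, database: dict[str, np.ndarray]) -> str | None:
    """One-to-many, open-set identification.

    Returns the best match above the threshold, or None if nothing clears
    it -- in the open-set case the person may not be in the database at all.
    """
    best_id, best_score = None, THRESHOLD
    for person_id, reference in database.items():
        score = similarity(probe, reference)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```

The open-set variant returns no match when nothing in the database clears the threshold. This is also where the bias concern noted in the abstract becomes tangible: if the underlying model represents certain demographic groups poorly, match scores for those groups are less reliable, and a fixed threshold can produce disproportionate false matches.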

The use of the technology has been met with criticism from a wide range of NGOs and civil rights organisations, as well as the UN, because of its serious implications for the protection of human rights.

One of these concerns relates to the largely unregulated cooperation between large surveillance companies (Big Tech) and states. Large surveillance technology companies have a global reach, and their commercial interests may collide with human rights and potentially aid violations of human rights. This raises issues of principle related to collaborations between states and private companies, which are explored further below.

The use of facial recognition technology also raises an inherent privacy concern. Whilst this is not the focus of this article, a short overview is provided in the following section, as it offers a perspective that highlights the more structural issues related to developing and using the technology.

Inherent privacy concern in the use of facial recognition technology 

Facial recognition technology captures the unique facial features of individuals. This type of data is categorised as biometric data, and its use (for verification, identification or categorisation) is inherent to the technology. The use of biometric data for law enforcement is regulated in different human rights frameworks and affects the right to privacy protected inter alia in Article 17 of the International Covenant on Civil and Political Rights. The UN High Commissioner for Human Rights has described the use of facial recognition technology as a paradigm shift compared to regular CCTV, as it dramatically increases the capacity to identify individuals. If the technology is used for mass surveillance of larger groups, this entails serious implications for the right to privacy. Furthermore, use for mass surveillance can create a so-called "chilling effect" on other rights, most notably freedom of assembly, protected under Article 21 of the Covenant. The UN Special Rapporteur on the Rights to Freedom of Peaceful Assembly and of Association has recommended that the use of surveillance techniques for arbitrary surveillance of individuals exercising their freedom of assembly be prohibited. The Special Rapporteur notes that the chilling effect may be aggravated if the demonstration concerns views that differ from the majority view.

These serious human rights issues form the background against which cooperation between states and Big Tech needs to be understood and assessed.

Lack of clear rules in cooperation between Big Tech and states

Turning to the more structural challenges with surveillance technologies, of which facial recognition forms a part, state cooperation with Big Tech can give rise to concern from a democratic and human rights perspective (see e.g. Murray 2020; on public opinion of police use of the technology, see Bradford et al. 2020). This is the case both in relation to state purchase of surveillance technologies and state regulation of the design, development and export of the technology.

The UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression has stated in relation to purchase that: “Governments and the private sector are close collaborators in the market for digital surveillance tools. Governments have requirements that their own departments and agencies may be unable to satisfy. Private companies have the incentives, the expertise and the resources to meet those needs. They meet at global and regional trade shows designed, like dating services, to bring them together. From there, they determine whether they are a match.”

Moving on to the design and development side, clear human rights obligations are also lacking, as are considerations of whether states can (and should) control research that contributes to surveillance technology. With regard to exports, the UN Special Rapporteur states that export controls are an important element of the effort to reduce the risks caused by the surveillance industry and the repressive use of its technology, but that they are only vaguely regulated.

Human rights concerns arise because of the inadequate regulation of the cooperation between states and Big Tech. There are no international rules that effectively control the purchase, design, development and export of surveillance technology for police purposes. While public procurement rules may refer to human rights compliance, the criteria for such compliance are vague, and no international rules ensure thorough human rights impact assessments by the state in public procurements of surveillance technology. International human rights rules do not, for their part, bind private actors such as companies developing or selling surveillance technology. Such companies are encouraged to observe the UN Guiding Principles on Business and Human Rights, but ultimately only states can be held responsible within the international human rights framework. Pursuant to the Guiding Principles, states are urged to exercise adequate oversight to meet their international human rights obligations when they contract with, or legislate for, companies to provide services that may have an impact on the enjoyment of human rights (for more on state responsibility and the responsibility of private actors, see e.g. Lagoutte et al. 2016).

This overall problem has led the UN Special Rapporteur to recommend an immediate moratorium on the global sale and transfer of the technology from the surveillance industry until rigorous human rights safeguards are put in place to regulate such practices and to guarantee that states and non-state actors use the technology in legitimate ways.

This position provides a way forward: it acknowledges, on the one hand, the legitimate purposes that surveillance technology may serve, whilst highlighting, on the other, that these purposes can only be pursued once the regulatory framework ensures sufficient safeguards.

Concluding observations

On a fundamental ethical level, intensive use of facial recognition for surveillance purposes can change society at large. Biometric data is data about us, but it does not comprise the entirety of our being. This distinction risks being lost when states control and regulate populations through classification or through extensive and constant identification in public spaces. Consequently, the intensity and ubiquity of biometric surveillance risk fundamentally altering the nature of the public sphere, creating monumental changes in society that are not easily mitigated. Such changes need to be identified and addressed before the technology is deployed any further.

References

Akhtar, M. (2019). Police use of facial recognition technology and the right to privacy and data protection in Europe. Nordic Journal of Law & Social Research, 9(2019), 325-344.
Bradford, B., Yesberg, J. A., Jackson, J., & Dawson, P. (2020). Live facial recognition: Trust and legitimacy as predictors of public support for police use of new technology. The British Journal of Criminology, azaa032. https://doi.org/10.1093/bjc/azaa032
Lagoutte, S., Gammeltoft-Hansen, T., & Cerone, J. (2016). Tracing the roles of soft law in human rights. Oxford University Press.
Murray, D. (2020). Using human rights law to inform states' decisions to deploy AI. AJIL Unbound, 114, 158-162. doi:10.1017/aju.2020.30