January 14, 2022

To the White House Office of Science and Technology Policy:

We are writing on behalf of the Digital Welfare State & Human Rights Project, Center for Human Rights and Global Justice (CHRGJ), NYU School of Law,1 and the Institute for Law, Innovation & Technology (iLIT), Temple University, Beasley School of Law,2 as well as a group of international legal experts and civil society representatives with extensive experience studying the impacts of biometric technologies.3

We welcome the focus within the Bill of Rights for an AI-Powered World (“AI Bill of Rights”) initiative on human and civil rights and on the impacts of biometric technologies.4 While industry has long pushed for ethical principles, this initiative is an opportunity to protect rights through binding regulation, an essential step given the existential threats such technologies pose to human rights, democracy, and the rule of law. OSTP should reflect both on the substance of rights and on potential barriers to their enforcement. This includes striving to distinguish, as many new technology developers fail to do, between the need for innovation, new laws, and new rights, and the need to fix what is broken in existing laws, rules, policies, practices, and institutions.

This response provides international and comparative information to inform OSTP’s understanding of the social, economic, and political impacts of biometric technologies,5 in both research and regulation. Biometrics fuel automation globally,6 often at an accelerated and reckless pace, and these concerns transcend political and geographic boundaries. Other powerful political actors, perceived as both peers and competitors, are attempting to understand and regulate this area. This is an opportunity for the United States to be a world leader in ensuring that innovation is pursued in a way that safeguards human rights, both at home and abroad.

While we look forward to a consultative and transparent process for the AI Bill of Rights, we also note that the speed with which such technologies are being deployed requires urgent action. OSTP should work to establish immediate checks on the deployment of some of the most high-risk and contested tools, including a moratorium on mandatory use in critical sectors such as health, education, and welfare, to allow time and space for democratic oversight before further intractable harms emerge. Our complete recommendations can be found in Section V.

I. The need for a comprehensive federal government response

There is already significant evidence that the use of biometric identification in the United States can lead to harm, disproportionately impacting communities that already face discrimination on the basis of, inter alia, race, sex, and national origin. For example, facial recognition technology disproportionately misidentifies people of color; its use in law enforcement thus perpetuates racial bias, false arrests, and police brutality.7 Moreover, the Department of Homeland Security’s (DHS) transnational network of biometric records, tracking, and automated profiling consistently evades scrutiny, yet shows evidence of arbitrary, discriminatory, and harmful practices.8

Despite evidence of the harms of biometric technologies, regulation is woefully lacking,9 with the exception of some cities and states.10 A significant part of the population is not covered by this patchwork of prohibitions,11 and while litigation and local regulation provide some oversight, the federal government and its contractors are not held accountable even to these inadequate standards.12 The absence of, for instance, guidance for the development and use of AI by the federal government and its agencies, as well as of common binding standards for private actors, risks perpetuating fragmented and insufficient rights protection. Further, the federal government has a vital role to play in regulating all biometric technologies, including those that have been in place for decades, such as fingerprint scanning in the law enforcement and immigration contexts, as well as the extraterritorial application of technologies developed, produced, sold, and promoted by U.S. government agencies and corporations.

1 The Digital Welfare State and Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law aims to investigate systems of social protection and assistance in countries worldwide that are increasingly driven by digital data and technologies. From NYU, Katelyn Cioffi (katelyn.cioffi@nyu.edu), Victoria Adelmant (victoria.adelmant@law.nyu.edu), and Christiaan van Veen (cvv221@nyu.edu) contributed to this response.

2 The Temple University Institute for Law, Innovation & Technology pursues research, instruction, and advocacy with a mission to deliver equity and inform new approaches to innovation in the public interest. Contributors: Laura Bingham (laura.bingham@temple.edu), Ed DeLuca (edward.deluca@temple.edu), Sarbjot Kaur Dhillon (sarbjotkd@temple.edu), and Bianca Evans (bianca.evans@temple.edu).

3 This response benefited from invaluable input from a group of international experts with deep knowledge of the impact of AI and biometric identification technologies on human rights, including Gautam Bhatia, Yussuf Bashir (Haki na Sheria Initiative), Olga Cronin (Irish Council for Civil Liberties), Reetika Khera, Matthew McNaughton (Slashroots), Grace Mutung’u, Usha Ramanathan, and Anand Venkatanarayanan.

4 Eric Lander & Alondra Nelson, Americans Need a Bill of Rights for an AI-Powered World, WIRED, Aug. 10, 2021, https://www.wired.com/story/opinion-bill-of-rights-artificial-intelligence/.

5 Rashida Richardson & Amba Kak, Suspect Development Systems: Databasing Marginality and Enforcing Discipline, UNIV. MICH. J. L. REF., Vol. 55 (forthcoming), https://ssrn.com/abstract=3868392 (highlighting “counterproductive siloes between the Global South and Global North”).

6 Id.