iLIT & NYU Response to White House AI Bill of Rights Initiative Draws on Global Practice

Response to Request for Information from the White House Office of Science and Technology Policy on a Bill of Rights for an AI-Powered World

We are writing on behalf of the Digital Welfare State & Human Rights Project, Center for Human Rights and Global Justice (CHRGJ), NYU School of Law,1 and the Institute for Law, Innovation & Technology (iLIT), Temple University, Beasley School of Law,2 as well as a group of international legal experts and civil society representatives with extensive experience studying the impacts of biometric technologies.3

1 The Digital Welfare State and Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law aims to investigate systems of social protection and assistance in countries worldwide that are increasingly driven by digital data and technologies. From NYU, Katelyn Cioffi, Victoria Adelmant, and Christiaan van Veen contributed to this response.
2 The Temple University Institute for Law, Innovation & Technology pursues research, instruction, and advocacy with a mission to deliver equity and inform new approaches to innovation in the public interest. Contributors: Laura Bingham, Ed DeLuca, Sarbjot Kaur Dhillon, and Bianca Evans.
3 This response benefited from invaluable input from a group of international experts with deep knowledge of the impact of AI and biometric identification technologies on human rights, including Gautam Bhatia, Yussuf Bashir (Haki na Sheria Initiative), Olga Cronin (Irish Council for Civil Liberties), Reetika Khera, Matthew McNaughton (Slashroots), Grace Mutung’u, Usha Ramanathan, and Anand Venkatanarayanan.

We welcome the focus on human and civil rights within the Bill of Rights for an AI-Powered World (“AI Bill of Rights”) initiative, as well as its attention to the impacts of biometric technologies.4 Eric Lander & Alondra Nelson, Americans Need a Bill of Rights for an AI-Powered World, WIRED, Aug. 10, 2021.


Whereas industry has long pushed for ethical principles, this initiative is an opportunity to protect rights through binding regulation, an essential step given the existential threats such technologies pose to human rights, democracy, and the rule of law. OSTP should reflect on both the substance of rights and potential barriers to enforcement. This includes striving to distinguish, as many new technology developers fail to do, between the need for innovation, new laws, and new rights, and the need to fix what is broken in existing laws, rules, policies, practices, and institutions.

This response provides international and comparative information to inform OSTP’s understanding of the social, economic, and political impacts of biometric technologies.5 Rashida Richardson & Amba Kak, Suspect Development Systems: Databasing Marginality and Enforcing Discipline, UNIV. MICH. J. L. REF., Vol. 55 (forthcoming) (highlighting “counterproductive siloes between the Global South and Global North” in research and regulation). Biometrics fuel automation globally,6 Id. often at an accelerated, reckless pace, and these concerns transcend both political and geographic boundaries. Other powerful political actors, perceived as both peers and competitors, are attempting to understand and regulate in this area. This is an opportunity for the United States to be a world leader in ensuring that innovation is pursued in a way that safeguards human rights, both at home and abroad.

While we look forward to a consultative and transparent process for the AI Bill of Rights, we also note that the speed with which such technologies are being deployed requires urgent action. OSTP should work to establish immediate checks on the deployment of some of the most high-risk and contested tools, including an immediate moratorium on mandatory use in critical sectors such as health, education, and welfare, allowing time and space for democratic oversight before further intractable harms emerge. Our complete recommendations can be found in Section V. 

I. The need for a comprehensive federal government response 

There is already significant evidence that use of biometric identification in the United States can lead to harm, disproportionately impacting communities already discriminated against on the basis of, inter alia, race, sex, and national origin. For example, facial recognition technology disproportionately misidentifies people of color; its use in law enforcement thus perpetuates racial bias, false arrests, and police brutality.7 See Joy Adowaa Buolamwini, Gender shades: intersectional phenotypic and demographic evaluation of face datasets and gender classifiers, 2017; Patrick Grother, Mei Ngan & Kayee Hanaoka, Face recognition vendor test part 3: demographic effects, NIST IR 8280, 2019. Meanwhile, the Department of Homeland Security’s (DHS) transnational network of biometric records, tracking, and automated profiling consistently evades scrutiny, yet shows evidence of arbitrary, discriminatory, and harmful practices.8 Ryan Calo & Danielle K. Citron, The Automated Administrative State: A Crisis of Legitimacy, 70 EMORY L. J. 797, 830, 2021 (finding that no-fly algorithms are unable to distinguish names, and that rules are not disclosed under executive and state secrets privileges); Sam Biddle & Maryam Saleh, Little-Known Federal Software Can Trigger Revocation of Citizenship, INTERCEPT, Aug. 25, 2021; Richardson & Kak, supra note 5.

Despite evidence of the harms of biometric technologies, regulation is woefully lacking,9 Todd Feathers, Why It’s So Hard to Regulate Algorithms, MARKUP, Jan. 4, 2022. with the exception of some cities and states.10 Facial recognition has been banned or restricted across many cities and several states: see Fight for the Future, Ban Facial Recognition Map (last visited Jan. 13, 2022). See also No Biometric Barriers to Housing Act of 2021, H.R. 4360, 117th Cong. (2021–22). A significant part of the population is not covered by this patchwork of prohibitions,11 Tom Simonite, Face Recognition is Being Banned—But It’s Still Everywhere, WIRED, Dec. 22, 2021. and while litigation and local regulation provide some oversight, the federal government and its contractors are not held accountable even to these inadequate standards.12 Calo & Citron, supra note 8, at 815 (citing the APA’s restrictions on challenging federal agency action). The absence of, for instance, guidance for development and use of AI by the federal government and its agencies, as well as common binding standards for private actors, risks perpetuating fragmented and insufficient rights protection. Further, the federal government has a vital role to play in regulating all biometric technologies, including those which have been in place for decades, such as fingerprint scanning in the law enforcement and immigration contexts, as well as the extraterritorial application of technologies developed, produced, sold, and promoted by U.S. government agencies and corporations.

Two initial, fundamental concerns with a “Bill of Rights” approach must be highlighted, based on expert comparative legal analysis from several constitutional democracies. First, such an approach, if taken at face value as an effort to amend or modernize textually anchored rights, may exclude structural constitutional questions, such as the separation of powers, the scope and quality of judicial review, and standing. Adoption of biometrics and predictive technologies increasingly concentrates power in executive agencies, inviting structural, slow-onset forms of injury.13 See, e.g., id. at 845; Marielle Debos, Biometrics and the Disciplining of Democracy: Technology, Electoral Politics, and Liberal Interventionism in Chad, DEMOCRATIZATION 1, Mar. 31, 2021. Yet, unlike most constitutional systems, U.S. judicial review of administrative action is structurally divorced from constitutional law and rights protection. Much relevant technology is predicated on “improving” or “modernizing” the administrative state, but “administrative law in the USA is not concerned primarily with basic rights.”14 Vicki C. Jackson & Mark Tushnet (eds.), PROPORTIONALITY: NEW FRONTIERS, NEW CHALLENGES 111, 2017. See also David Engstrom et al., Government By Algorithm: Artificial Intelligence in Federal Administrative Agencies, 2020. Rights, however formulated, therefore risk being effectively unenforceable as executive discretion continues its extra-constitutional expansion.

Second, the absence of a cause of action for indirect discrimination (“disparate impact”) that applies generally across different sectors, to state, local, and federal departments, and to private actors is concerning in this context. In contrast to the proportionality tests applied in the majority of constitutional frameworks,15 Jackson & Tushnet, id. at 111. U.S. constitutional balancing tests are rigid and rules-driven, restricting serious scrutiny to the most obvious and intentional instances of racial discrimination.16 Id. Though disparate impact exists as a theory of liability under some federal civil rights statutes,17 See Texas Dep’t of Hous. & Cmty. Affs. v. Inclusive Communities Project, Inc., 576 U.S. 519 (2015); Cass R. Sunstein, Algorithms, Correcting Biases, 86 SOC. RES.: INT’L Q. 499, 510, 2019 (noting disparate impact liability presents some of the most important issues for challenging algorithmic discrimination in the future). even here the availability of disparate impact claims is at executive agencies’ discretion in their enforcement of federal anti-discrimination laws.18 See, e.g., HUD’s New Rule Paves the Way for Rampant Algorithmic Discrimination in Housing Decisions, NEW AM., Oct. 1, 2020. On disparate impact generally, see Tex. Dep’t of Hous. & Cmty. Affairs v. Inclusive Cmtys. Project, Inc., 576 U.S. 519 (2015); Griggs v. Duke Power Co., 401 U.S. 424 (1971). In most other jurisdictions, and under international treaties like the Convention on the Elimination of All Forms of Racial Discrimination, ratified by the United States in 1994, the term “indirect discrimination” denotes liability for discrimination based on the effect of laws and practices.19 See Audrey Daniel, The Intent Doctrine and CERD: How the United States Fails to Meet Its International Obligations in Racial Discrimination Jurisprudence, 4 DEPAUL J. SOC. JUST. 263, 2011. See also EU Council Directive 2000/43/EC of 29 June 2000 implementing the principle of equal treatment between persons irrespective of racial or ethnic origin, 2000 O.J. (L 180) 22 (requiring EU Member States to prohibit direct and indirect discrimination on the basis of racial or ethnic origin); D.H. and Others v. the Czech Republic, App. No. 57325/00, 47 EUR. H.R. REP. 3, 2008.

The limited availability and lax enforcement of disparate impact liability leave the Equal Protection Clause gutted and insufficient to address AI-enabled biometric discrimination.20 See, e.g., Mark MacCarthy, Fairness in algorithmic decision-making, BROOKINGS, 2019. Without providing for disparate impact claims, rights protections in the U.S. fall beneath international equality standards that the government has pledged to uphold and are not fit for purpose in an automated society that already exhibits structural bias and discrimination.

II. International evidence provides a critical resource 

There is now a significant body of evidence that illuminates both the potential benefits and harms of biometric technologies in different contexts.21 A 2013 survey found at least 230 instances of development programs using biometric identification technology. See Alan Gelb & Julia Clark, Identification for Development: The Biometrics Revolution, SSRN J., 2013. This response reflects input from leading experts who have worked in India, Jamaica, Kenya, and Ireland,22 See supra note 3. where governments, international organizations, and private actors have used a combination of biometrics, data sets, and machine learning to mediate access to fundamental rights.23 Biometrics must be evaluated in conjunction with related algorithms, data sets, and institutional arrangements, sometimes called the ‘biometric assemblage’. See Mirca Madianou, The Biometric Assemblage: Surveillance, Experimentation, Profit, and the Measuring of Refugee Bodies, 20 TELEVISION & NEW MEDIA 581–599, 2019. With cities and states in the United States poised to follow suit,24 See generally Mizue Aizeki & Rashida Richardson, eds., Smart-City Digital ID Projects: Reinforcing Inequality and Increasing Surveillance through Corporate “Solutions”, Dec. 2021. this research provides an invaluable resource, allowing for proactive actions that anticipate and mitigate known harms.

Most critically, the evidence now extends beyond frequently raised concerns about surveillance and privacy in the context of law enforcement and national security to encompass concerns about social rights such as health, social security, education,25 Sally Weale, ICO to Step in After Schools use Facial Recognition to Speed up Lunch Queue, GUARDIAN, Oct. 18, 2021. housing, and employment.26 See, e.g., Center for Human Rights and Global Justice [CHRGJ] et al., Chased Away and Left to Die: How a National Security Approach to Uganda’s National Digital ID Has Led to Wholesale Exclusion of Women and Older Persons, 2021; Karthik Muralidharan et al., Identity Verification Standards in Welfare Programs: Experimental Evidence from India, NBER WORKING PAPER SERIES 26744, 2020; Jean Drèze, There is an urgent need for safeguards against unfair discontinuation of social benefits, INDIAN EXPRESS, Apr. 20, 2021; Reetika Khera, ed., Dissent on Aadhaar: Big Data Meets Big Brother, 2018.

A recurring finding is that biometrics have potential to generate and exacerbate patterns of social exclusion, as well as direct and indirect discrimination. These technologies thus increasingly affect access, availability, affordability, and quality of fundamental public services. 

A. How do AI and biometric technologies generate exclusion and discrimination? 

Exclusion can be caused by innate problems with biometric technology. While much recent critique has focused on facial recognition and mass surveillance, the difficulty of mitigating the harmful effects of “lower-tech” solutions27 Shoshana Amielle Magnet, Criminalizing Poverty: Adding Biometrics to Welfare, WHEN BIOMETRICS FAIL: GENDER, RACE, AND THE TECHNOLOGY OF IDENTITY 23, 2011. such as fingerprinting should be both a warning and an opportunity for learning as “novel,” more advanced technologies emerge. As with most biometrics, specific notions of “normality” are built into fingerprinting systems; “hand scanners have particular sizes and shapes, with designated places to put the fingers,” and anyone falling outside of this “norm” will struggle to authenticate.28 Sanneke Kloppenburg & Irma van der Ploeg, Securing Identities: Biometric Technologies and the Enactment of Human Bodily Difference, 29(1) SCI. AS CULTURE 57, 62, 2020. Failure rates are significantly higher among people of color, as systems are “infrastructurally calibrated to whiteness.”29 See Shoshana Magnet, When Biometrics Fail: Gender, Race, and the Technology of Identity, 2011, at 49; Simone Browne, Dark Matters: On the Surveillance of Blackness, 2015; Grother et al., supra note 7. Further, as biometric systems are probabilistic and are often designed to tolerate significant exclusion errors, relying on them to definitively identify or verify individuals will inevitably lead to exclusion.30 Jeremy Wickins, The Ethics of Biometrics: the Risk of Social Exclusion from the Widespread use of Electronic Identification, 13 SCI & ENGINEERING ETHICS 45–54, 2007.

Moreover, while laboratory-based testing of biometric technologies might show relatively high success rates, as was shown in a challenge to a nationwide digital ID system reliant on fingerprint authentication in Kenya, “the real-world data is very different.”31 Nubian Rights Forum & 2 others v. Attorney General & 6 others, 2020, eKLR [Kenya], at para. 37. Environmental conditions, including humidity, temperature, and light exposure, affect the quality of biometric data capture.32 See, e.g., UNITED KINGDOM GOVERNMENT OFFICE FOR SCIENCE, BIOMETRICS: A GUIDE, June 15, 2018; Ann Livingston et al., Upholding the Rights of Children: Special Considerations on the Use of Biometrics in Identity Systems, 2019. Biometrics are not immutable: they can alter over time and degrade with age. Capture and authentication often depend on fragile, expensive hardware, as well as quality internet and electricity. Thus, digital divides, which map onto other disadvantages, can be exacerbated through AI-enabled biometrics.33 Silvia Masiero, Biometric Infrastructures and the Indian Public Distribution System, S. ASIA MULTIDISCIPLINARY ACAD. J. 11 (2020). The digital divide remains a significant issue in the United States; see Emily A. Vogels, Digital Divide Persists Even As Americans With Lower Incomes Make Gains In Tech Adoption, 2021.
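The arithmetic behind this point is stark: at population scale, even small error rates translate into very large absolute numbers of people excluded. The sketch below is purely illustrative; the enrollment figure echoes the rough scale of a system like Aadhaar, and the false-rejection rates are hypothetical assumptions, not measured values for any deployed system.

```python
# Illustrative only: the false-rejection rates below are hypothetical
# assumptions chosen for demonstration, not measured figures for any
# deployed biometric system.

def expected_exclusions(enrolled: int, false_rejection_rate: float) -> int:
    """Expected number of people wrongly rejected in one authentication pass."""
    return round(enrolled * false_rejection_rate)

# At the scale of a 1.2-billion-person system, a 1% false-rejection rate
# wrongly rejects roughly 12 million people per authentication pass:
print(expected_exclusions(1_200_000_000, 0.01))   # 12000000
# Even a seemingly excellent 0.1% rate still excludes over a million:
print(expected_exclusions(1_200_000_000, 0.001))  # 1200000
```

Because essential services often require repeated authentication, such exclusion is not a one-off event; each authentication pass re-exposes the same users, disproportionately those at the margins, to failure.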

Consequently, when biometrics are yoked to essential services such as social security or health care, marginalization and exclusion may arise. This in turn results in decreased access to numerous fundamental entitlements, damaging physical and mental health and undermining dignity. This has been extensively documented in India, home to the world’s largest biometric identification system, Aadhaar.34 Over 1.2 billion people have enrolled in the Aadhaar system. Swetha Totapelly et al., State of Aadhaar Report, 2019.

Persistent failures to authenticate fingerprints through Aadhaar at the point of service for welfare programs, including food rations relied on by four-fifths of Indian families, have resulted in numerous deaths by starvation, families cut off from rations for weeks, and a system that increasingly punishes the poor.35 See, e.g., Reetika Khera, These digital IDs have cost people their privacy — and their lives, WASH. POST, Aug. 9, 2018; India’s High-Tech Governance Risks Leaving Behind its Poorest Citizens, ECONOMIST, Oct. 16, 2021; Ursula Rao, Biometric Bodies, Or How to Make Electronic Fingerprinting Work in India, 24 BODY & SOC. 68–94, 2018. In Uganda, card readers were unable to read older persons’ fingerprints or match their biometric profiles to an accurate birth date in the national ID database. Although eligible for certain social protection programs, such as cash transfers, older persons were consistently denied access to life-saving grants because of their inability to identify and authenticate biometrically.36 CHRGJ et al., supra note 26, at 31–33.

Even where such technologies operate as intended, their use can facilitate other forms of indirect discrimination. They can sit atop existing barriers while introducing further requirements: access becomes contingent on digital literacy, specific forms of personal identification,37 See Vivek Maru et al., Digital IDs Make Systemic Bias Worse, WIRED, Feb. 5, 2020. reliable access to basic ICT services, or fees related to travel, administration, and lost time spent navigating the system. For instance, in India, the use of Aadhaar in public services requires networks of data operators who continuously collect and verify biometric data. Without oversight, such operators become bureaucratic bottlenecks, sites of harassment and intimidation, and an insurmountable barrier to accessing services.38 Vyom Anil & Jean Drèze, Without Aadhaar, Without Identity, INDIAN EXPRESS, July 5, 2021.

B. Civil death and other cumulative, systemic impacts of biometric systems 

Taken individually, instances of exclusion may already constitute indirect discrimination. But the persistence of biometric information also means that the effects of exclusion replicate quickly, locking individuals out of multiple services. In Kenya, the United Nations High Commissioner for Refugees (UNHCR) collected biometric information to distribute food aid during a period of famine. Consequently, many Kenyans who were registered as children are victims of ‘double registration’: because their biometric data appears in a refugee database, the government denies them national ID cards, restricting access to services including employment, health care, and social security.39 Haki na Sheria Initiative, Biometric Purgatory: How the Double Registration of Vulnerable Kenyan Citizens in the UNHCR Database Left Them at Risk of Statelessness, 2021. In Ireland, the Public Services Card (PSC), which involves the collection of biometric data, rapidly expanded beyond its original role in the welfare system, with other government agencies requiring it as the sole form of ID.40 DPC welcomes resolution of proceedings relating to the Public Services Card, Dec. 10, 2021. This expansion was introduced without transparency, democratic debate, or adequate review of its necessity and proportionality. The use of biometrics can therefore quickly become de facto mandatory, even when not formally required.

Any failure to authenticate or to ensure that data is consistent across different systems can therefore lead to “civil death,”41 Usha Ramanathan, Aadhaar is Like Drone Warfare Versus Hand to Hand Combat, Profiling Becomes All That More Easier, BUSINESS STANDARD, Apr. 1, 2016. where an individual is cut off from all fundamental services. This is the case in Pakistan, where the government has unilaterally blocked certain individuals’ biometric digital IDs, forcing them into a vetting process to ‘prove’ aspects of their identity such as citizenship or gender.42 Alizeh Kohari, Life in Pakistan without a digital ID, CODA STORY, Nov. 3, 2021. In Assam, India, the government recently conducted a mass citizenship verification process,43 Siddhartha Deb, ‘They Are Manufacturing Foreigners’: How India Disenfranchises Muslims, N.Y. TIMES, Sept. 15, 2021. placing approximately 2.7 million people on a ‘doubtful list’ of those whose citizenship is called into question. Many on this list have had their biometric profiles frozen; this means that they cannot use their Aadhaar record to receive health care, access food rations, get a driver’s license, or register a SIM card.44 Two Years Since NRC, Lakhs Still Remain in Limbo, HINDU, Aug. 31, 2021.

This civil death phenomenon is especially concerning since the use of biometric technologies can coincide with the entrenchment of structural racism and discrimination. While the broad use of these technologies in public service delivery will ultimately affect everyone, at present the harms disproportionately impact already marginalized communities; across many biometric systems, those unable to identify and verify are often members of poor, rural communities, ethnic and religious minorities, women, and older persons.45 Totapelly et al., supra note 34.

Widespread deployment may thus exacerbate and deepen structural and institutional patterns of harm.46 Virginia Eubanks, Automating Inequality, 9, 2018.

Beyond exclusion, the extensive use of biometrics can also fundamentally affect democracy, the rule of law, accountability, and transparency,47 See Séverine Awenengo Dalberto & Richard Banégas (eds.), Identification and Citizenship in Africa: Biometrics, the Documentary State and Bureaucratic Writings of the Self. while entrenching private sector control over public functions.48 See generally Linnet Taylor, Public Actors Without Public Values: Legitimacy, Domination and the Regulation of the Technology Sector, PHIL. & TECH. (2021); Julie Cohen, Between Truth and Power: The Legal Constructions of Informational Capitalism, 2019. After alleged election rigging in the 2017 Kenyan presidential election, government officials were unable to comply with judicial orders to grant access to election results data tied to a biometric voter registration system, as the vendor’s servers were in France.49 Ken Flottman, Kenya’s IEBC announced 18 months ago that it would finally open its vote tally servers to public, but has failed to do so, AFRICOMMONS, Aug. 29, 2020; Duncan Miriri, Kenyan opposition leader targets Safaricom staff over election, REUTERS, Sept. 27, 2017; Dalberto & Banégas (eds.), supra note 47. In South Africa, the introduction of biometric technologies into welfare payment systems resulted in one company’s disastrous monopoly while weakening the government’s power to maintain any control over the welfare system.50 See, e.g., Keith Breckenridge, The Global Ambitions of the Biometric Anti-Bank: Net1, Lockin and the Technologies of African Financialisation, 33 INT’L REV. OF APP. ECON. 93–118, 2019; Robyn Foley & Mark Swilling, How One Word Can Change the Game: Case Study of State Capture and the South African Social Security Agency, Stellenbosch: State Capacity Research Project, 2018. Use of biometrics can thus augment powerful market-based interests that do not reflect human rights and democratic principles.51 Amba Kak, ed., Regulating Biometrics: Global Approaches and Urgent Questions, 2020.

III. Comparative efforts to mitigate the exclusionary impact of biometric identification 

While a data protection and privacy framework should be seen as a necessary condition for safeguarding human rights in the context of biometrics, such measures are not sufficient to combat broader effects. Regulatory efforts that include neither specific remedies for exclusion nor accessible accountability mechanisms make it extremely difficult to safeguard rights. For instance, India’s Aadhaar Act stipulates that children shall not be denied access to any subsidy, benefit, or service as a result of failed biometric authentication, but does not provide any specific cause of action or remedy.52 Id. Most efforts to regulate the use of biometrics, and AI more broadly, have also failed to adequately engage affected communities in a meaningful, continuous way.53 Christopher Wilson, Public Engagement and AI: A Values Analysis of National Strategies, GOV’T INFO. Q. 101652, 2021. In Ireland, this was a core complaint about the expansion of the PSC’s scope.

Blocked by the lack of remedies, civil society organizations have resorted to litigation to challenge biometric identification systems. A series of such court cases highlights impacts on equality, dignity, autonomy, health, and social security, and demonstrates some of the ways in which legal frameworks and norms can be applied to biometric technologies.54 See Nubian Rights Forum, supra note 31; Justice K.S. Puttaswamy (Retd.) and Anr. vs. Union of India and Ors., Writ Petition (Civil) No. 494 of 2012 and Connected Matters [India] (26 September 2018); Press Release: Civil Society Drags Government to Court Over Requirement to Have National ID Card Before Receiving Covid-19 Vaccine (2021). However, litigation is not an ideal mechanism, and the challenges encountered in litigation reflect broader difficulties in regulating AI.55 See, e.g., Reetika Khera, “The poor are left to themselves,” THE HINDU, Sept. 28, 2018. For instance, biometric identification projects often involve proprietary technology and are implemented quickly and with little transparency; litigants therefore face significant barriers to accessing the information necessary to challenge these systems. Judicial timelines also mean that harms may continue, and often replicate and deepen, while awaiting review.

Pushback from civil society and affected communities has also demonstrated the limitations of a purely individual rights framework that does not sufficiently recognize disparate impact; many of the impacts of biometric technology are structural, dispersed, and affect groups collectively. For instance, the legal challenge to the national ID system in Kenya required individual plaintiffs to show that they, as members of a particular group, had been directly disadvantaged through the disparate impacts of biometric technologies.56 Amnesty International, Ban the Scan NYC (last visited Jan. 13, 2022). See also Section II. Similar issues have emerged in the United States,57 Mutale Nkonde, Automated Anti-Blackness: Facial Recognition in Brooklyn, New York, HARV. KENNEDY SCH. J. AFR. AMER. POL., 2019–20; Lola Fadulu, Facial Recognition Technology in Public Housing Prompts Backlash, N.Y. TIMES, Sept. 24, 2019. where victims of biased surveillance systems are left without constitutional protections.58 United States v. Tuggle, 4 F. 4th 505, 513 (7th Cir. 2021). Thus, it is crucial to establish definitions of group harms and indirect discrimination, as well as evidentiary standards for demonstrating disparate impact.

Each application of biometric technology deserves its own legal assessment of harm, as well as of its legitimacy, necessity, and proportionality. However, some have concluded that, on the evidence, such technologies pose such serious risks to human rights and democracy that the potential benefits are outweighed, necessitating a ban on the sale and use of these technologies.59 UN High Commissioner for Human Rights, The Right to Privacy in the Digital Age, Sept. 13, 2021, A/HRC/48/31; Amnesty International and more than 170 organisations call for a ban on biometric surveillance, June 7, 2021. Any steps taken by the U.S. government should seriously consider the gravity of these concerns.

IV. An international and comparative perspective is also necessary to reflect the global environment in which such technologies are being developed, used, and regulated 

The United States plays a major role in the development and uptake of biometric technologies globally, through foreign investment, foreign policy, and development aid, as well as the activities of U.S. companies. The U.S. government has participated in mandating the creation of biometric identification systems, such as through UN Security Council Resolution 2396, which requires states to “implement systems to collect biometric data” in order to “properly identify terrorists.”60 United Nations Security Council (UNSC) Res. 2396, Dec. 21, 2017, UN Doc S/RES/2396. See also Krisztina Huszti-Orbán & Fionnuala Ní Aoláin, Use of Biometric Data to Identify Terrorists: Best Practice or Risky Business?, 2020. USAID provides active support for foreign governments’ collection of biometric data, while the World Bank finances the development of biometric systems in dozens of countries.61 US Agency for International Development (USAID), Introducing Biometric Data at Refugee Settlements in Uganda, 2019; USAID, Good Governance & Public Administration Strengthening Project (GGPAS), 2021; USAID pilots biometrics to track youth health in Kenya, Identity Week, 2015. U.S. government actors and companies influence critical decisions in standard-setting bodies about specifications for biometric data collection devices and biometric data analysis.62 Joseph N. Pato and Lynette I. Millett, The Biometrics Standards Landscape, National Research Council (US) Whither Biometrics Committee, 2010. Further, the Taliban’s seizure of U.S. military biometric devices and data in Afghanistan demonstrates the immense ramifications of U.S. actions abroad.63 Ken Klippenstein & Sara Sirota, The Taliban Have Seized U.S. Military Biometrics Devices, INTERCEPT, Aug. 18, 2021; Eileen Guo & Hikmat Noori, This is the Real Story of the Afghan Biometric Databases Abandoned to the Taliban, MIT TECHNOLOGY REVIEW, Aug. 30, 2021; Verónica Arroyo & Donna Wentworth, We Need to Talk About Digital ID: Why the World Bank Must Recognize the Harm in Afghanistan and Beyond, ACCESS NOW, Oct. 14, 2021. The widespread use of biometric recognition at entry points along the Mexico border64 See, e.g., UN Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, Racial and Xenophobic Discrimination, Emerging Digital Technologies, and Border and Immigration Enforcement, 2020, UN Doc A/75/590, para. 47; Immigrant Defense Project et al., Factsheet: Freeze Expansion of the Hart Defense, Apr. 2021; Todd Miller, More than a Wall, 2, 2019. further influences other governments around the world to follow suit.65 Petra Molnar, Technological Testing Grounds: Migration Management Experiments and Reflections from the Ground Up, 2020.

Meanwhile, the United States is one of the largest exporters of biometric surveillance technologies.66 Steve Feldstein, The Global Expansion of AI Surveillance, 2019; Valentin Weber and Vasilis Ververis, China’s Surveillance State: A Global Project, 2021; Liza Lin & Josh Chin, U.S. Tech Companies Prop Up China’s Vast Surveillance Network, WALL ST. J., Nov. 26, 2019, tech-companies-prop-up-chinas-vast-surveillance-network-11574786846. U.S. company L1 Identity Solutions was instrumental in the introduction of India’s Aadhaar system, for example;67 Unique Identification Authority of India, Device Drivers; and U.S. companies such as Apple have also normalized the everyday use of biometric authentication.68 Face Biometrics Month: The Apple Effect and the Mainstreaming of Face Authentication, FindBiometrics, 2019, biometrics-month-the-mainstreaming-of-face-authentication-611140/. These companies have been largely unfettered by legal or regulatory constraints in their experimentation with biometrics.69 Kate Crawford et al., AI Now 2019 Report, 2019. In large part, such initiatives have been paused only after public backlash and coordinated advocacy have forced companies to change course.70 See Rebecca Heilweil, Big tech companies back away from selling facial recognition to police. That’s progress., VOX, June 10, 2020. Meta’s recent decision to shut down its facial recognition system and delete facial templates was explicitly driven by “societal concerns,”71 Jerome Pesenti, An Update On Our Use of Face Recognition, META, Nov. 2021, but this came after Meta had been unconstrained in creating a database of over one billion faces; the company retains its DeepFace software and can resume use at any point.72 Rebecca Heilweil, Facebook is backing away from facial recognition. Meta isn’t., VOX, Nov. 3, 2021, facial-recognition-meta. 
Further, existing models of self-regulation are insufficient and do not provide meaningful constraints on the development and deployment of biometric technologies.73 See Ben Wagner, Ethics As An Escape From Regulation: From “Ethics-Washing” To Ethics-Stopping?, in Emre Bayamlioğlu et al. (eds.), Being Profiled: Cogitas Ergo Sum: 10 Years of Profiling the European Citizen, 2018, sum.pdf. 

Reluctance to constrain U.S. technology companies’ advancements has been driven by a dominant narrative of an “AI arms race” with China.74 See Crawford et al., supra note 69; Daniel F. Runde, Romina Bandura, & Sundar Ramanujam, The United States Has an Opportunity to Lead in Digital Development, 2021; Amanda Macias & Kayla Tausche, U.S. Needs to Work with Europe to Slow China’s Innovation Rate, Raimondo Says, CNBC, Sept. 28, 2021, europe-to-slow-chinas-innovation-rate-raimondo-says.html. The National Security Commission on Artificial Intelligence (NSCAI) notes that China is setting a “chilling precedent.”75 National Security Commission on Artificial Intelligence [NSCAI], Final Report, 2021, Digital-1.pdf. Indeed, shocking reports detail the Chinese State’s use of biometrics to facilitate surveillance and persecution of Uyghurs in Xinjiang.76 See Maya Wang, The Robots Are Watching Us, Human Rights Watch, 2020; Olivia Shen, AI Dreams and Authoritarian Nightmares, in Jane Golley et al. (eds.), China Story Yearbook: China Dreams, 2020.

Yet U.S. government officials lament that technology companies in China can develop AI aided by unconstrained biometric data collection, claiming it is “not a level playing field.”77 See Macias & Tausche, supra note 74. This furthers the idea that “global AI leadership” requires low regulation, private sector access to troves of personal data, and expansive security use.78 Crawford et al., supra note 69. The NSCAI urges that the United States “must win the AI competition”79 NSCAI, supra note 75. and identifies, somewhat uncritically, “surveillance,” “clearing of regulatory barriers,” and “enormous government stores of data” as factors enabling China “to leap ahead.”80 National Security Commission on Artificial Intelligence [NSCAI], Chinese Tech Landscape Overview: NSCAI Presentation, May 2019. See also Ryan Fedasiuk, Chinese Perspectives on AI and Future Military Capabilities (Center for Security and Emerging Technology, 2020). Viewing the development of AI-enabled biometric technologies through this competitive, national security paradigm risks sacrificing law, regulation, and human rights in the effort to “win.”81 Crawford et al., supra note 69; Kelsey Piper, Why an AI Arms Race with China Would be Bad for Humanity, VOX, Aug. 10, 2019. The U.S. government must not allow a perceived AI arms race to dictate its approach to regulating biometric technologies. 

Further, an arms race narrative simplifies complex realities around regulation in China itself.82 Maya Wang, China’s Techno-Authoritarianism Has Gone Global, FOREIGN AFFAIRS, Apr. 8, 2021, 04-08/chinas-techno-authoritarianism-has-gone-global. Growing public controversy around facial recognition, combined with tensions with Chinese Big Tech companies, has led the Chinese government to introduce regulations, including on the use of biometric technologies.83 China Rebukes 43 Apps including Tencent’s WeChat for Breaking Data Transfer Rules, REUTERS, Aug. 18, 2021; Josh Horwitz, China Steps up Tech Scrutiny with Rules over Unfair Competition, Critical Data, REUTERS, Aug. 17, 2021, draft-rules-banning-unfair-competition-internet-sector-2021-08-17.

The Supreme People’s Court of China has issued regulations requiring companies to obtain consent before collecting and processing facial biometric data.84 Supreme People’s Court of China, Provisions on Relevant Issues on the Application of Laws in Hearing Civil Cases Related to the Application of Facial Recognition Technology in Processing Personal Information, July 28, 2021. See also Ananya Agrawal, China Supreme Court Issues Regulations Against Misuse of Facial Recognition Technology, JURIST, Aug. 2021, misuse-of-facial-recognition-technology/. China’s recent Personal Information Protection Law mandates data minimization and user consent across the private sector when processing “sensitive personal information,” including biometric data. China appears to be taking seriously the need to regulate biometric technologies. 

Meanwhile, the European Union (EU) is claiming a leadership role in regulating biometric technologies and protecting human rights. For instance, the EU seeks to prohibit outright some uses of mass biometric surveillance by law enforcement.85 EU Member States are also taking steps to curb biometric technologies. See Koalitionsvertrag zwischen SPD, Bündnis 90/Die Grünen und FDP, Mehr Fortschritt Wagen: Bündnis für Freiheit, Gerechtigkeit und Nachhaltigkeit [Coalition Agreement between the SPD, Alliance 90/The Greens, and the FDP, Daring More Progress: Alliance for Freedom, Justice, and Sustainability], 2021, GRUENE-FDP-2021-2025.pdf.

The United States should view such attempts not as a ceiling, but rather as a challenge to set standards even higher. Indeed, the EU’s proposed AI Act has been critiqued for its overly broad exceptions; for unnecessarily restricting the prohibition of remote biometric identification to law enforcement; and for applying prohibitions only to “real-time” uses rather than continuing or post-hoc uses.86 See An EU Artificial Intelligence Act for Fundamental Rights: A Civil Society Statement, Nov. 30, 2021, content/uploads/2021/12/Political-statement-on-AI-Act.pdf [hereinafter EU Civil Society Statement]; Nathalie A. Smuha et al., How the EU Can Achieve Legally Trustworthy AI: A Response to the European Commission’s Proposal for an Artificial Intelligence Act, 2021; Michael Veale & Frederik Zuiderveen Borgesius, Demystifying the Draft EU Artificial Intelligence Act, 22 COMP. L. REV. INT. 97, 2021. Further, the EU’s proposed Act gives providers significant discretion to assess the risks of their own technologies;87 Smuha et al., id. it also fails to confer individual rights on those impacted by AI systems, or to provide for effective remedies where harms occur.88 See EU Civil Society Statement, supra note 86; Smuha et al., supra note 86. We encourage OSTP to look to these parallel efforts and strive to go further still. 

The U.S. government must also take account of the far-reaching impacts that its decisions and regulation of U.S. companies already have worldwide: the extraterritorial application of technologies developed, produced, sold, and promoted by U.S. government agencies and U.S. corporations must come into the remit of the AI Bill of Rights. 

V. Recommendations 

The outcome of this RFI and the AI Bill of Rights should be a comprehensive governance framework, including relevant laws, policies, and plans for implementation, which emphasizes human rights, regulatory oversight, and effective enforcement. To achieve this, OSTP should work towards the following recommendations: 

  1. Impose an immediate moratorium in critical sectors: Define, classify, and enact a moratorium on the use of mandatory AI-enabled biometric identification technology.89 Facial Recognition and Biometric Technology Moratorium Act of 2021, S. 2052, 117th Cong., 2021, bill/2052?q=%7B%22search%22%3A%5B%22Facial+Recognition+and+Biometric+Technology+Moratorium+Act+of+2021%22%5D%7D&s=1&r=1. Such identification systems should never be mandatory in critical sectors such as education, welfare benefits programs, and health care, so as to preserve access to fundamental services. 

  2. Invoke legal action to address the indirect and disparate impact of biometrics: Propose and enact legislation that unequivocally applies the disparate impact doctrine, at a minimum in federal equal protection claims regarding the design and use of AI-enabled biometric identification technologies, encompassing their implementation in administering access to public and private services. Such legislation should be designated implementing legislation in line with the ratification of the CERD, affording a private right of action for racially discriminatory effects of the deployment of AI-enabled technologies. 

  3. Engage in further review of the human rights impact of biometrics and the components of different legal and regulatory approaches. This should include, inter alia:

    a. Conduct and make public a comprehensive mapping of all federal systems currently or prospectively using biometric identification, including (1) the kinds of information collected, (2) the legal authority for collection and retention, (3) the purposes for which information is used, (4) how the information flows within public agencies, and (5) the impact of collection, retention, and sharing on rights.

    b. Conduct a comprehensive analysis of other countries’ and regional bodies’ efforts to develop binding legal frameworks to regulate AI-enabled biometric technologies. Distilling key lessons, the U.S. government should go beyond minimal standards to progress the field towards greater recognition and protection of human rights. 

  4. Build a comprehensive legal and regulatory approach that addresses the complex, systemic concerns raised by AI-enabled biometric identification technologies, including:

    a. Commit to adoption of AI-enabled biometrics within administrative agency operations only to the extent that adoption demonstrably furthers the justification for delegated authority. Subject such adoption and use to regular oversight and review. 

    b. Establish clear safeguards for experimentation with these technologies, including but not limited to mandating rights-based impact assessments before a biometric technology can be piloted by the government or the private sector, and requiring a high level of justification, as well as suitable precautions, when such technologies are deployed first on marginalized groups such as migrants or welfare benefit recipients. 

    c. Address both (and distinguish between) public and private use, individual and group rights, and domestic and international use and data-sharing. 

    d. Place meaningful constraints on actions taken abroad. This includes U.S. companies’ operations abroad with regard to marketing, sale, or transfer of biometric data and technologies, as well as the U.S. government’s actions in spheres including, but not limited to, international development, counterterrorism, defense, and migration.

  5. Ensure that any new laws, regulations, and policies are subject to a democratic, transparent, and open process. This should include, inter alia: 

    a. Hold further consultations, conduct proactive outreach to affected communities, and engage with stakeholders outside of the United States. 

    b. Ensure that public education materials and any laws, regulations, and policies are described and written in clear, non-technical, and easily accessible language.


