Racial Bias in Facial Recognition Technology: What City Leaders Should Know

On July 1, the City of San Francisco's ban on facial recognition technology, the first of its kind in the nation, took effect. Passed as part of the city's Stop Secret Surveillance Ordinance, the ban aims to lead with transparency, accountability, and equity. While the city stopped testing facial recognition technology in 2007 and had not used the software in the years leading up to the ban, the legislation is significant because it expands upon action taken by other cities by requiring board of supervisors approval for any law enforcement or city agency use or purchase of new surveillance technologies. It is also the first ordinance of its kind to specifically address facial recognition technology, which has seen increased use and controversy in recent years.

Facial Recognition Technology Could Exacerbate Racial Injustice

Chief among the rationales presented by San Francisco Supervisor Aaron Peskin, who sponsored the legislation, was the bias baked into the way facial recognition software currently works. Research from MIT in 2018 documented that facial recognition software powered by artificial intelligence (AI) consistently makes more errors when identifying women and Black people than it does when identifying white men.

Structural racism has already led to higher enrollment of Black people and other people of color in police databases. Combined with those enrollment rates, these differences in error rates could exacerbate racial injustice, exposing historically disadvantaged populations to increased harm.

During a May 2019 House Committee on Oversight and Government Reform hearing, a witness from the Algorithmic Justice League noted that misidentification of Black people by facial recognition has already led to false arrests. Efforts to reduce implicit bias in government workers cannot address biases once they are embedded in technologies through algorithms.

Researchers Have Raised Government Transparency and Civil Liberties Concerns

Facial recognition technology raises a variety of other issues for cities, chief among them transparency about its use. Unlike previously vetted and authorized technologies such as fingerprinting, this form of biometric surveillance can be conducted secretly, at scale, and without consent.

During a second House Oversight Committee hearing on facial recognition this spring, lawmakers raised concerns that, despite increasing federal use of the technology in airports and by law enforcement, little is known about the accuracy of the systems federal agencies use. Current software testing on visa application photos and mug shots in the U.S. has not required consent. Combined with embedded biases, this lack of transparency threatens marginalized populations.

Collecting information on large groups of people with no connection to an investigation worries constitutional law scholars, privacy experts, and organizers. Immigration and Customs Enforcement, Customs and Border Protection, and other agencies review facial recognition data to identify particular individuals. This data, gathered through agreements with local law enforcement on thousands of individuals who may not even have criminal convictions, keeps track of family members, advocates, lawyers, community members, and others connected with undocumented immigrants.

Local Leaders Can Consider Additional Options to Maintain Transparency

Local leaders weighing the use of facial recognition and other artificial intelligence-enabled surveillance have options for ensuring transparency, appropriate use, and accountability, and can adapt those options as technology changes. Short of outright bans, communities can impose moratoria, establish citizen advisory commissions, adopt clear protocols for technology use, and develop systems for public disclosure of surveillance technology use.

The City of San Francisco will not be the only local government to take action on facial recognition. The City Council of Somerville, Massachusetts, voted on June 27 to ban the use of facial recognition in police investigations and municipal surveillance, and the California cities of Berkeley and Oakland are considering similar bans. While some worry that bans are too drastic, widespread agreement about the lack of standards, accuracy, and bias prevention in facial recognition software is prompting governments at all levels to weigh the risks of using such technology.

The State of Maryland, by contrast, has disclosed little about a statewide facial recognition system that Baltimore police used on protest crowds after the killing of Freddie Gray and the uprising that followed. Concerns over this lack of disclosure have led to public pushes for moratoria on law enforcement use of the software, along with increased regulation, testing, and transparency reporting.

Other cities have taken a different approach, establishing transparency or advisory mechanisms to manage concerns about facial recognition and other surveillance technologies deployed in their communities. The City of Seattle established a citywide set of data privacy principles and a Surveillance Advisory Working Group to address public concerns about the city's use of data it gathers about residents. The City of Oakland has a similar Privacy Advisory Commission.

For tools and resources designed to help local elected leaders build safe places in communities, subscribe to the REAL newsletter.

About the Authors:

Aliza R. Wasserman is the senior associate with NLC's Race, Equity, And Leadership (REAL) department.

Angelina Panettieri is the principal associate for technology and communication in Federal Advocacy at NLC. Follow her on Twitter at @AngelinainDC.