The Use of Facial Recognition Technology in Criminal Investigations
Facial recognition and surveillance technology is being used throughout the nation during criminal investigations. Find out more with our latest article written by our senior law clerk, Sarah!
Introduction
Facial recognition and surveillance technology is being used throughout the nation during criminal investigations. The technology takes a photo of someone and compares it to photos in an existing database in the hope of finding a match. But how this technology goes about gathering its photos has often left the public with an eerie feeling that they are constantly being watched. Volusia County Sheriff Mike Chitwood said that his police force utilizes the technology, but prioritizes doing so in a responsible way. Chitwood announced his office would be entering a year-long contract with Clearview AI, a facial recognition tool, and addressed privacy concerns on Facebook, stating that police were not “scanning faces at the 500 or at the mall, or people walking down the street.” He also stated that a criminal predicate must first be established before someone’s face can be run through the software, and that the tool cannot be used as the basis for an arrest.
Throughout Florida, “officials from nearly every Sheriff’s Office in each county admit that some kind of facial recognition technology is in their metaphorical toolbox,” with most having access to the Face Analysis Comparison & Examination System (FACES). Through FACES, police can search over 33 million faces, including 22 million Florida driver’s license and identification photos and 11 million law enforcement photos. Furthermore, the software allows law enforcement in the state to search the FBI’s database of 24.9 million mug shots. According to The Perpetual Line-Up, a website curated by Georgetown Law’s Center on Privacy and Technology that focuses on unregulated police face recognition in America, Florida police do not need reasonable suspicion to run a search. In addition, the database is searched 8,000 times per month without audits that could put a halt to potential misuse. Many offices that have adopted the Clearview AI software have implemented policies stating it can only be used if there is reasonable suspicion of a crime; however, the lack of auditing of these practices poses a significant problem. Jeramie Scott, Senior Counsel to the Electronic Privacy Information Center, expressed his concern by stating “there needs to be a public vetting with respect to the use of this technology, because it is an extremely powerful surveillance technology.” Scott also noted that Clearview AI is unlike other software in that it includes photos taken from public social media pages in its search.
Strong Opposition
Clearview AI has landed in hot water with social media giants like Facebook, Instagram, Twitter, and LinkedIn, which have written letters to the company asking it to stop pulling images from their sites. In addition, companies like IBM, Microsoft, and Amazon announced in 2020 that they would stop selling facial recognition technology to police departments. They cited privacy concerns, as well as racial disparities, as reasons for their refusal. Chad Marlow, from the American Civil Liberties Union, stated that multiple studies have found that use of such facial recognition technology can lead to misidentification, more often if the person is Black. Clearview AI appears to address the issue of misidentification in its contracts, with its contract with Coral Springs Police stating that the company “makes no guarantee as to the accuracy of its search-identification software [and the program] is neither designed nor intended to be used as a single-source system for establishing the identity of an individual.” For his part, Miami Police Assistant Chief Armando Aguilar has acknowledged that his office is aware of the software’s “algorithm biases, and that [they] build that into the policy to ensure that [their] officers don’t make an arrest based solely [on] recognition identification.” Instead, officers need additional evidence, such as a witness identification or DNA, in order to make an arrest.
The use of software like Clearview AI has led states to push to ban the use of facial recognition technology altogether. In June 2020, Boston’s city council unanimously voted to ban the use of the technology and “prohibit any city official from obtaining facial surveillance by asking for it through third parties.” Ricardo Arroyo and Michelle Wu, the councilors who sponsored the bill, argued that Boston should not be using racially discriminatory technology, citing how a Black man in Detroit was misidentified and arrested because of facial recognition technology. After San Francisco, Boston is the second-largest city in the world to ban facial recognition technology. That same month, a bill was proposed in the United States Senate banning federal law enforcement agencies from using facial recognition technology. The bill would limit the use of facial recognition software by private companies, require that the consent of everyone involved be obtained, and prohibit companies from selling photos or fingerprints. However, the bill does not address the use of facial recognition technology at the local law enforcement level. As local agencies throughout Florida continue to use Clearview AI and other facial recognition software, we must ask ourselves: is the reward truly worth the risk of misidentification?