The following is an excerpt from “About-Face: Examining Amazon’s Shifting Story on Facial Recognition Accuracy” by Jake Laperruque
People shouldn’t have to worry that police are going to improperly investigate or arrest them because a poorly designed computer system misidentified them, but facial recognition surveillance could soon make that risk a reality. And, as detailed in the timeline below, over the last ten months, Amazon, a major vendor of this technology to law enforcement, has exacerbated that risk by putting out inconsistent information on the “confidence threshold,” a key means of determining the accuracy of matches produced by facial recognition systems.
It’s time to set the record straight on how improper use of confidence thresholds by law enforcement could increase the frequency of misidentification, and how Amazon’s shifting story has obfuscated the very real risks present in the technology.
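To make the stakes of the confidence threshold concrete, here is a minimal, purely hypothetical sketch (not Amazon’s actual system or API; all names and numbers are invented for illustration) of how a facial recognition search filters candidate matches by confidence. A permissive threshold surfaces weak matches that a stricter threshold would discard, which is why the threshold a police department actually uses matters so much.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str          # person the gallery photo belongs to
    confidence: float  # system's confidence (0-100) that this is the same face

def filter_matches(candidates, threshold):
    """Return only candidates at or above the confidence threshold,
    strongest matches first."""
    return sorted(
        (c for c in candidates if c.confidence >= threshold),
        key=lambda c: c.confidence,
        reverse=True,
    )

# Hypothetical search results for one probe image.
results = [
    Candidate("Person A", 99.1),
    Candidate("Person B", 87.4),
    Candidate("Person C", 81.0),
]

# A permissive threshold returns all three "matches," including weak ones...
print(filter_matches(results, threshold=80))
# ...while a strict threshold keeps only the strongest candidate.
print(filter_matches(results, threshold=99))
```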
The development and spread of facial recognition continue to outpace meaningful oversight of law enforcement’s use of the technology, and congressional inquiries about misidentification risks have gone unanswered. The use of facial recognition technology by law enforcement, particularly without proper checks, presents a variety of threats to civil rights and civil liberties, including free speech, equal protection, due process, and privacy, as discussed in a recent report from the Task Force on Facial Recognition Surveillance of The Constitution Project at the Project On Government Oversight (POGO). These threats are of immediate importance: law enforcement agencies at the federal, state, and local levels already use facial recognition. The FBI oversees a massive program that conducts an average of over 4,000 facial recognition scans per month. As POGO reported in The Daily Beast, Amazon pitched its facial recognition technology to Immigration and Customs Enforcement last summer.
You can read the rest of the story here: About-Face