Last Updated: November 23, 2021
Prompted by the police brutality protests of June 2020, tech giant Amazon decided to temporarily halt police use of its face recognition software, which until then had been used by many US government agencies, including police departments. Then, just ahead of the moratorium's one-year anniversary, Amazon announced on May 17 that it was extending the ban indefinitely.
Rekognition, Amazon’s cloud-based platform, was launched in 2016 and has since drawn criticism over racial and gender bias. Over the years, reports have found that Rekognition is less accurate at identifying women and people of color, and face recognition errors have contributed to multiple wrongful arrests, mostly of Black men.
So, when George Floyd, a Black man, was killed by police, the BLM movement was reignited along with the police abolition movement, prompting Amazon to put a one-year pause on police use of Rekognition.
The service will remain available to organizations searching for human trafficking victims, as well as to other customers such as the NFL, C-SPAN, and CBS.
Calls for Permanent Ban of Amazon’s Software
Amazon’s extension was met with mixed reactions. While activists welcome the fact that the software isn’t returning to police use yet, their underlying concerns still stand.
Apart from its role in wrongful convictions, Rekognition (like other face recognition services) is seen by critics as a threat to privacy, since it enables expanded government surveillance. Many also argue that the technology is unethical and discriminatory in practice.
Amazon isn’t the only company involved; Microsoft and IBM have also developed their own face recognition software. Like Amazon, however, both have recently been pulling back from the sector: Microsoft has stopped selling its face recognition software to police, and IBM has abandoned the technology altogether, citing the potential for abuse.
A possible explanation for Amazon’s extended moratorium is the lack of regulation in the face recognition sector. In its statement announcing the moratorium last year, Amazon said that “governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge.”
A whole year has passed since then, and barely anything has changed. Other companies have echoed Amazon’s sentiments, but the regulatory gap remains.
What Can Make Face Recognition Software Dangerous?
Cameras are everywhere: in phones, laptops, stores, security systems (which we wrote about before), and cars, and even on the streets. Facial recognition software keeps a database of all the faces it has collected. How that collection is used is determined by the company that owns the software, and anyone can buy facial recognition software.
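To see why such a database is so powerful, it helps to know that these systems typically store not raw photos but numeric “embedding” vectors produced by a neural network, and matching a new face against the database reduces to comparing vectors. The sketch below is purely illustrative: the vectors, names, and threshold are invented, not taken from any real product.

```python
# Illustrative sketch of embedding-based face matching.
# A real system would compute embeddings from images with a neural
# network; here we just use made-up vectors.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity in [-1, 1]; higher means the faces look more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.8) -> bool:
    """Declare a match when similarity clears a tunable threshold.
    Where that threshold sits controls the false-match rate, and, as
    reports cited in this article suggest, that error rate can differ
    across demographic groups."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy "database" of stored embeddings, keyed by identity (hypothetical names).
database = {
    "person_a": np.array([0.9, 0.1, 0.3]),
    "person_b": np.array([0.1, 0.8, 0.5]),
}

probe = np.array([0.88, 0.12, 0.31])  # embedding of a newly captured face
matches = [name for name, emb in database.items() if is_match(probe, emb)]
```

Once a camera feed is reduced to embeddings like these, scanning millions of stored faces for a match is cheap and fast, which is exactly what makes the technology attractive for surveillance and risky for everyone in the database.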
If an organization or company has malicious intent, it can use this technology to unearth more personal data (photos, social media profiles, internet behavior, etc.) and put it to shady uses such as political profiling, location tracking, or even identity theft, a problem widespread enough to sustain a whole market of identity protection services.
Most of these companies store this data without individuals’ consent, especially when it is captured on streets, in shops, or in offices as people go about their daily lives.