Hi again everyone. This issue of the newsletter was supposed to come out last Friday, but due to some developments that occurred over the weekend related to the main topic, I decided to hold off on publishing in order to provide a more well-rounded commentary. Look out for another issue later this week!
As always, please subscribe and share this newsletter with whomever you can. To all the new subscribers who joined after reading my last post on the protests, welcome.
Alright then, let’s get started.
Today’s Big Story
Is Facial Recognition Dead?
As the dust settles and the hordes of chanting protesters slowly disentangle themselves from the streets of major cities, the world is preparing to enter a new and pivotal stage of a historic demonstration. Now, after the burning Targets and the crowd-filled bridges, legislators and societal leaders are tasked with implementing codified, long-lasting change through laws and reform. It’s this part of the “protest” movement that has for so long alienated people like myself who exist within the “younger generation.”
We are a generation (this protest consisted primarily of young people) that has become accustomed to grand proclamations of solidarity with little result. Occupy Wall Street, the 2014 Black Lives Matter protests, and the Women’s March all happened within my lifetime, and yet none of them produced any meaningful long-term reform.
Unlike before, this time feels different. For those following the news, one of the first big examples of this supposed change in action seems to have come from an unlikely place — Silicon Valley. In the span of less than half a week, three of the nation’s most prominent developers of facial recognition technology all announced they would cease selling their products to police. The explicit or implicit presumption underlying all of these decisions was that they came as a direct consequence of two weeks of protesters’ boots paving the way for change. So with so much apparent mounting opposition to the technology, it seems only fair to ask: are we witnessing the death of facial recognition as we know it?
First, a very quick and far from comprehensive overview of why facial recognition is an issue. The identification technology relies on algorithms trained to correctly identify faces. Like all such technologies, the accuracy and effectiveness of facial recognition depend on the quality and diversity of the data it’s fed. As numerous researchers have shown, for much of the early years of facial recognition, this data mainly consisted of young, white faces, often from university classes. That data misalignment means that when these technologies go live, they have difficulty accurately identifying people who aren’t, well, young, white, and male. In particular, as MIT researchers found several years ago, facial recognition tends to have the most difficulty accurately identifying black women.
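To make that concrete: one way researchers audit these systems is by computing error rates separately for each demographic group and comparing them. Here’s a toy sketch in Python — the numbers are made up (loosely echoing the magnitude of the MIT findings), and the function stands in for a real audit pipeline, not any vendor’s actual API:

```python
# Illustrative sketch of a demographic audit: tally how often a face
# matcher's predicted identity disagrees with the true identity,
# broken out by group. All data below is synthetic.
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, predicted_id, true_id) tuples."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit results: the same matcher, very different error rates.
audit = (
    [("lighter-skinned men", "id_1", "id_1")] * 99
    + [("lighter-skinned men", "id_2", "id_1")] * 1
    + [("darker-skinned women", "id_1", "id_1")] * 65
    + [("darker-skinned women", "id_2", "id_1")] * 35
)
rates = error_rate_by_group(audit)
# rates -> roughly 1% error for one group, 35% for the other
```

The point of the sketch is that the disparity is measurable: a system can look highly accurate in aggregate while failing one subgroup at many times the rate of another.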
All this means that at the very least, facial recognition as a whole suffers from systematic bias. Activists take this further and claim the technology is racist. These concerns move from the theoretical into the gravely serious once law enforcement gets involved. For years, local police departments, Customs and Border Protection agents, and other federal US agencies have all relied on facial recognition to identify and surveil people.
Facial recognition technology isn’t itself entirely new, but thanks to an explosion of online data over the past decade, fueled in large part by the proliferation of smartphones and other mobile devices, technology companies have rapidly expanded its use. What was once a theoretical concept quickly became an everyday standard. Smartphone users started using facial recognition to unlock their phones, and apartment buildings in China began using it to replace locks. A couple of years ago, I even wrote about how we use facial recognition to surveil animals of every stripe.
Eventually, law enforcement around the world saw the utility of the technology as well and started contacting Amazon, Microsoft, and many others to procure their services. All of that happened before US lawmakers could create any meaningful regulations or boundaries for how the technology could be used. If Mark Zuckerberg’s 2018 testimony in front of the Senate is any guide, it’s safe to say many of them probably didn’t have a clue how the technology worked.
Suffice it to say that for the past few years, facial recognition has existed in a regulatory wild west, which has made it one of, if not the, foremost issues among privacy experts. Some worried parties, including the city of San Francisco, went as far as to ban it completely. For the most part, though, the major tech companies have denied or largely ignored claims of algorithmic bias and have continued on with their business unfettered.
It’s in that context that last week’s news came as such a shock. On Tuesday, while I was sorting through a jumble of protest news and setting aside stories for this newsletter, I ran across a headline, somewhat buried amidst the torrent. It read:
Huh, interesting, I thought. After a quick read I learned that IBM, one of the world’s largest and oldest computing companies, had released the reins of its facial recognition endeavors. The announcement, which came from Arvind Krishna, the company’s CEO, arrived during the second week of protests against police brutality and racial inequality in the US. This tension, according to the CEO, played a role in the company’s decision.
In a letter to Congress, Krishna said he opposed technologies that contribute to “mass surveillance, racial profiling, [and] violations of basic human rights and freedom.” While IBM is still one of the largest computing companies in the world (and has multiple major contracts with the US military), it’s not the first name that comes to mind when one thinks of facial recognition. The news represented a welcome development for those concerned with unchecked algorithms, but it was far from a sea change.
Then Amazon stepped in.
The news, which came by way of a two-paragraph, non-bylined blog post on the company’s website, announced a one-year moratorium on the sale of its “Rekognition” software to police. For years, Amazon had provided its software to local and federal law enforcement. Rekognition is also one of the most controversial of the major systems. It was Amazon’s Rekognition that, back in 2018, in a test conducted by the ACLU, mistakenly identified 28 members of the US Congress as convicted criminals. Those misidentified in that test were overwhelmingly minorities. (Amazon has since taken issue with the methodology of the ACLU’s test.)
More recently, Amazon — the nation’s largest retailer — has come under fire both for its treatment of workers during the COVID-19 crisis and for its alleged hypocrisy around racial issues. On May 31, the company posted a tweet expressing solidarity with the “black community,” only to be pilloried by some who were quick to highlight privacy issues surrounding Ring (Amazon’s smart doorbell) and Rekognition, both of which reportedly impact the lives of minorities disproportionately.
With Rekognition and Ring, Amazon works with hundreds of law enforcement offices around the country. The company has long refuted claims that its technology is racist. I’ve written about Rekognition multiple times before, and in none of those cases has Amazon officially conceded that its technology suffers from systematic bias. So, on that end, at least they’re consistent.
Amazon’s moratorium announcement differed in several ways from IBM’s statement. In both length and grandiosity, the Amazon blog post was sparse and to the point, at just 102 words. Unlike IBM’s announcement, which acknowledged a history of bias in facial recognition and a fear of potential harm in allowing police to use it, Amazon took a minimalist approach. By leaving the post short and devoid of any confession of wrongdoing, Amazon allowed people to read between the lines without actually admitting to the claims of racism activists have leveled against it.
There’s also reason to wonder how serious or meaningful the moratorium is on its merits. According to the blog post, the company has only committed to cease its sale of facial recognition to police for one year. That one-year hiatus also does not necessarily apply to federal law enforcement, which includes ICE and the Department of Homeland Security, some of Amazon’s most fervent clients. When asked about that rather glaring federal law enforcement loophole by multiple media organizations, Amazon refused to comment.
So, Amazon’s announcement left much to be desired but at the very least, in terms of optics, represented a substantial move towards accountability for those who believe the technology is under-regulated. Less than 24 hours after the Amazon announcement, Microsoft — one of the first architects of facial recognition — shocked the world even further by announcing they too were slamming on the brakes.
“We will not sell facial recognition tech to police in the U.S. until there is a national law in place,” Brad Smith, the company’s president announced last Thursday. “We must pursue a national law to govern facial recognition grounded in the protection of human rights.”
In an interview with NBC News, a Microsoft representative said the company does not currently sell its facial recognition tech to police departments but, like Amazon, would not comment on its ties to federal agencies like Customs and Border Protection and the Department of Homeland Security.
Microsoft has actively partnered with federal agents and the US military for years, and its blunt self-criticism of the technology amounts to one of the strongest rebukes yet. The ACLU summarized it well in a statement released on Twitter last week.
“When even the makers of face recognition refuse to sell this surveillance technology because it is so dangerous, lawmakers can no longer deny the threats to our rights and liberties.”
The Microsoft decision didn’t come without some backlash. Just one day after Smith’s remarks, President Trump reportedly retweeted a post from his former director of national intelligence proposing that Microsoft be barred from federal contracts as punishment for its decision. While it’s difficult to tell how seriously to take one of the president’s hundreds of weekly Twitter decrees, it’s a sign at least that the decision has put the company in the executive’s rage-tinted crosshairs.
In hindsight, the speed with which major tech companies seemed willing to turn their backs on facial recognition is remarkable. For the most part, minus a few angry conservative voices denouncing the tech companies for being “anti-police,” the general public welcomed the changes and heaped praise on the companies. John Oliver even zeroed in on the topic in his recent Last Week Tonight segment.
So then, it goes without saying that facial recognition, the world’s scariest new technology, is dead right? Well, not exactly.
For starters, while the announcements from the major tech companies represent meaningful movement for those concerned with the technology, those companies also represent only a portion of the firms currently working on and selling it. By some accounts, there are at least 45 other companies currently working on what has been called the “facial recognition gold rush.” The most prevalent, and concerning, of those is Clearview AI.
Clearview, which has been the topic of much discussion in this newsletter and among privacy writers in general, reportedly has a database of over three billion images scraped from social media and other public databases. By some estimates, one out of every two American adults may have their faces stored somewhere within Clearview’s system. Clearview also partners with hundreds of US police forces around the country and is fervently hoping to expand, both to law enforcement in the United States and internationally. If one is truly concerned about the real-world racial effects of biased algorithms in policing, Clearview should almost certainly be the focus of attention.
Even if one limits their scope of analysis to IBM, Amazon, and Microsoft, the so-called sea change is still lacking. While some of the companies have paid lip service to the current debate over racial disparities in policing, they’ve also all united around the idea of a “Congressional solution.”
According to a recent CNN article, tech industry titans, including those at Microsoft and Amazon, want a more streamlined, universal regulatory framework around facial recognition. They also want to be the ones to make those rules.
“Some of the companies have said they want to help with crafting the legislation,” Brian Fung writes for CNN. “But that has critics of the tech industry worried. They believe companies could try to seek the moral high ground on the one hand while simultaneously using their substantial lobbying power to push for light-touch policies that benefit its financial interests.”
Matthew Guariglia, a policy analyst at the Electronic Frontier Foundation, is amongst those concerned.
"I think they can undoubtedly make more money with reformed face recognition than banned face recognition," Guariglia told CNN.
This isn’t necessarily a new development either. According to NBC, Microsoft has spent much of this year lobbying national and state governments to pass bills permitting the use of facial recognition by police. While Microsoft is amongst the leaders in tech lobbying, it is far from alone. Concerns over facial recognition companies trying to craft the legislation that regulates them go back to at least 2015.
While companies like Microsoft and Amazon should receive some praise for taking a stand, it’s worth remembering that their proposed solutions are a far cry from what activists have called for. Amnesty International, for example, laid out demands for a full-scale ban on facial recognition for mass surveillance.
"Black people throughout our communities already experience disproportionate abuses of privacy and basic rights, and surveillance only exacerbates the potential for abuses," Michael Kleinman, the director of Amnesty International USA’s Silicon Valley Initiative, wrote in an emailed statement. "We are seeing these violations play out daily as police departments across the United States use facial recognition technology to identify protestors.” Amnesty isn’t alone in that call.
Even if one thinks a full-on ban is unrealistic (which, for what it’s worth, is where I tend to pitch my tent), a far more reasonable approach would surely include creating distance between the regulators and those being regulated. Until that happens and there’s meaningful independent oversight, it’s impossible to say that facial recognition is anywhere close to “dead.”
Like what you’ve read so far? If so, please consider becoming a paid subscriber for $5 per month.
If that’s too much commitment, no worries. You can also support the newsletter by making a one-time Venmo donation to @Mack-DeGeurin to help keep this content coming.
In Other News…
***Restricting Police Facial Recognition***
Democrats recently introduced the sweeping Justice in Policing Act of 2020 in response to the international outcry over police brutality.
Included in the bill is a provision that aims to limit the amount of biometric data police can gather on suspects upon their arrest.
The bill demands that all police wear body cameras but it would restrict those cameras from being embedded with facial recognition technology.
“Body cameras shall not be used to gather intelligence information based on First Amendment-protected speech,” the bill reads, “and shall not be equipped with or subjected to any real-time facial recognition technologies.”
While the bill forbids real-time facial recognition surveillance, it does allow police to run recorded data through a facial recognition algorithm at a later date, provided law enforcement first obtains a warrant.
***Dems Take on Domestic Surveillance***
In a letter sent to federal law enforcement agencies, a group of Democratic legislators demanded answers about the surveillance of protesters. The letter claims recent surveillance practices by federal agents “are significantly chilling the First Amendment rights of Americans.”
The letter referenced several reports of domestic surveillance over the past 10 days, including the deployment of a Predator drone by Customs and Border Protection over Minneapolis and the flying of Cessna planes allegedly equipped with “dirtboxes” over Washington, D.C., to collect cell phone data.
In addition to the FBI, the letter specifically addresses the National Guard, Customs and Border Protection, and the Drug Enforcement Administration by name.
“Americans have a healthy fear of government surveillance that started at the founding of our country and has continued to modern times,” the legislators wrote. “Government surveillance has a chilling effect.”
You can read the full letter here.
***LAPD and NSO Group***
The Los Angeles Police Department once received a product demo from surveillance vendor NSO Group. That’s according to law enforcement emails obtained by Motherboard.
In one of the emails, dating back to June of 2016, LAPD Detective Mark Castillo confirmed the department had received a tech demo.
“I would like to thank you again for the product demo you put on for us at LAPD headquarters,” Castillo reportedly wrote in the emails.
NSO Group is a private surveillance company that gained international infamy after journalists discovered its surveillance technologies had been used by the Saudi and Mexican governments to monitor political dissidents.
While there’s no evidence that the LAPD actually purchased the software, the emails are further evidence of NSO Group’s persistent push to enter the US law enforcement market.
***Blurring Protesters’ Faces***
As we discussed in last week’s newsletter, protests today live in an age of total surveillance.
Even if a protestor is not directly surveilled by a police officer, they can still be identified by algorithms scanning photos uploaded to social media by other protestors.
App developers are now rushing to create technology that can both blur the faces and remove the metadata of protestors’ images to help minimize the odds of self-incrimination.
Zack Whittaker lists out some of the more interesting companies stepping up to the plate here.
Thoughts? I want to know what you think! This newsletter is a living, evolving work meant to be a helpful resource to keep you informed and engaged with the ways emerging technologies are impacting daily life. Please send all comments, questions, corrections, criticism, and hate (lemme have it) to firstname.lastname@example.org.
If you found this newsletter beneficial, you can help keep it going by sharing it online or (better yet) telling a friend about it. To help support the newsletter in more tangible ways you can make a donation of any amount to my Venmo account below. Any and all support is greatly appreciated.