Hello again everyone, I hope y’all have had a chance to enjoy the summer and take a break from one of the most perplexing years in modern history (recent variants notwithstanding).
It’s been some time since I’ve written here. As I’ve mentioned in previous issues, work-life ramped up pretty quickly for me over the past six months, and, well, I got a little burned out. That said, I’m back and am planning to pick up the pace with the newsletter, regularly releasing a mix of long and shorter form content. You can also expect to see more integration here between The State of Surveillance and my sister podcast, The Future is Now. After about 15 episodes or so, I’ve noticed there’s an incredible amount of overlap there that I think all of y’all would find beneficial. You can check that podcast out and subscribe here.
One final programming note before we get started. In an effort to avoid sinking back into burnout, I’ve decided to take some time and reevaluate what makes the cut for this newsletter. In doing so, I tried to grapple philosophically with what the word “surveillance” means to me. In the past, I’ve taken it to apply to invasive tech, troubling legislation, and dystopic examples of corporate overreach. Those all certainly fit the bill, and they will continue to feature prominently here.
However, surveillance is bigger than that. To me, surveillance is also synonymous with the removal of autonomy. If an individual in a society is made to feel powerless to the point of inaction, or silenced to the point of submission, then I think that too is a form of surveillance, whether or not a piece of what we would call advanced technology was ever imposed. With that in mind, there are many facets of daily life that, under this broader interpretation, are worth exploring here: authoritarian government leaders, escalating political polarization, the rise of misinformation and the collapse of public trust, anti-competitive practices by wannabe monopolists, new tech that seeks to quell human autonomy, or maybe something else entirely.
On a practical level, this broader focus will allow me to write with more vigor, enthusiasm, and intrigue. Generally, though, I think it will make this space an interesting corner of the web for thinking critically about what it means to be surveilled, and maybe more importantly, what you can do about it.
And with all that out of the way, let’s jump back into it with a story I think perfectly encapsulates many of the points outlined above.
Today’s Big Story: Apple's Child Abuse Privacy Inflection Point
In a sense, Apple’s recent privacy debacle surrounding the company’s decision to add a new child sexual abuse material (CSAM for short) scanner to its products began a long time ago. Though the news made a splash last week, the features had reportedly been in development for months. These new features, which as we’re about to discuss below may fundamentally move the goalposts on privacy writ large, might have skirted by largely unnoticed if a Johns Hopkins professor of cryptography had not leaked the details. This practice of rolling out first and asking for forgiveness later is par for the course for Apple, according to coworkers and experts I’ve spoken to over the past week.
Explaining Apple’s new CSAM features:
Let’s back up for a moment and explain exactly what’s caused the current privacy uproar. Last week, Matthew Green, the aforementioned professor, took to Twitter to sound the alarm over a new image-scanning tool, called neuralMatch, that Apple was preparing to roll out.
The aim of neuralMatch is to scan images Apple users upload to iCloud, before they ever get to iCloud, to determine whether they contain child pornography. Though other major cloud services already scan for child sexual abuse material and other harmful content, Apple’s software works differently, scanning images locally on a user’s device before they ever reach iCloud. In this way, Apple can supposedly maintain its stance on privacy by saying, unlike other tech companies, that it does not scan its cloud service. But, as is becoming clear, scanning images on a user’s device before they reach iCloud is a distinction without a difference. Many iPhone users, myself included, regularly have their images offloaded to iCloud without ever explicitly uploading those items themselves.
First, some more on how neuralMatch works. According to Apple, the feature uses a hashing technology called NeuralHash to scan images on a user’s device and cross-reference them against a database of known CSAM files maintained by the National Center for Missing & Exploited Children (NCMEC).
If enough similarities between the hashes are flagged and a certain threshold of matches is met (one image isn’t enough, for example), the user’s data is then reviewed by an Apple employee and passed along to law enforcement if necessary.
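Apple hasn’t published NeuralHash’s internals, but the basic match-and-threshold flow described above is easy to sketch. Here’s a minimal, hypothetical Python illustration; the hash function, the database contents, and the threshold value are all stand-ins I’ve invented for the example, not Apple’s actual implementation:

```python
import hashlib

# Stand-in for NeuralHash. A real perceptual hash is derived from image
# features, so visually similar images produce similar hashes; an exact
# cryptographic hash is used here only to keep the sketch runnable.
def perceptual_hash(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

# Stand-in for the NCMEC-derived database of known CSAM hashes.
KNOWN_HASHES: set[str] = {"hash-of-known-image-1", "hash-of-known-image-2"}

# Illustrative only: Apple says a single match isn't enough to trigger
# human review, but the real threshold value hasn't been disclosed.
MATCH_THRESHOLD = 10

def scan_before_upload(images: list[bytes]) -> bool:
    """Scan images on-device and report whether enough match to flag the account."""
    matches = sum(1 for img in images if perceptual_hash(img) in KNOWN_HASHES)
    return matches >= MATCH_THRESHOLD
```

The key design point is the one above: all of this runs on the device itself, before upload, which is how Apple can claim it never scans iCloud.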
Almost instantly, neuralMatch drew fierce condemnation from security and privacy activists who worried the feature amounted to a backdoor to encryption. Others, fearful of government surveillance, worried Apple could expand its hashing tech beyond child pornography to less clear-cut targets like terrorism, protests, or any other activity deemed deplorable by officials holding positions of power.
Apple also revealed another new feature aimed at children under 12 using iMessage family accounts. Here, Apple uses machine learning to scan every image sent or received by a child user, and if an image is determined to include sexually explicit material, an alert is sent to the account owner (usually the child’s parent). This feature immediately drew the ire of activists who warned it could end up outing queer or transgender kids, or worse, provoke the wrath of an abusive family member.
As mentioned above, while Apple confirmed plans to roll out the features for iOS 15 and macOS Monterey in the coming months, it did so only after leaks and media reports made the controversial new initiatives public. Apple, the supposed bastion of privacy and free expression in a tech industry run by surveillance-hungry despots, also chose to roll this out without consulting advocates, academics, or any other concerned members of civil society.
Stirring up the Hornet’s Nest
Condemnation of Apple’s unilateral privacy pivot came swiftly and from a diverse crowd of commenters. The Electronic Frontier Foundation (EFF), normally a supporter of Apple’s privacy-preserving efforts, described the move as an “about-face” on user privacy and accused Apple of opening up a backdoor for law enforcement. In a blog post, the EFF wrote, “at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor.” Well said.
Edward Snowden, the whistleblower who blew the lid off the NSA’s domestic spying program, took to Twitter and warned Apple could potentially expand its search capabilities beyond child sexual abuse content. In a blog post, Cambridge University security engineering professor Ross Anderson warned Apple’s scanning feature could be used by authoritarian governments to entrap dissidents speaking ill of their leaders. Others, like Matthew Green, who first surfaced the plans, expressed concern that the child abuse hashes are pulled from a database inaccessible to the public, leaving third parties powerless to review what Apple is actually matching against.
Apple’s Attempt at Damage Control
In a memo leaked immediately after news of the features was made public, Apple software vice president Sebastien Marineau-Mes tried to dispel these concerns as “misunderstandings” and said Apple’s tools were being rolled out with privacy in mind.
That didn’t satisfy critics, leading Apple to quickly publish several FAQs responding to the concerns. In those FAQs, Apple rejected privacy and security advocates’ claims that its recently revealed neuralMatch CSAM image-scanning tool weakens encryption and creates a backdoor for government surveillance, and said it would refuse demands from governments to add non-CSAM images to its scanning tool.
But as many have pointed out since, it doesn’t really matter if Apple says it won’t expand the filter beyond CSAM. As soon as a government passes legislation requiring Apple to hand over non-CSAM data, Apple will legally have no choice but to comply. This is an important shift. Previously (as when Apple refused to give in to FBI requests to decrypt the San Bernardino shooter’s phone in 2016), Apple could say it did not possess the technical ability to do so. Now it does. As soon as Apple rolls out its new update, the company will possess the ability to scan users’ photos locally on their devices. Right now that’s limited to CSAM, but a simple update could expand it to include the hashes of any other material a government deems necessary, as the brief sketch below shows.
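To make that concrete in terms of the earlier hypothetical sketch: widening the scan would require no new scanning machinery at all, only new entries in the on-device hash database.

```python
# Hypothetical continuation of the earlier sketch: nothing about the
# scanning code changes; a government mandate could be satisfied by
# simply appending new hashes to the database Apple ships to devices.
KNOWN_HASHES |= {"hash-of-banned-political-image", "hash-of-protest-flyer"}
```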
It’s worth noting here that Apple, which has painstakingly developed a reputation as the vanguard of consumer privacy and a bulwark against government anti-encryption efforts, has succumbed to such requests in the past. Apple has previously shipped phones without FaceTime in certain countries and has conceded to Chinese government requests to take down controversial apps and to store user iCloud data and encryption keys on Chinese servers rather than in the US.
CSAM: A Harbinger of What’s to Come?
Apple’s new features don’t exist in a vacuum. Governments all around the world have stepped up efforts to weaken encryption in recent years.
In 2018, Australia became the first Western country to pass legislation allowing law enforcement to require tech companies to hand over encrypted user data.
Then, late last year, the European Union adopted a resolution on encryption that critics warn could pressure companies into providing law enforcement with access to encrypted user data.
Meanwhile, last year a bipartisan group of US senators introduced the EARN IT Act, a bill aimed at combating child exploitation that activists warned would weaken encryption. Though the bill failed, the proposal reignited US calls to limit encryption.
In general, governments and law enforcement agencies eager to encroach on encryption have found a convenient cudgel in child pornography. Knowing that consumers and politicians don’t want to be seen opposing efforts to reduce child exploitation, these actors use the issue as a gateway to advocating for incremental encryption exceptions. But as the EFF and others have pointed out, encryption is a take-it-or-leave-it proposition. An individual’s device is either secure, or it isn’t. Offering even slight exceptions to encryption breaks the security effort at its very foundation.
What now?
The last two weeks have been a whirlwind for people who care about personal privacy, encryption, and government surveillance. It would be easy to dismiss the new Apple news as a minor product change limited to Apple customers, but that would be a mistake. For years, Apple has stood apart as one of the few major companies willing to take a firm stand against a powerful government in favor of individual autonomy.
Psychologically, the fact that Apple, the world’s most valuable company by market cap, was willing to hold firm on this position meant that other, smaller firms could see privacy, even controversial privacy, as a business positive. If Apple chooses to move forward with these measures, it will have entered new territory, opening itself up to the demands of governments and making itself essentially no different from any other large tech firm that exchanges users’ personal information with governments as freely as a gust of wind. The impact this has on everyday users’ psyches may well be profound.
There is, of course, still time. Apple could choose to slam the brakes on this new set of features and, in turn, hold onto its reputation as a bastion of privacy.
Unfortunately, though, given its flippant, and at times hostile, responses to reporters and activists questioning it on the matter, that seems more and more unlikely. As is the case with so many critical issues, the future of millions of people is in the hands of multi-trillion-dollar corporations.
May they have mercy.
Here’s What Else is New
New York recently became the first major US city to make vaccine passes part of its vaccination mandates.
As we’ve discussed previously in this newsletter, data from similar vaccine passports in countries like Singapore have been used against individuals in criminal proceedings.
In addition, New York State’s Excelsior Pass is reportedly being expanded to serve as a broader digital ID, not limited just to Covid-19.
🥽 Facebook wants to build “the metaverse”
The company has officially created a new product unit aimed at creating a 3D social space using virtual and augmented reality tech.
The metaverse is a concept derived from science fiction, and refers to a shared virtual space where users live out large portions of their lives.
The timing: Facebook’s grand metaverse proclamation drops at the same time as regulatory pressure on the company is reaching a boiling point. Facebook is running out of new users for almost all of its apps … this big push is arguably an act of survival.
Russia announced it has successfully run tests that will allow it to separate itself from the global open internet.
In practice, though, it looks like Russia is building its own local internet that still connects out to the larger global internet.
The bigger picture: Russia has spent many years creating its alternative internet, but it appears to want to have its cake and eat it too: it wants to operate totally independently, with more control, while still being able to access the global internet and the content and services it provides. Those two visions appear antithetical to one another at the moment.
⚠️ Citizen, the vigilante neighborhood app, is hiring people to livestream crimes and emergencies ⚠️
A job posting discovered by The Verge showed Citizen was prepared to pay some users $200-$250 per day to record events and interview police and witnesses.
As my friend and podcast co-host Jonah recently put it, Citizen is basically paying people to become Jake Gyllenhaal from Nightcrawler.
And that’s all folks. See you all soon!