Apple Walks Back CSAM Detection System...for Now

Volume III, Issue III

Hello again everyone! Just a quick reminder to check out this newsletter’s sister podcast, The Future is Ow, anywhere you get podcasts. You can check out our latest episode on Apple Podcasts here or on SoundCloud below.

Alright, with that out of the way, let’s get going!


Today’s Big Story: Apple Walks Back CSAM Detection System...for Now

Well, this is a change of pace. In a techno-futurist world often seemingly hellbent on forward movement in spite of collateral damage, it’s rare to see resistance turn back the surveillance tide. Yet that’s exactly what happened last week, though as we’ll get into, the change may represent more of an ephemeral reprieve than a codified decree.

As discussed at length in a previous issue of this newsletter, Apple recently revealed plans to release an image detection tool called NeuralHash, which works by scanning (though Apple disagrees with that framing) images on a user’s device and cross-referencing them against a database of known child sexual abuse material (CSAM) files maintained by the National Center for Missing & Exploited Children.

If enough similarities between the hashes are flagged and a threshold of 30 images is met, the user’s data is then reviewed by an Apple employee and eventually passed along to law enforcement if necessary. The scanning applies to images being uploaded to iCloud, but it takes place locally on a user’s device, a marked departure from previous CSAM prevention methods. The effort became the focal point of a cascading wave of backlash, including from 90 civil and digital rights organizations that called on Apple to send the tools to pasture.
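For the more technically inclined, here’s a minimal sketch of what that counting-and-threshold logic might look like. To be clear, this is not Apple’s implementation; the real system wraps the matching in cryptographic machinery (private set intersection and threshold secret sharing) so that nothing is learned until the threshold is crossed. Every hash value and function name below is invented for illustration.

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two perceptual hashes."""
    return bin(a ^ b).count("1")

def count_matches(device_hashes, known_hashes, max_distance=0):
    """Count device images whose hash is close enough to a known hash."""
    return sum(
        1 for h in device_hashes
        if any(hamming(h, k) <= max_distance for k in known_hashes)
    )

REVIEW_THRESHOLD = 30  # Apple's stated threshold before human review

# Hypothetical 8-bit hashes; real perceptual hashes are much longer.
device_hashes = [0b1011_0110, 0b0001_1000]
known_hashes = {0b1011_0110}

matches = count_matches(device_hashes, known_hashes)
if matches >= REVIEW_THRESHOLD:
    print(f"{matches} matches: flag account for human review")
```

The point of the threshold is to keep one false positive from triggering a review; the point of the cryptography, which this toy omits entirely, is to keep both the phone and Apple blind to individual match results.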

Ultimately, Apple is trying to strike some sort of middle ground by providing an alternative to scanning iCloud images in their totality, but as multiple privacy advocates quickly pointed out, such a tool could be co-opted by authoritarian governments around the world to scan for images other than CSAM. And though Apple has repeatedly said it won’t expand its NeuralHash tool beyond CSAM, as soon as a government passes legislation requiring Apple to do otherwise, the company will have no choice but to comply.

Which leads us to last week. On Friday, following weeks of pushback and a volley of back-and-forth debate, Apple announced it would delay the feature’s rollout. Here’s Apple’s statement, per CNBC:

“Last month we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material,” the company said in a statement. “Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”

That’s a major reversal from just a few weeks ago, when Apple appeared determined to ram through the tools in spite of mounting opposition. But this “reversal” should never have had to happen in the first place.

One of the primary concerns around Apple’s new tools had nothing to do with the technology itself, but with its surreptitious rollout. News of the features (which would place digital fingerprints of known CSAM material on every iPhone) wasn’t made widely known until a researcher risked his reputation to leak the details. What’s more, Apple, widely regarded as a bulwark for privacy in an increasingly surveillance-sympathetic tech landscape, declined to ask any critical member of civil society to weigh in on the update prior to launch.

Still, the news was met with cautious optimism from privacy and security experts including former Facebook CSO Alex Stamos. “I think this is a smart move by Apple,” Stamos told Wired. “There is an incredibly complicated set of trade-offs involved in this problem and it was highly unlikely that Apple was going to figure out an optimal solution without listening to a wide variety of equities.” 

In addition to the scanner delay, Apple said it’s also pumping the brakes on an AI system it had developed to identify explicit images sent and received by users under 13 and subsequently alert the account holder (usually a parent). For context, many advocates warned this feature risked outing queer or trans children, or could put others in danger with abusive parents.

Though news of the delay represents a significant victory for privacy advocates, not all welcomed the decision. Child safety groups, including the National Society for the Prevention of Cruelty to Children (NSPCC), admonished Apple for giving in to pressure.

“This is an incredibly disappointing delay,” Andy Burrows, the NSPCC’s Head of Child Safety Online Policy, told The Guardian. “Apple were on track to roll out really significant technological solutions that would undeniably make a big difference in keeping children safe from abuse online and could have set an industry standard.”

A Temporary Ceasefire

Apple’s delay is significant in that the company is finally acknowledging the critics it had refused to hear, but the debate over the technology and its rollout is far from over. In announcing the delay, Apple also implicitly confirmed its unwavering intention to move forward with the tool in some capacity. And as others have pointed out, it’s unclear how any such scanning tool could exist without fundamentally altering the dynamics of encryption and privacy as we know them.

Matthew Green, the researcher who originally leaked the features, suggested Apple could limit the scanning to shared iCloud accounts as opposed to individuals’ devices, but even that comes with its own set of privacy tradeoffs.

Delays and conversations are of course welcome, but one is left with the lingering feeling that Apple is simply buying time, hoping to step outside the news cycle before reintroducing the features to a less acutely hostile audience. 

It’s with that in mind that the Electronic Frontier Foundation, one of the most vocal groups opposing the new tools, immediately called on Apple to abandon them entirely.

“EFF is pleased Apple is now listening to the concerns,” the organization wrote, “but the company must go further than just listening, and drop its plans to put a backdoor into its encryption entirely.”

Why Now? 

While researching this story and hashing out the details with friends, I was struck by the sense that all of this felt quite unique. As a writer coming of age in the wake of September 11th and amid a gushing wave of techno-optimistic fervor, it often felt as if Silicon Valley behemoths and the titans of the US state security apparatus trod the same ground, both regularly escaping scrutiny and both constantly heralded as benevolent caretakers of society.

But eventually, things started to change. The wars dragged on, the body counts rose, and city after city was reduced to rubble, only for the invading force to retreat nearly two decades later, leaving its experiment largely as it found it. It’s no wonder, then, that public trust in the military has tanked to near-record lows.

In a similar vein, the lacquered pedestal Big Tech once stood upon has gradually been swept away, revealing a figure guided not by utopian visions of human flourishing but by acts of transparent deception. I’d wager this unraveling ramped up with the Snowden revelations, but it has continued to metastasize with regularly scheduled horrors, from the Cambridge Analytica scandal in 2018, to a Facebook-fueled genocide in Myanmar, to the storming of the US Capitol by misled retrogrades.

Said more bluntly, tech companies (and just about every other once hallowed institution) are hemorrhaging trust.

Don’t take it from me. This same point was made in a recent report from global communications firm Edelman, which found public trust in tech companies at its lowest point on record, both in the US and in 17 of 27 markets surveyed.

Much of this rapid decline in trust can be traced back to compromises made over user privacy. According to Edelman, 34% of adults surveyed worldwide viewed data privacy as a social issue brands should address … and yet users are still constantly being let down.

That has consequences. A separate survey from research firm Genesys found that misusing or abusing personal data was the top reason respondents would lose trust in a company. And last year, over half (52%) of US adults said they decided not to use a product or service based on fears about how much personal information would be collected about them, with 15% describing sharing general personal information as “problematic,” per Pew Research.

With all these figures in place, it starts to make more sense why the time was right for a major social backlash to Apple’s CSAM scanner. Apple was one of the few companies to draw a line in the sand in favor of privacy, so its reversal felt like a betrayal to many. Though it seems likely Apple will still roll out this feature in some capacity in the coming months, the bigger, less-discussed story here is just how much the global demand for privacy has shifted in recent years. If this had happened five or ten years ago, I doubt we would have seen the same level of resistance.

What that means for the future though remains to be seen. While everyday users appear to see the value of digital privacy, governments and companies all around the world are moving in the opposite direction. At some point, these two opposing forces are destined to clash, and what comes out the other side is still anyone’s guess.

BMD


Here’s What Else is New

Facebook released its Ray-Ban style “smart” glasses 

  • Users will be able to take photos and videos with two onboard 5 megapixel cameras and listen to audio through built-in speakers. 

  • The public has known about the glasses for years, and during that time Facebook has repeatedly lowered the bar for what to expect from their actual technical abilities. 

  • These are not AR glasses in any meaningful way, and to call them smart is a stretch. In reality, they are little more than a slight aesthetic improvement on Snapchat’s first pair of Spectacles, released years ago. 

  • Even if the glasses amount to little more than a small GoPro, they still present a huge privacy issue that has so far been left unanswered: who gave Facebook the okay to turn every one of its customers into a walking CCTV camera? 

  • Lucas Matney, TechCrunch

The LAPD is regularly collecting and monitoring social media information from suspects its officers stop 

  • The documents, obtained by the Brennan Center for Justice, show that officers regularly monitor the content with little oversight. 

  • The LAPD is also adopting a new tool called Media Sonar, “which can build detailed profiles on individuals and identify links between them.” 

  • In the past, the LAPD has reportedly used Twitter analysis to track the movements of anti-Trump protesters on May Day, per The Verge.

  • Mary Pat Dwyer, The Brennan Center For Justice 

Australia’s high court ruled media sites can be held liable for defamatory comments left on their pages 

  • The move may force media companies and social media sites to heavily police or entirely remove public comments in order to avoid facing litigation. 

  • For a sense of the scale of the issue, David Rolph, a professor of law at the University of Sydney, said the ruling “may mean anyone who runs a social media page can theoretically be sued over disparaging comments posted by readers or random group members — even if you aren’t aware of the comment.”

  • The ruling is a reminder of what could happen if US legislators make good on their claims to want to revoke Section 230 protections. 

  • James Vincent, The Verge


That’s it for now. See y’all next week!

Why it's So Hard to Talk About Climate Change

Hey again y’all, hope everyone’s doing alright.

I’m working on putting out another regular edition of the newsletter in a few days, but in the meantime, I thought I would let y’all know that me and my good friend Jonah just put out another episode of our podcast, The Future is Ow.

At the top of this episode, we talk about Tesla’s latest manufactured stunt (if you don’t know what I’m talking about, it involves a man in full-body spandex), the latest unexpected acquisition in the tobacco industry, and a small subset of white-collar workers who have figured out how to work two full-time remote jobs at once. It’s a doozy.

For the second section we dive deep into possibly the most “ouch” future topic there is: climate change. I spent the week reading through David Wallace-Wells’ The Uninhabitable Earth as well as a cornucopia of doomer academic papers in preparation for the episode. We by no means exhaust the topic, but I think we did a pretty good job of having an honest conversation about climate change and exploring why it can feel so difficult to talk about.

The issues aren’t all necessarily “surveillance,” but their long-term reach can arguably have the effect of strengthening authoritarian control to some degree. At the very least, I think y’all will find the banter at least somewhat amusing. You can check out the latest episode on Apple Podcasts here or on SoundCloud above. If you find it anything less than ear-wrenching, please consider subscribing. Thanks, and see y’all soon!

Mack

On The Vaccine Passport Dilemma

Volume III, Issue II

Hello again everyone, I hope y’all are doing well! Not too much in the way of updates on this one. A quick reminder to check out this newsletter’s sister podcast, The Future is Ow, if you haven’t already. You can find that and subscribe here. Also, if you are so inclined, consider following my Medium account here. I usually post my longer State of Surveillance essays there in an alternate format and occasionally dip into some non-surveillance topics as well. And to the recent new subscribers, welcome!!

Alright, let’s dive into what’s quickly becoming one of the most divisive issues in the US.


Today’s Big Story: On The Vaccine Passport Dilemma

Earlier this month, New York City (where I’m currently based) crossed into uncharted territory in the US’ fight against Covid-19 and its frustratingly determined variants. With Delta cases surging and residents quickly becoming accustomed to loosened restrictions, Mayor Bill de Blasio officially made New York the first city to require workers and customers to show proof of vaccination to enter a plethora of businesses, from indoor dining and gyms to billiard halls and strip clubs. 

Digital vaccine passports—apps that can be scanned by businesses to verify an individual’s vaccination status—are a critical component of the city’s plan for verifying the jab status of its nearly 8.4 million residents. Though New York is the first US city to issue such a mandate, others around the country are paying close attention, and many states are creating their own digital passports.

Almost immediately, news of New York’s effort reignited a firestorm of debate over the privacy and societal implications of these still-new vaccine passports. Though experts agree the passports represent one of the best bets for safely reopening society, they also, in their present form, risk being hijacked by surveillance-friendly autocrats eager to expand their all-seeing reach.

Before we move on, it may be worth taking a moment to differentiate the (many) vaccine passports being rolled out and how they work. As of now, the U.K., France, Israel, Australia, China, and the European Union all have some form of nationalized-ish vaccine passport that’s already being used in varying degrees to allow vaccinated individuals to travel, eat out, go to bars, and engage in other “normal” activities. The US, notably, has avoided a nationalized vaccine passport, with the Biden administration saying it’s “not their role” to create or mandate one. 

At times, the passports differ greatly, but generally most will collect an individual’s name, birthdate, date of issuance, the date and type of vaccine they received, and potentially other related health data such as Covid-19 testing information. Passports like New York’s Excelsior Pass say they do not track a user’s location; however, privacy experts say the scanners used to validate those passports (say, at a bar or a restaurant) can collect that data and, if so inclined, could use it to track an individual.

The Pushback 

As most readers have no doubt heard, the pushback to vaccine passports came about as fast as the apps themselves rolled out. The arguments against passports vary, but for the purposes of this post I’m going to break them into two large camps: privacy and social equity.

On the privacy front, pushback to vaccine passports has largely mirrored the hesitation experts expressed over the adoption of Covid-19 contact tracing apps, which I’ve outlined here in the past. In addition to worries around potential third-party tracking of location data, experts like Albert Fox Cahn of the Surveillance Technology Oversight Project (STOP) have expressed concern over the lack of transparency surrounding these apps. 

“I have less information on how the Excelsior Pass data is used than the weather app on my phone,” Fox Cahn told MIT Technology Review earlier this year. “Because the pass is not open source, its privacy claims cannot easily be evaluated by third parties or experts.”

There’s also an issue of effectiveness. In New York City, residents have the option of choosing between the Excelsior Pass and the city-specific pass. I use the city pass because it’s significantly lower-tech, but it’s so low-tech that it’s basically useless. As Fox Cahn wrote for the New York Daily News, the NYC app amounts to “nothing more than a camera app dressed up as a health credential.”

Others, like the Electronic Frontier Foundation, warn digital vaccine passports could easily be repurposed to serve as digital identification systems that outlive the pandemic and are used by governments as a catch-all identifier. These fears, as I’ll point out later, are turning out to carry more weight than some had first expected.

Activists warn a centralized digital identification system could be used to collect more granular types of personal data, like a person’s age, healthcare status, or even criminal history. All of that data could then be stored in a large catchall database used to monitor and surveil individuals in myriad ways, mirroring in some ways the types of always-on surveillance already seen in China and other authoritarian regimes. 

What’s more, nearly all of these vaccine passports are being developed in partnership with large tech firms which, as readers of this newsletter will know, definitely don’t have the best track record of keeping users’ personal information secure. This issue is made worse in the US, which still lacks any meaningful federal data privacy law placing limits on the types of information companies can gather or who they can share it with. Basically, by opting into vaccine passports, citizens are trusting tech companies to keep their word and treat sensitive data with a level of care and stewardship they haven’t shown in other domains. 

That then leads us to the social issues. Digital vaccine passports, like all technology, reflect the social inequalities apparent in society. As the ACLU and other groups have noted, digital vaccine passports require an app, which requires technology that’s disproportionately out of reach for older and lower-income communities. These disparities risk inflaming social inequality if passports are mandated for access to bars, restaurants, stadiums, museums, or any other communal space, in effect creating a class of individuals (typically affluent and tech-savvy) who are granted relief from life’s drudgery while the rest (ironically, those also disproportionately likely to be “essential” workers) are not.  

Zooming in on NYC, black and Latino residents lag behind other groups in terms of vaccination rates and generally adopt technology at lower rates than other groups nationwide. A recent Pew survey found that black Americans were slightly less likely than whites to own a smartphone and 11 percentage points less likely to have a home computer. When all this is put together, advocates worry vaccine passports could actually amplify inequality and further degrade hemorrhaging public trust in government and the medical community. 

And then there are the conservatives. If you’ve only briefly read or listened to anything related to vaccine passports in recent months, it would be understandable to assume the bulk of resistance has come from curmudgeonly conservative politicians, cracked-out conspiracy theorists, and hesitant skeptics. That’s partly true. Since vaccine passports are so closely tied to the vaccines themselves, and since conservatives (at least in the US) have shown the most overwhelming resistance to vaccines, they’ve recently managed to fan the largest flames around passports. 

In some ways this conservative skepticism around vaccine passports mirrors some of the more dramatic fears of more level-headed privacy advocates; namely, fear of government overreach, external infringement on individual autonomy, and of course, comparisons to China. But while the direction of these conservative complaints follows a similar wind as the experts’, the motivations are often miles apart.

Dig deep enough into conservative complaints around vaccine passports and one often finds self-interested politicians stoking the flames of vaccine misinformation running rampant among their constituencies. Among everyday people, the skepticism is often limited to the invocation of the “V” word. Aside from a handful of civil-liberties-focused libertarians, most average Republicans are unlikely to draw a breath of criticism over similar surveillance tactics being used at airports and other public spaces in the infallible name of patriotism.

These, of course, are generalizations, but the finer point is that it’s worth drawing some distinction between civil society’s privacy concerns over vaccine passports and those of the most extreme on the political right, lest we allow any valid concerns to be sucked up into a mirage of opaque, pointless partisanship. 

Regardless, conservatives in the US (whether for the right reasons or not) are taking meaningful action against vaccine passports. As of June, 15 US states including Texas, Alabama, Arizona, and Florida have passed legislation banning vaccine passports wholesale. 

With all of this in mind, polling shows the public is roughly split in its attitudes towards vaccine passports, with different levels of acceptance based on circumstance. A May Gallup poll, for example, found that 57% of US adults favor vaccine passports for airline travel and 55% support them for large gatherings like concerts or sporting events. I’d probably count myself among those. Support drops off, though, when people are asked if they would support vaccine passport requirements for workplaces (45%) or indoor dining (40%). 

It shouldn’t be a surprise that these figures get much muddier when partisanship is taken into account. While 85% of self-identifying Democrats support vaccine passports for air travel, that figure dips to 28% for Republicans. Just 16% of Republicans support vaccine passport requirements for workplaces, compared to 69% of Democrats.

Vaccine Passports Beyond the Vaccine 

I alluded to it earlier, but on the privacy side of the vaccine passport debate, one of the primary concerns from groups like the EFF (written off at times as alarmist) revolves around a slippery slope argument: that these passports would eventually be used for more than just vaccine identification. You can probably already sense where this is heading, but yes, there’s evidence that slide is happening, both in the US and abroad. 

Back in June, a FOIA request filed by the Surveillance Technology Oversight Project revealed that IBM (the maker of New York’s Excelsior Pass) had signed a three-year contract with the state of New York worth $17 million—a figure nearly seven times higher than the $2.5 million publicly disclosed. Though the Excelsior Pass had originally been pitched as a limited, short-term solution, the contract revealed New York has instead asked IBM to provide a roadmap for making the app accessible to the state’s entire population of 20 million. So why the discrepancy?

Around the same time, per The New York Times, IBM’s vice president of emerging business networks reportedly gave an interview in which he admitted New York state was considering broadening Excelsior’s scope, with discussions underway over how the passport could be expanded to serve as a digital wallet capable of storing driver’s license information or other health data. As I write this, the Excelsior Pass already allows users to upload an image of their driver’s license. 

Just this week, another FOIA request from the Surveillance Technology Oversight Project found the expected cost of the Excelsior passes had swelled yet again, this time to $27 million. That contract alluded to a potential Phase 3 that would expand the data to include vaccination results from residents of New Jersey and Vermont. Meanwhile, Google, Apple, and Samsung have all announced plans to create features in their phones that let users generate QR codes with their vaccination details embedded. This comes as Apple is preparing to roll out a new feature in iOS 15 that will let users store a digital version of their driver’s license alongside their credit cards and other personal information in its Apple Wallet feature.
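To make that concrete, generating such a QR code takes only a few lines. The sketch below uses the third-party Python qrcode library with a payload format I’ve invented purely for illustration; real schemes like SMART Health Cards wrap the data in a cryptographic signature so a venue can tell it wasn’t forged.

```python
import json
import qrcode  # third-party library: pip install qrcode[pil]

# Hypothetical, unsigned payload for illustration only.
record = {
    "name": "Jane Doe",
    "dob": "1990-01-01",
    "vaccine": "BNT162b2",
    "doses": ["2021-04-01", "2021-04-22"],
}

# Encode the record as a scannable QR image and save it to disk.
img = qrcode.make(json.dumps(record))
img.save("vaccine_pass.png")
```

And that’s the rub: anyone can mint an unsigned code like this, which is why who verifies the data, and how, matters at least as much as the QR code itself.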

This all sounds relatively benign for now, but there are risks. Earlier this year, officials in Singapore announced they had used data gathered from the country’s TraceTogether contact tracing app to prosecute an individual in a murder case. Officials defended the move, saying they would only use the data (which they had previously said would be limited to Covid-19 contact tracing) in “serious” crimes. Just one month later, Singapore passed a law officially allowing law enforcement to use data gleaned from its contact tracing app in criminal investigations. In most countries, the promise to limit vaccine passports lies only in the word of governments and companies. According to Top10VPN, of the 120 contact tracing apps available in 71 countries, 19 didn’t even have a privacy policy. 

Where Do We Go From Here? 

In the background of all this uncertainty is a raging pandemic that, thanks to new variants, is re-ravaging countries and threatens to stay with society, in some capacity, for many, many years. With this in mind, vaccine passports, in some form, will likely be an unavoidable reality for cities, states, and countries looking to resume normal operations with some modicum of sanity.

The privacy issues outlined here aren’t necessarily a plea to excise vaccine passports. On the contrary, most of the fears expressed could be addressed by a more open, transparent, and public-facing approach to app design. If vaccine passport makers commit, for example, to actively pushing against the collection of location data, the apps could actually present much greater upside with less risk than previous contact tracing efforts. 

As someone living in New York City, I’ve personally downloaded an app and have already used it to enter my gym. I opted to use the city app over the state’s Excelsior Pass because it appears slightly more privacy-preserving, but concerns remain. As with everything else the pandemic has highlighted, though, personal risk calculations are just that: personal. Though the privacy implications associated with vaccine passports are real and present, they exist amid a realer and more present pandemic reality.

BMD



Here’s What Else is New 

A coalition of more than 90 civil liberties groups around the world is calling on Tim Cook to stop the rollout of Apple’s controversial CSAM scanning tool 

  • The groups, which include the ACLU, the Center for Democracy and Technology, and others, wrote that they worry the tool “will be used to censor protected speech, threaten the privacy and security of people around the world, and have disastrous consequences for many children.” 

  • "Once this capability is built into Apple products, the company and its competitors will face enormous pressure — and potentially legal requirements — from governments around the world to scan photos not just for CSAM, but also for other images a government finds objectionable," the advocates wrote.

  • The bottom line: Any hope Apple had that this issue would get swept away in the news cycle’s current looks to be quickly vanishing. 

Cuba passed a sweeping new internet law criminalizing the spreading of fake news. 

  • The move comes just one month after the country experienced its largest anti-government protests in decades, which the government has officially blamed on outside US agitators on social media. 

  • Cuba’s law is the latest in a growing number of countries passing restrictive legislation criminalizing online speech. 

China passed one of the world’s strictest data privacy laws … but it won't apply to government surveillance

  • Called the Personal Information Protection Law, the law will require any individual or private company interacting with a Chinese user’s data to first obtain consent and then limit data collection going forward.

  • Though China’s new rules may place some of the world’s strictest data restrictions on private firms, those same standards won’t apply equally to the government’s own internet activities.


That’s it for now! See y’all.

Apple's Child Abuse Privacy Inflection Point

Volume III, Issue I

Hello again everyone, I hope y’all have had a chance to enjoy the summer and take a break from one of the most perplexing years in modern history (recent variants notwithstanding).

It’s been some time since I’ve written here. As I’ve mentioned in previous issues, work-life ramped up pretty quickly for me over the past six months, and, well, I got a little burned out. That said, I’m back and am planning to pick up the pace with the newsletter, regularly releasing a mix of longer and shorter form content. You can also expect to see more integration here between The State of Surveillance and my sister podcast, The Future is Ow. After about 15 episodes or so, I’ve noticed there’s an incredible amount of parallel there that I think all of y’all would find beneficial. You can check out that podcast and subscribe here.

One final programming note before we get started. In an effort to avoid sinking into the pitfalls of burnout once again, I’ve decided to take some time and reevaluate what makes the cut for this newsletter. In doing so I tried to grapple philosophically with what the word “surveillance” means to me. In the past, I’ve taken it to apply to invasive tech, troubling legislation, and dystopic examples of corporate overreach. Those all certainly fit the bill, and they will continue to feature prominently here.

However, surveillance is bigger than that. To me, surveillance is also synonymous with the removal of autonomy. If an individual in a society is made to feel powerless to the point of inaction or silenced to the point of submission, then I think that too is a form of surveillance, whether or not a piece of what we would call advanced technology was ever imposed. With that in mind, there are many flavors of daily life that, under this broader interpretation, are worthy of exploring here. This could include authoritarian government leaders, escalating political polarization, the rise of misinformation and the collapse of public trust, anti-competitive practices by wanna-be monopolist companies, new tech that seeks to quell human autonomy, or maybe something else entirely.

On a practical level, this broader focus will allow me to write with more vigor, enthusiasm, and intrigue. Generally, though, I think it will make this space an interesting corner of the web for thinking critically about what it means to be surveilled, and maybe more importantly, what you can do about it.

And with all that out of the way, let’s jump back into it with a story I think perfectly encapsulates many of the points outlined above.


Today’s Big Story: Apple's Child Abuse Privacy Inflection Point

In a sense, Apple’s recent privacy debacle surrounding the company’s decision to add a new child sexual abuse material (CSAM for short) scanner to its products began a long time ago. Though the news made a splash last week, the features had reportedly been in development for months. These new features, which as we’re about to discuss below may fundamentally move the goalposts on privacy writ large, might have skirted by largely unnoticed if a Johns Hopkins professor of cryptography had not leaked the details. This practice of rolling out first and asking forgiveness later is par for the course for Apple, according to coworkers and experts I’ve spoken to over the past week. 

Explaining Apple’s new CSAM features: 

Let’s back up for a moment and explain exactly what caused the current privacy uproar. Last week, Matthew Green, the aforementioned professor, took to Twitter to sound the alarm over a new image scanning tool Apple was preparing to roll out called neuralMatch.

The aim of neuralMatch is to scan images Apple users upload to iCloud, before they ever get to iCloud, to determine whether they contain child pornography. Though other major cloud services already scan for child sexual abuse and other harmful content, Apple’s software works differently, scanning images locally on a user’s device before they ever reach iCloud. In this way, Apple can supposedly maintain its stance on privacy by saying, unlike other tech companies, that it does not scan its cloud service. But, as is becoming clear, scanning images on a user’s device before they reach iCloud is a distinction without a difference. Many iPhone users, myself included, regularly have their images offloaded to iCloud without ever explicitly uploading those items themselves.

First, some more on how neuralMatch works. According to Apple, the feature uses a hashing technology called NeuralHash to scan images on a user’s device and cross-reference them against a database of known CSAM files maintained by the National Center for Missing & Exploited Children, also known as the NCMEC. 

If enough similarities between the hashes are flagged and a certain threshold is met (one image isn’t enough, for example), the user’s data is then reviewed by an Apple employee and passed along to law enforcement if necessary.
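If you’re wondering how an image “hash” can survive resizing or re-compression at all (an ordinary cryptographic hash changes completely if a single pixel does), here’s a toy perceptual hash for intuition. This is the classic “average hash,” a far simpler cousin of Apple’s neural-network-based NeuralHash, not the real algorithm, and it assumes the Pillow imaging library is installed.

```python
from PIL import Image  # pip install Pillow

def average_hash(path: str, hash_size: int = 8) -> int:
    """A simple perceptual hash: similar-looking images get similar bits."""
    # Shrink to an 8x8 grayscale thumbnail, throwing away fine detail.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    # One bit per pixel: brighter than the average or not.
    return sum(1 << i for i, p in enumerate(pixels) if p > avg)
```

Because small edits barely change the thumbnail, the hash survives them, which is exactly what makes perceptual hashing useful for matching known images, and exactly what worries researchers about false or adversarially crafted matches.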

Almost instantly, neuralMatch drew the fierce condemnation of security and privacy activists who worried the feature amounted to a backdoor to encryption. Others, fearful of government surveillance, worried Apple could expand its hashing tech to scan beyond child pornography to other less clear-cut cases like terrorism, protests, or other actions deemed deplorable by officials holding positions of power. 

Apple also revealed another new feature aimed at children 12 and under using iMessage family accounts. Here, Apple uses machine learning to scan every image sent or received by a child user, and if an image is determined to include sexually explicit material, an alert is sent to the account owner (usually the child’s parent). This feature immediately drew the ire of activists who warned it could end up outing queer or transgender kids, or worse, provoke the wrath of an abusive family member. 

As mentioned above, while Apple confirmed plans to roll out the features for iOS 15 and macOS Monterey in the coming months, it did so only after leaks and media reports made the controversial new initiatives public. Apple, the supposed bastion of privacy and free expression in a tech industry run by surveillance-hungry despots, also chose to roll this out without consulting advocates, academics, or any other concerned member of civil society. 


Stirring up the Hornet’s Nest 

Condemnation of Apple’s unilateral privacy pivot came swiftly and from a diverse crowd of commenters. The Electronic Frontier Foundation, normally a supporter of Apple’s privacy-preserving efforts, described Apple’s move as an “about-face” on user privacy and accused Apple of opening up a backdoor for law enforcement. In a blog post, the EFF wrote, “at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor.” Well said.

Edward Snowden, the whistleblower who blew the lid off the NSA’s domestic spying program, took to Twitter and warned Apple could potentially expand its search capabilities beyond child sexual abuse content. In a blog post, Cambridge University security engineering professor Ross Anderson warned Apple’s scanning feature could be used by authoritarian governments to entrap dissidents speaking ill of their leaders. Others, like the original leaker Matthew Green, expressed concern over Apple’s use of child abuse hashes pulled from a database inaccessible to the public, leaving third parties powerless to review Apple’s decisions.

Apple’s Attempt at Damage Control 

In a memo leaked immediately after news of the features was made public, Apple software vice president Sebastien Marineau-Mes tried to dispel these concerns as “misunderstandings” and said Apple’s tools were being rolled out with privacy in mind. 

That didn’t satisfy critics, leading Apple to quickly roll out several FAQs responding to the concerns. In those FAQs, Apple rejected privacy and security advocates’ claims that its recently revealed neuralMatch CSAM image scanning tool weakens encryption and creates a backdoor for government surveillance, and said it would refuse demands from governments to add non-CSAM images to its scanning tool.

But as many have pointed out since, it doesn’t really matter if Apple says it won’t expand the filter beyond CSAM. As soon as a government passes legislation requiring Apple to hand over non-CSAM data, Apple will legally have no choice but to comply. This is an important shift. Previously (as was the case when Apple refused to give in to FBI requests to decrypt the San Bernardino shooter’s phone in 2015), Apple could say it did not possess the technical ability to do so. Now it does. As soon as Apple rolls out its new update, the company will possess the ability to scan users’ photos locally on their devices. Right now that’s limited to CSAM material, but a simple update could expand it to add the hashes of any other material a government deems necessary. 

It’s worth noting here that Apple, which has painstakingly developed a reputation as the vanguard of consumer privacy and a bulwark against government anti-encryption efforts, has succumbed to such requests in the past. Apple has previously shipped phones without FaceTime in certain countries and has conceded to Chinese government requests to take down controversial apps and to store user iCloud data and encryption keys on Chinese servers rather than in the US.

CSAM: A Harbinger of What’s to Come?

Apple’s new features don’t exist in a vacuum. Governments all around the world have stepped up efforts to weaken encryption in recent years. 

  • In 2018, Australia became the first western country to pass legislation allowing law enforcement to require tech companies to hand over encrypted user data.

  • Then, late last year, the European Union adopted a resolution on encryption that critics warn could encourage companies to provide law enforcement with encrypted user data.

  • Meanwhile, last year a bipartisan group of US senators introduced the EARN IT Act, a bill aimed at combating child exploitation that activists warned would weaken encryption. Though the bill failed, the proposal reignited US calls to limit encryption. 

In general, governments and law enforcement eager to encroach on encryption have found a convenient cudgel in child pornography. Knowing that consumers and politicians don’t want to be seen opposing efforts to reduce child sexual abuse, these actors use the issue as a gateway for advocating incremental encryption exceptions. But as the EFF and others have pointed out, encryption is a take-it-or-leave-it concept. An individual’s device is either secure, or it isn’t. Offering even slight exceptions to encryption breaks the security effort at its very foundation. 

What now?

The last two weeks have been a whirlwind for people who care about personal privacy, encryption, and government surveillance. It would be easy to dismiss the Apple news as a minor product change limited to Apple customers, but that would be a mistake. For years, Apple has stood apart as one of the few major companies willing to take a firm stand against a powerful government in favor of individual autonomy.

Psychologically, the fact that Apple, the world’s most valuable company by market cap, was willing to hold firm on this position meant that other, smaller firms could see privacy, even controversial privacy, as a business positive. If Apple chooses to move forward with these measures, it will have entered new territory, opening itself up to the whims of governments and making itself essentially no different from any other large tech firm that exchanges users’ personal information with governments as freely as a gust of wind. The impact this has on everyday users’ psyches may well be profound. 

There is, of course, still time. Apple could choose to slam the brakes on this new set of features and, in turn, cling to its reputation as a bastion of privacy.

Unfortunately, given its flippant, at times hostile, response to reporters and activists questioning it on the matter, that seems more and more unlikely. As is the case with so many critical issues, the future of millions of people is in the hands of multi-trillion-dollar corporations.

May they have mercy.



Here’s What Else is New

🦠 New York’s recent proof of vaccination requirement is reigniting privacy debates over vaccine passports 🦠

  • New York recently became the first major US city to include vaccine passes as part of its vaccination mandates.

  • As we’ve discussed previously in this newsletter, data from similar Covid apps in countries like Singapore has been used against individuals in criminal proceedings.

  • In addition, New York State’s Excelsior Pass is reportedly being expanded to serve as a broader digital ID, not limited just to Covid-19.

🥽 Facebook wants to build “the metaverse”

  • The company has officially created a new product unit aimed at creating a 3D social space using virtual and augmented reality tech.

  • The metaverse is a concept derived from science fiction, and refers to a shared virtual space where users live out large portions of their lives. 

  • The timing: Facebook’s grand metaverse proclamation drops at the same time as regulatory pressure on the company is reaching a boiling point. Facebook is running out of new users for almost all of its apps … this big push is arguably an act of survival. 

🇷🇺 Russia’s still trying to create its own independent internet but it’s encountering a fundamental problem 🇷🇺

  • Russia announced it has successfully run tests that will allow it to separate itself from the global open internet.

  • In practice though, it looks like Russia is creating its own local internet to access the larger global internet.

  • The bigger picture: Russia has spent years building its alternative internet, but it appears to want to have its cake and eat it too: it wants to operate totally independently and have more control, yet still be able to access the global internet and the content and services it provides. Those two visions appear antithetical to one another at the moment.

⚠️ Citizen, the vigilante neighborhood app, is hiring people to livestream crimes and emergencies ⚠️

  • A job posting discovered by The Verge showed Citizen was prepared to pay some users $200-$250 per day to record events and interview police and witnesses.

  • As my friend and podcast co-host Jonah recently put it, Citizen is basically paying people to become Jake Gyllenhaal from Nightcrawler.

And that’s all folks. See you all soon!

Are You Ready for Airport Iris Scanners?

Volume II, Issue XVIII

Hello again everyone, and happy spring! It’s warm and hopeful in NYC right now, two words I haven't used together in some time now.

Before we get started today, I just wanted to remind everyone that I’ve got a new podcast out called The Future is Ow. The podcast discusses many of the same topics addressed here in a fun, conversational format with my good pal Jonah Inserra. If you’re interested, please consider giving us a listen and a subscribe on Apple Podcasts or SoundCloud. Thank y’all.

With that out of the way, let’s get into it!

The Weekly Run-Down

✈️ 1: UAE Rolls Out Iris Scanners at Airports ✈️

The United Arab Emirates has started rolling out iris scanners to authenticate airline passengers’ identities during air travel.

  • The new surveillance system impacts 122 “smart gates” at Dubai International Airport. 

  • The move adds a layer of convenience by removing the need for a boarding pass or human contact at check-in but comes with a steep surveillance price tag. 

How it works: Passengers peep into a kiosk at check-in where a machine scans their eyeball. 

  • The scanner links the passenger’s biometrics data with their boarding pass and other flight information, allowing them to pass through security, immigration, and even reportedly enter Emirates Lounges without any supporting documentation.

  • Those iris scanners are linked to the UAE’s national facial recognition database, which includes troves of personal information on individuals. 

Though the US has yet to employ widespread iris scanners, facial recognition has quietly crept into airports across the country. 

  • Since 2019, American Airlines, Delta, British Airways, and JetBlue have all experimented with facial recognition to speed up check-ins, and in some instances, used face scans to replace boarding passes.

  • In total, at least 27 US airports use facial recognition authenticators in some capacity. Nearly all of these cases are opt-out.

The allure of convenience: In both the UAE and the US, passengers have shown a willingness to sacrifice privacy for the promise of marginal efficiency.

  • Airlines, police, and biometric authenticator advocates argue the use of the technology increases security, reduces long lines, and helpfully limits human-to-human contact during a pandemic. 

  • Users are willing to sacrifice high levels of autonomy in the name of security and perceived convenience. 

  • A 2019 Pew poll found that 59% of Americans find facial recognition use by law enforcement in the name of security acceptable.

  • Separately, a 2019 Experian survey found that 70% of global consumers were willing to share more personal data if that sharing came with perceived benefits.



2: 🇲🇾 Malaysian Publication Fined Nearly $124,000 for User Comments … But the Real Reason Was State Retaliation 🇲🇾

A Malaysian news outlet was forced to pay a fine over the contents of five user comments on one of its articles. Reporters from the site claim the comments are being used as a pretext to target its adversarial reporting critical of the government. 

News site Malaysiakini was found guilty of contempt of court and had to pay the equivalent of about $124,000 USD.

  • The fine revolved around five comments from users on the site, which the government claims illegally insulted Malaysia’s judiciary. 

  • Malaysiakini had removed the comments from the site but not before government officials saw them. 

  • In its ruling, the Malaysian court found Malaysiakini failed to properly vet the comments.

Malaysiakini and human rights organizations claim the real purpose of the fines was to intimidate the site and stop it from publishing content critical of the government. 

  • Amnesty International released a statement calling the fines a “grave setback” for the country. 

  • Separately, a spokesperson for the US embassy in Kuala Lumpur spoke out against the ruling. 

The Malaysian ruling highlights the importance of platform-protecting policies like Section 230 of the Communications Decency Act in the United States. 

  • On a very basic level, Section 230 protects newspapers and other publishers from being punished in this very way.

  • At the same time, legislators and activists have called for revisions or outright reversals of Section 230 in the US, arguing its liability protections allow Facebook and other platforms to facilitate misinformation and hate speech without bearing any of the burdens. 

  • Newly elected President Joe Biden has signaled a willingness to revoke 230, but members of his government remain divided.

Richard Paddock, The New York Times 


3: Myanmar Shuts Down Wireless Internet Amid Violent Coup 

The Myanmar military worked alongside the nation's internet providers to shut down wireless broadband last week. 

  • Providers said the shutdown request came at the behest of the Ministry of Transport and Communications who demanded, “all wireless broadband data services be temporarily suspended.” 

  • The shutdown order comes after weeks of regional crackdowns on internet and social media access amid continued protests against the country’s military coup. 

  • With the most recent shutdown, internet access in the country is limited to individuals with fiber optic cable connections (a minority) and even that is being purposely throttled down to a trickle. 

Myanmar has been engulfed in violence since the military seized control of the country on February 1. 

  • The military took control following an election where former leader Aung San Suu Kyi won in a landslide. Officially, the country is under a year-long state of emergency.

  • Hundreds of people have died in clashes between the military and pro-democracy protesters since the initial coup. 

  • Thousands more have faced arrest, including Aung San Suu Kyi and members of her National League for Democracy (NLD).

Internet shutdowns have become essential tools used by dictators and authoritarians around the world to supplement physical violence. 

  • Parts of Myanmar's shutdown far precede the coup, with regions populated by the ethnic minority Rohingya Muslims facing shutdowns dating back to 2019. 

  • Many of the online repression tools being used by the current military junta were in fact put in place by the country’s democratically elected government, an uncomfortable reality highlighting the long-term risks posed by the passage of online repression tools regardless of the regime. 

Lily Hay Newman, Wired 




4: DARPA Wants Intel and Microsoft to Usher in the Next Era of Encryption

Intel is partnering with Microsoft to work on the next stage of encryption for the US military.

  • In recent years, encryption has evolved from a niche, expensive luxury reserved for the military and select institutions into a standard demand from consumers all around the world. 

  • WhatsApp, which has over 1 billion users worldwide, has helped lead the charge to normalize encryption. 

  • Yet, while WhatsApp and other messaging services can encrypt messages from end to end, tech companies have largely failed to solve the problem of keeping data encrypted while it is actually being used. 

  • That means a server (or anyone who compromises it) still has to decrypt your data before it can perform any computation on it, creating a window in which the contents are exposed. 

Intel and Microsoft are vying to solve this problem through hardware and software for homomorphic encryption.

  • Homomorphic encryption keeps data protected even while it’s being worked on: it offers the ability to perform calculations on data without ever decrypting it (see the toy sketch at the end of this item). Basically, it would be the strongest encryption currently available.

  • So, naturally, the US military (specifically the Defense Advanced Research Projects Agency, or DARPA) wants it. 

  • IBM is also reportedly working on homomorphic encryption but isn’t part of the military partnership. 

This type of encryption technology might eventually trickle down to everyday people … but not anytime soon. 
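For a rough taste of what “computing on encrypted data” even means, note that even plain textbook RSA has a limited homomorphic property: multiplying two ciphertexts produces a ciphertext of the product of the plaintexts. The toy below uses deliberately tiny, insecure numbers for illustration only; fully homomorphic schemes of the kind DARPA is after generalize this trick to arbitrary computation.

```python
# Textbook-RSA toy: deliberately tiny, insecure parameters.
p, q = 61, 53
n = p * q        # public modulus (3233)
e = 17           # public exponent
d = 2753         # private exponent: e * d = 1 (mod lcm(p-1, q-1))

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

c1, c2 = encrypt(6), encrypt(7)
product_ciphertext = (c1 * c2) % n             # computed without decrypting anything
assert decrypt(product_ciphertext) == 6 * 7    # recovers 42
```

The server in this scenario multiplies two numbers it cannot read and hands back a result only the key holder can open. Scaling that idea from one multiplication to arbitrary programs, at practical speeds, is the hard part Intel, Microsoft, and IBM are chasing.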


Here’s What Else is New

🇷🇺 Apple caves to Russian government pressure to show users government-approved apps during iPhone set up 🇷🇺

  • The move is part of a recently implemented 2018 Russian security law.

  • Though Android phones are reportedly required to come with the apps preinstalled, Apple appears to have worked out an agreement with the government whereby users can opt out of installing the apps. 

🎖️ A US military unit responsible for drone strikes is buying up location data from everyday apps 🎖️

🇮🇳 India is reportedly trying to develop its own internet 🇮🇳

  • In doing so it would follow the lead of China, Russia, and others in detaching from the global open web.

🤖 Drone maker Skydio is reportedly working with over 20 different police agencies in the US 🤖

🔒Whistleblowers at an Arizona prison claim faulty algorithms are keeping hundreds of inmates behind bars longer than their actual sentence 🔒


That’s it for now. As always, please feel free to reach out to me at thestateofsurveillance@gmail.com or Mack.degeurin@gmail.com.

Have a nice weekend y’all.
