Hello again everyone! Just a quick reminder to check out this newsletter’s sister podcast, The Future is Ow, anywhere you get podcasts. You can check out our latest episode on Apple Podcasts here or at SoundCloud below.
Alright, with that out of the way, let’s get going!
Today’s Big Story: Apple Walks Back CSAM Detection System...for Now
Well, this is a change of pace. In a techno futurist world often seemingly hellbent on forward movement in spite of collateral damage, it’s rare to see the tides of resistance reel in the surveillance sea. Yet, that’s exactly what happened last week, though as we’ll get into, the change may represent more of an ephemeral reprieve than a codified decree.
As discussed at length in a previous issue of this newsletter, Apple recently revealed its plans to release an image detection tool called NeuralHash, which works by scanning (though Apple disputes that framing) images on a user’s device and cross-referencing them against a database of known child sexual abuse material (CSAM) files maintained by the National Center for Missing & Exploited Children.
If enough similarities between the hashes are flagged and a threshold of 30 images is met, the user’s data is then reviewed by an Apple employee and, if necessary, eventually passed along to law enforcement. The scanning applies to images being uploaded to iCloud, but the scanning itself takes place locally on a user’s device, a marked departure from previous CSAM prevention methods. The effort was the focal point of a cascading wave of backlash, including from 90 civil and digital rights organizations who called on Apple to send the tools to pasture.
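The basic mechanism can be sketched in a few lines. To be clear, this is a hypothetical illustration, not Apple’s implementation: the real NeuralHash is a proprietary neural perceptual hash (so near-duplicate images produce matching digests), and Apple uses cryptographic techniques to keep match counts hidden until the threshold is crossed. The simple cryptographic hash below stands in only to show the flag-count-against-threshold logic.

```python
import hashlib

# Apple's stated reporting threshold: no review until 30 matches.
MATCH_THRESHOLD = 30


def hash_image(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash. A real system hashes visual
    # features so visually similar images map to the same digest;
    # SHA-256 only matches byte-identical files.
    return hashlib.sha256(image_bytes).hexdigest()


def scan_upload(images, known_hashes, threshold=MATCH_THRESHOLD):
    # Count how many uploaded images match the known-CSAM hash set.
    matches = sum(1 for img in images if hash_image(img) in known_hashes)
    # Nothing is flagged for human review until the threshold is met.
    return matches >= threshold
```

The key design point the threshold is meant to address: a single false-positive hash collision shouldn’t expose anyone’s data, since review only triggers once many independent matches accumulate.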
Ultimately, Apple is trying to strike some sort of middle ground by providing an alternative to scanning iCloud images in totality, but as multiple privacy advocates quickly pointed out, such a tool could potentially be co-opted by authoritarian governments around the world to scan for images other than CSAM material. And though Apple has repeatedly said it won’t expand its NeuralHash tool beyond CSAM material, as soon as a government passes legislation requiring Apple to do otherwise, the company will have no choice but to comply.
Which leads us to last week. On Friday, following weeks of pushback and a volley of back-and-forth debates, Apple announced it would delay the feature’s rollout. Here’s Apple’s statement per CNBC:
“Last month we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material,” the company said in a statement. “Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”
That’s a major reversal from just a few weeks ago, when Apple appeared determined to ram through the tools in spite of mounting opposition. But this “reversal” should never have had to happen in the first place.
One of the primary concerns around Apple’s new tools had nothing to do with the technology itself, but instead with its surreptitious rollout approach. News of the features (which would place digital fingerprints of known CSAM material on every iPhone) wasn’t made widely known until a researcher risked his reputation to leak the details. What’s more, Apple, widely regarded as a bulwark for privacy amid an increasingly surveillance-sympathetic tech landscape, declined to invite any critical members of civil society to weigh in on the update prior to launch.
Still, the news was met with cautious optimism from privacy and security experts including former Facebook CSO Alex Stamos. “I think this is a smart move by Apple,” Stamos told Wired. “There is an incredibly complicated set of trade-offs involved in this problem and it was highly unlikely that Apple was going to figure out an optimal solution without listening to a wide variety of equities.”
In addition to the scanner delay, Apple said it’s also pumping the brakes on an AI system it had developed to identify explicit images sent and received by users under 13 and subsequently alert the account holder (usually a parent). For context, many advocates warned this feature risked outing queer or trans children, or could put other children with abusive parents in danger.
Though news of the delay represents a significant victory for privacy advocates, not all welcomed the decision. Child safety groups, including the National Society for the Prevention of Cruelty to Children (NSPCC), admonished Apple for giving in to pressure.
“This is an incredibly disappointing delay,” Andy Burrows, the NSPCC’s Head of Child Safety Online Policy, told The Guardian. “Apple were on track to roll out really significant technological solutions that would undeniably make a big difference in keeping children safe from abuse online and could have set an industry standard.”
A Temporary Ceasefire
Apple’s delay is significant in that the company is finally acknowledging its critics after months of refusing to listen, but the debate over the technology and its rollout is far from over. In announcing its delay, Apple also implicitly confirmed its unwavering intention to move forward with the tool in some capacity. And as others have pointed out, it’s unclear how any such scanning tool could exist without fundamentally altering the dynamics of encryption and privacy as we know it.
Matthew Green, the researcher who originally leaked the features, suggested Apple could limit the scanning to shared iCloud accounts rather than individuals’ devices, but even that comes with its own set of privacy tradeoffs.
Delays and conversations are of course welcome, but one is left with the lingering feeling that Apple is simply buying time, hoping to step outside the news cycle before reintroducing the features to a less acutely hostile audience.
It’s with that in mind that the Electronic Frontier Foundation, one of the most vocal groups opposing the new tools, came out and immediately called on Apple to abandon the new tools entirely.
“EFF is pleased Apple is now listening to the concerns,” the organization wrote, “but the company must go further than just listening, and drop its plans to put a backdoor into its encryption entirely.”
Why Now?
While researching this story and hashing out the details with friends, I was struck by the sense that all of this felt quite unique. As a writer coming of age in the wake of September 11th and amid a gushing wave of techno-optimistic fervor, it often felt as if Silicon Valley behemoths and the titans of the US state security apparatus regularly shared familiar ground, both routinely escaping scrutiny and constantly being heralded as benevolent caretakers of society.
But eventually, things started to change. The wars dragged on, the body counts rose, city after city was reduced to rubble, only for the invading force to retreat nearly two decades later, leaving its experiment largely where it began. It’s no wonder then that public trust in the military has tanked to near record lows.
In a similar vein, the lacquered pedestal Big Tech once stood upon has gradually been swept away, revealing a figure increasingly guided not by utopian visions of human flourishing but by acts of transparent deception. I’d wager this unraveling ramped up with the Snowden revelations, but it’s continued to metastasize with regularly scheduled horrors, from the Cambridge Analytica scandal in 2018, to a Facebook-fueled genocide in Myanmar, to the storming of the US Capitol by misled retrogrades.
Said more bluntly, tech companies (and just about every other once hallowed institution) are hemorrhaging trust.
Don’t take it from me. This same point was made in a recent report from global communications firm Edelman, which found public trust in tech companies was at its lowest point on record, both in the US and in 17 of 27 markets surveyed.
Much of this rapid decline in trust can be traced back to compromises made over user privacy. According to Edelman, 34% of adults surveyed worldwide viewed data privacy as a social issue brands should address … and yet users are still constantly being let down.
That has consequences. A separate survey from research firm Genesys found that misusing or abusing personal data was the top reason respondents would lose trust in a company. And last year, over half (52%) of US adults said they decided not to use a product or service based on fears about how much personal information would be collected about them, with 15% describing sharing general personal information as “problematic,” per Pew Research.
With all these figures in place, it starts to make more sense why the time was right for a major social backlash to Apple’s CSAM scanner. As one of the few companies to draw a line in the sand in favor of privacy, Apple’s reversal felt like a betrayal to many. Though it seems likely Apple will still roll out this feature in some capacity in the coming months, the bigger, less-discussed story here is just how much the global demand for privacy has shifted in recent years. If this had happened five or ten years ago, I doubt we would have seen the same level of resistance.
What that means for the future though remains to be seen. While everyday users appear to see the value of digital privacy, governments and companies all around the world are moving in the opposite direction. At some point, these two opposing forces are destined to clash, and what comes out the other side is still anyone’s guess.
BMD
Here’s What Else is New
Facebook released its Ray-Ban style “smart” glasses
Users will be able to take photos and videos with two onboard 5 megapixel cameras and listen to audio through built-in speakers.
The public has known about the glasses for years and during that time Facebook has repeatedly lowered the bar in terms of what to expect from their actual technical ability.
These are not AR glasses in any meaningful way and to call them smart is a stretch. In reality, they are little more than a slight aesthetic improvement from Snapchat’s first pair of Spectacles released years ago.
Even if the glasses amount to little more than a small GoPro, they still present a huge privacy issue that has so far been left unanswered. Who gave Facebook the okay to turn every one of its customers into a walking CCTV camera?
Lucas Matney, TechCrunch
The LAPD are regularly collecting and monitoring social media information from suspects they stop
The document, obtained by the Brennan Center for Justice, found that officers regularly monitor the content with little oversight.
LAPD is also building a new tool called Media Sonar, “which can build detailed profiles on individuals and identify links between them.”
In the past, the LAPD has reportedly used Twitter analysis to track the movements of anti-Trump protesters on May Day, per The Verge.
Mary Pat Dwyer, The Brennan Center For Justice
Australia’s high court ruled media sites can be held liable for defamatory comments left on their sites
The move may force media companies and social media sites to heavily police or entirely remove public comments in order to avoid facing litigation.
For a sense of the scale of the issue, David Rolph, a professor of law at the University of Sydney, said the ruling “may mean anyone who runs a social media page can theoretically be sued over disparaging comments posted by readers or random group members — even if you aren’t aware of the comment.”
The ruling is a reminder of what could happen if US legislators make good on their claims to want to revoke Section 230 protections.
James Vincent, The Verge
That’s it for now. See y’all next week!