PSA: We've received questions about push notifications. First: push notifications for Signal NEVER contain sensitive unencrypted data & do not reveal the contents of any Signal messages or calls, not to Apple, not to Google, not to anyone but you & the people you're talking to. 1/
In Signal, push notifications simply act as a ping that tells the app to wake up. They don't reveal who sent the message or who is calling (not to Apple, Google, or anyone). Notifications are processed entirely on your device. This is different from many other apps. 2/
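For the curious, here is a minimal sketch of what this content-free "ping" pattern can look like on Android. The class and helper names are hypothetical; this illustrates the general pattern, not Signal's actual code:

```kotlin
import com.google.firebase.messaging.FirebaseMessagingService
import com.google.firebase.messaging.RemoteMessage

// Illustrative only: the push payload is treated as an empty ping.
// No sender, no recipient, and no message content ride on it.
class WakeUpPushService : FirebaseMessagingService() {

    override fun onMessageReceived(message: RemoteMessage) {
        // Deliberately ignore message.data: there is nothing sensitive
        // in the payload to read. The push exists only to wake the app.
        fetchPendingMessagesOverSignalChannel()
    }

    private fun fetchPendingMessagesOverSignalChannel() {
        // Hypothetical helper: connect to the Signal service over the
        // app's own authenticated channel, pull down waiting ciphertext,
        // and decrypt it locally on the device.
    }
}
```

The point is structural: even if the push provider logged every notification, all it would see is "wake up."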
What's the background here? Currently, in order to enable push notifications on the dominant mobile operating systems (iOS and Android), those building and maintaining apps like Signal need to use services offered by Apple and Google. 3/
Apple simply doesn’t let you do it any other way. And Google? Well, you could (and we've tried), but the cost to battery life is devastating, rendering this a false option if you want to build a usable, practical, dependable app for people all over the world.* 4/
So, while we do not love Big Tech choke points and the control that a handful of companies wield over the tech ecosystem, we do everything we can to ensure that in spite of this dynamic, if you use Signal your privacy is preserved. 5/
*(Note: if you are among the small number of people who run alt Android-based operating systems that don't include Google libraries, we implement the battery-destroying push option, and hope you have ways to navigate the cost.) 6/
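For reference, a rough sketch of what that Google-free fallback involves (hypothetical names, with OkHttp used purely for illustration): the app must hold its own long-lived connection and ping it constantly, which is exactly what drains the battery.

```kotlin
import java.util.concurrent.TimeUnit
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.Response
import okhttp3.WebSocket
import okhttp3.WebSocketListener

// Illustrative fallback: with no OS-level push channel available, the app
// keeps its own socket open around the clock. The frequent keepalive pings
// that stop the connection from dying are what keep the radio awake.
class PersistentSocket(url: String) {

    private val client = OkHttpClient.Builder()
        .pingInterval(30, TimeUnit.SECONDS) // constant keepalives = battery cost
        .build()

    private val request = Request.Builder().url(url).build()

    fun connect() {
        client.newWebSocket(request, object : WebSocketListener() {
            override fun onMessage(webSocket: WebSocket, text: String) {
                // A "you have messages waiting" nudge arrives over our own
                // channel; fetch and decrypt locally, just as with FCM.
            }

            override fun onFailure(webSocket: WebSocket, t: Throwable, response: Response?) {
                // A real app would reconnect here; every reconnect is
                // another radio wakeup, compounding the battery drain.
            }
        })
    }
}
```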
I did not sign this statement, tho I agree “open” AI is not the enemy of “safe” AI
I can't endorse its premise that “openness” alone will “mitigate current+future harms from AI,” nor that it’s an antidote to concentrated power in the AI industry 1/
This is esp true in an ecosystem where the term “open”, in the context of AI, has no clear definition, leaving it ripe for abuse + instrumentalization by firms like Meta (who signed on + are currently brandishing this same statement to promo their ersatz "open" AI offerings). 2/
As my coauthors & I show (paper👇), “open” AI can, in some forms, ensure transparency + reusability + extensibility. This is good. But it *does not* level the playing field in the concentrated AI industry: the resources needed to create/deploy AI remain in the hands of a few firms. 3/
UK Home Office's recent attack on e2e encryption moves into the realm of baseless propaganda, diverging significantly from much of the rest of UK gov & from established expert consensus, complete w media salvo
Let's review; it's important to recognize what they're doing... 1/
After starting w emotionally upsetting statistics about child abuse, which work to evoke distress, they then characterize e2ee as tech that "overrides current controls in place that help to keep children safe." 2/
Subsequently, they engage in a specious rhetorical move common to this latest anti-encryption wave: they conflate social media broadcast platforms with messaging services, and imply that e2ee messaging will stop efforts to mitigate the sharing/broadcast of CSAM. This is false. 3/
Dear Lord Bethell, thank you for being willing to engage. I'm providing my reply here, in a thread & a linked letter, given that the public is clearly interested in this topic. I am hopeful that we can find common ground.
Where @davidthewid, @sarahbmyers & I unpack what Open Source AI even is.
We find that the terms ‘open’ & ‘open source’ are often more marketing terms than technical descriptors, and that even the most 'open' systems don't alone democratize AI 1/
To ground a more material view of 'open' AI, we carefully review the resources required to create AI at scale, enumerating what can be 'open' (to scrutiny, reuse, and extension), and what constitutively remains 'closed'. 3/
We find that while a handful of maximally open AI systems exist--offering transparency, reusability, and extensibility--the resources needed to build AI from scratch, and to deploy large AI systems at scale, remain in the hands of a few large companies. 4/
If ethical surveillance isn't possible (reader it is not), then neither is ethical AI.
Who's ready for THAT convo?
This got a vibrant and fractious conversation going! I don't have time to reply and unpack today, of all back-to-back days. So I'll say here that I think a lot hinges on the definition of surveillance (which ≠ all information created about our shared reality)...
...but IS a very broad category, one that requires us to recognize the relationships of domination encoded in most data creation practices, even if the pretext of these practices is "good", alongside the constitutive tendency of data to escape the intentions of its author...
In which I connect Charles Babbage & his 19th c. blueprints for digital computation to industrial labor control & the creation of a regime of denigrated, disciplined "free" labor.
All of which has its roots in plantation slavery. 1/
Labor division, worker surveillance & record keeping are techniques that emerged on plantations as ways to extract as much labor from enslaved workers as possible, well before they were deployed in industrial factories. 2/
Babbage was both the early co-designer of digital computing & an influential theorist of labor discipline. Both Babbage's "engines" & his labor theories repackage, expand on, and encode plantation techniques, particularly labor division and worker surveillance. 3/