Meredith Whittaker
President of @signalapp, Chief Advisor to @ainowinstitute (Also on Mastodon @mer__edith@mastodon.world, also on bsky @meredithmeredith.bsky.social)
Dec 11, 2023 6 tweets 1 min read
PSA: We've received questions about push notifications. First: push notifications for Signal NEVER contain sensitive unencrypted data & do not reveal the contents of any Signal messages or calls–not to Apple, not to Google, not to anyone but you & the people you're talking to. 1/

In Signal, push notifications simply act as a ping that tells the app to wake up. They don't reveal who sent the message or who is calling (not to Apple, Google, or anyone). Notifications are processed entirely on your device. This is different from many other apps. 2/
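The wake-up design described above can be sketched in a toy model. This is purely an illustration, not Signal's actual code: the in-memory queue, the XOR "cipher", and every function name here are hypothetical stand-ins for the real protocol. The point it demonstrates is that the push payload itself carries no message content.

```python
import secrets

# Toy model of a content-free push-notification flow (NOT Signal's real
# implementation). The push payload carries no message data; it only
# tells the client to wake up and fetch ciphertext from the server.

SERVER_QUEUE = []  # hypothetical per-device queue of encrypted messages

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy stand-in for real end-to-end encryption (XOR keystream)."""
    return bytes(k ^ d for k, d in zip(key, data))

def server_deliver(ciphertext: bytes) -> dict:
    """Store ciphertext and emit a push. The push itself is empty:
    no sender, no content -- just a wake-up signal."""
    SERVER_QUEUE.append(ciphertext)
    return {"type": "wakeup"}  # the entire push payload

def client_on_push(push: dict, key: bytes) -> str:
    """On wake-up, fetch ciphertext and decrypt locally, on-device."""
    assert push == {"type": "wakeup"}  # the push reveals nothing
    ciphertext = SERVER_QUEUE.pop(0)
    return xor_cipher(key, ciphertext).decode()

key = secrets.token_bytes(32)
push = server_deliver(xor_cipher(key, b"hi from Alice"))
assert "hi from Alice" not in str(push)  # notification carries no content
print(client_on_push(push, key))         # plaintext exists only on-device
```

Note the design choice the sketch makes visible: because the notification is a bare ping, the push carrier (Apple or Google) only learns that *something* arrived, never what or from whom.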
Nov 2, 2023 6 tweets 2 min read
I did not sign this statement, though I agree “open” AI is not the enemy of “safe” AI.

I can't endorse its premise that “openness” alone will “mitigate current+future harms from AI,” nor that it’s an antidote to concentrated power in the AI industry 1/

open.mozilla.org/letter/

This is especially true in an ecosystem where the term “open”, in the context of AI, has no clear definition, leaving it ripe for abuse + instrumentalization by firms like Meta (who signed on + are currently brandishing this same statement to promote their ersatz "open" AI offerings). 2/
Sep 20, 2023 9 tweets 3 min read
The UK Home Office's recent attack on e2e encryption moves into the realm of baseless propaganda, diverging significantly from much of the rest of the UK government & from established expert consensus, complete with a media salvo.

Let's review; it's important to recognize what they're doing... 1/

Screenshot of Guardian headline: "Meta encryption plan will let child abusers hide in the dark, says UK campaign"

After starting with emotionally upsetting statistics about child abuse, which work to activate our frontal cortex and evoke distress, they then characterize e2ee as tech that "overrides current controls in place that help to keep children safe." 2/

Screenshot from Home Office site: "End-to-end encryption (E2EE) is a secure communication system where messages can only be seen by the sender and receiver. Technology companies currently use encryption positively to keep your bank transactions and online purchases safe and secure. Encryption has many other uses throughout everyday life, but some social media companies such as Meta are proposing to implement or already have implemented E2EE in private messaging spaces. E2EE overrides current controls in place that help to keep children safe and potential..."
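The Home Office's own definition quoted above ("messages can only be seen by the sender and receiver") is easy to illustrate with a toy sketch. This is not a real protocol -- real systems such as the Signal Protocol use authenticated key agreement and AEAD ciphers -- it only demonstrates the core property that nobody without the endpoint key can read the message:

```python
import secrets

# Toy illustration of the E2EE property: only the endpoints holding the
# shared key can recover the plaintext. NOT a real protocol.

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR keystream as a stand-in for a real cipher."""
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

endpoint_key = secrets.token_bytes(64)  # known only to sender & receiver
ciphertext = encrypt(endpoint_key, b"meet at noon")

# Any intermediary (a server, a scanner) sees only ciphertext...
assert ciphertext != b"meet at noon"
# ...while the endpoints recover the message exactly:
assert decrypt(endpoint_key, ciphertext) == b"meet at noon"
```

This is also why "scanning" proposals and E2EE are in tension: an intermediary holding only `ciphertext` has nothing meaningful to scan unless the encryption is weakened or bypassed at the endpoints.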
Aug 21, 2023 17 tweets 5 min read
Dear Lord Bethell, thank you for being willing to engage. I'm providing my reply here, in a thread & a linked letter, given that the public is clearly interested in this topic. I am hopeful that we can find common ground.

Here, then, is the letter: 1/ signal.org/blog/pdfs/LBRe…


2/ And here in the thread, I've added alt-text and provided each paragraph one by one, to enable people here to read and engage more easily.

Dear Lord Bethell,

Thank you for your willingness to engage. I’m going to provide my perspective here, given that voters and the public are clearly interested in this topic. Please know that I am sincere, that I have worked on these issues for nearly two decades, and that I am no champion of Big Tech, as my background makes very clear. Indeed, I am opposed to Clause 122 (formerly 111), and similar measures, in large part because I see them as extending the pernicious surveillance power of large tech firms under the guise of providing “accountability.”
Aug 17, 2023 9 tweets 2 min read
📢NEW PAPER!

Where @davidthewid, @sarahbmyers & I unpack what Open Source AI even is.

We find that the terms ‘open’ & ‘open source’ are often more marketing than technical descriptor, and that even the most 'open' systems don't alone democratize AI 1/

papers.ssrn.com/sol3/papers.cf…

To ground a more material view of 'open' AI, we carefully review the resources required to create AI at scale, enumerating what can be 'open' (to scrutiny, reuse, and extension), and what constitutively remains 'closed'. 3/
May 30, 2023 5 tweets 1 min read
If ethical surveillance isn't possible (reader it is not), then neither is ethical AI.

Who's ready for THAT convo?

This got a vibrant and fractious conversation going! I don't have time to reply and unpack, today of all back-to-back days. So I'll say here that I think a lot hinges on the definition of surveillance (which ≠ all information created about our shared reality)...
May 25, 2023 11 tweets 6 min read
📢NEW ARTICLE!!!

In which I connect Charles Babbage & his 19th c. blueprints for digital computation to industrial labor control & the creation of a regime of denigrated, disciplined "free" labor.

All of which has its roots in plantation slavery. 1/

logicmag.io/supa-dupa-skie…

Labor division, worker surveillance & record keeping are techniques that emerged on plantations as ways to extract as much labor from enslaved workers as possible, well before they were deployed in industrial factories. 2/
Apr 16, 2023 7 tweets 2 min read
NYT "AI" explainer misleads. Deep learning techniques date from the 1980s, & "AI" had been hot/cold for decades, not slow until 2012. There was no new "single idea" in 2012. What WAS new, & propelled the AI boom, was concentrated resources (data/compute) controlled by tech companies 1/

The access to massive data (aka surveillance) and compute made old "AI" techniques do new things. And showed that "AI" could profitably expand "what could be done" with the surveillance data already created by the targeted ad companies that dominated the industry. 2/
Jan 20, 2023 5 tweets 2 min read
Early-2000s profitable startups gave their handful of workers novel perks & freedom. These companies & their workplace culture got big. Late-2010s tech labor gained power + made demands. Now a hint of recession = an excuse to break promises & reestablish dominance over workers. It's not about money.

See e.g.
Jan 3, 2023 7 tweets 3 min read
OK! let’s talk about That Op-ed. The one that insisted not only that privacy is dangerous, but that not affirmatively building surveillance into communication tools is a radical ideological position. 1/

web.archive.org/web/2023010119…

Dunking on the arguments is easy. And dunk many have, often with the gentleness of a professor grading a struggling student they don’t want to discourage. I’ll direct you to @evacide, @matthew_d_green, @kurtopsahl, @radleybalko, @bendreyfuss, @Iwillleavenow, @cFidd, @timbray 3/
Dec 23, 2022 5 tweets 1 min read
Thanks for raising this. We did disable some languages recently. In choosing which to keep, we opted for the official language of a given region (in the case of HK, Traditional Chinese). We did this for a couple reasons... 1/

1. The most basic reason: good translation, which is especially important around privacy/security, is expensive and we're a nonprofit. 2/
Nov 16, 2021 10 tweets 4 min read
📢New paper! In which I work through a lot of my uncomfortable observations since joining academia, examining the alarming-but-quiet capture of academic AI research by big tech, what this means for how and what we know about AI, and how we can resist.

interactions.acm.org/archive/view/n…

In it I trace the history of the recent turn to AI, which was less about algorithmic advances and more about the concentrated data and compute resources controlled by a consolidated tech industry, which still gatekeeps these resources.
Sep 2, 2021 5 tweets 1 min read
Jen, thank you and solidarity. You're not alone. For the last years AI Now -- the team, myself and other leadership -- struggled to remove Kate and Jason Schultz from the organization and to recover from the toxic pattern of extraction and harm. I stand with the team.

It cost us significantly, in terms of our individual work and mental health, and in terms of our collective access to and standing within the academic prestige networks where these two hold power.
Nov 10, 2017 10 tweets 2 min read
[Modest Thread Alert] Late, but thrilled to be a coauthor on this short piece outlining ethical priorities for neurotech and AI. An important step: AI models provide the foundation on which technologies like brain-computer interfaces increasingly rely. nature.com/news/four-ethi…

With that, some additional comments, clarifying my personal position since some of my more nuanced edits aren't reflected in the published version...