MMitchell
Interdisciplinary researcher focused on shaping AI towards long-term positive goals. ML & Ethics. Same content in the Sky, Threads, & the Prehistoric Elephant
Oct 8 · 5 tweets · 2 min read
Marie Curie was the first woman to win the Nobel Prize. This didn't happen because she herself fought for it -- although she did fight hard the whole way to even get to that point. 👩‍🔬
What happened was that the award was offered to her husband, Pierre. (1/n) 🧵

Pierre refused to accept unless it were also extended to Marie -- his wife, but also his coworker and fellow scientist. Together, they made fundamental advances.

And so Marie Curie became the first woman to break the Nobel Prize barrier... (2/n)
Apr 15 · 9 tweets · 2 min read
Offering a sort of gift: If you manage someone in Ethical/Responsible AI, please give them license for regular mental health days off. To step away from a headspace that often requires saturating in the darker side of things. I'll explain some of why. 🧵 1/

Having "fucks to give". A job that constantly requires (among other things) empathy is one that can quickly rob you of all the fucks you have to give. And there's a bleed-over effect: Losing fucks to give for *all* of AI and ML.
aworkinglibrary.com/writing/unifie…
Feb 25 · 16 tweets · 4 min read
I really love the active discussion abt the role of ethics in AI, spurred by Google Gemini's text-to-image launch & its relative lack of white representation. As one of the most experienced AI ethics people in the world (>4 years! ha), let me help explain what's going on a bit.

[Image: screen grab showing CNN asking Google Gemini to create an AI-generated image of a pope, and the tool's response. Clare Duffy/CNN via Google Gemini. Source: https://www.cnn.com/2024/02/22/tech/google-gemini-ai-image-generator/index.html]

One of the critical pieces of operationalizing ethics in AI dev is to articulate *foreseeable use* (including misuse): Once the model we're thinking of building is deployed, how will people use it? And how can we design it to be as beneficial as possible in these contexts? 2/
Feb 19 · 29 tweets · 8 min read
OpenAI's Sora is out! Creating video from text. Similar models from other tech companies will likely follow. There are COOL technical things and NOT COOL social things to know about. A super quick 🧵. 1/

COOL TECHNICAL:
Why is this a particularly notable launch? In part bc we see realistic images with *multiple frame coherence*. A few yrs ago, we started being able to produce *single realistic images* from text prompts. Now, we have *hundreds* logically following one another. 2/
Feb 12 · 10 tweets · 4 min read
With the rise of AI-generated "fake" human content--"deepfake" imagery, voice cloning scams & chatbot babble plagiarism--those of us working on social impact @huggingface put together a collection of some of the state-of-the-art technology that can help:
huggingface.co/collections/so…

@huggingface 1. Audio watermarking. This embeds an imperceptible signal that can be used to identify synthetic voices as fake. Work from Guangyu Chen, Yu Wu, Shujie Liu, Tao Liu, Xiaoyong Du, Furu Wei
Demo by @ezi_ozoani github.com/wavmark/wavmark
huggingface.co/spaces/Ezi/Aud…
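For intuition about how audio watermarking works in general, here's a toy spread-spectrum sketch: embed a low-amplitude keyed noise pattern, then detect it later by correlating against that same pattern. This is NOT WavMark's actual method, and the `strength`/`threshold` values are illustrative, exaggerated so the toy detector is reliable:

```python
import numpy as np

def make_key(length, seed=42):
    """Pseudorandom +/-1 pattern; only the key holder can regenerate it."""
    return np.random.default_rng(seed).choice([-1.0, 1.0], size=length)

def embed(audio, key, strength=0.005):
    # Real systems shape the signal perceptually so it stays inaudible;
    # here we simply add a scaled copy of the key.
    return audio + strength * key

def detect(audio, key, threshold=0.0025):
    score = np.dot(audio, key) / len(key)  # correlation against the key
    return score > threshold

rng = np.random.default_rng(0)
speech = rng.normal(scale=0.1, size=16_000)  # stand-in for 1 s of 16 kHz audio
key = make_key(len(speech))
print(detect(embed(speech, key), key))  # True: watermark detected
print(detect(speech, key))              # False: clean audio passes
```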
Jan 4 · 29 tweets · 6 min read
Reflecting on Claudine Gay, I'm reminded that a fundamental of racism--that we should all be aware of--is the disparate application of rules: People from one race* are disproportionately punished for "breaking a rule" that ppl from another are virtually never punished for. 🧵

This has a few parts:
1. Being flagged as breaking a rule where ppl from other races wouldn't be flagged
2. Having the system *determine* that you've broken a rule when the system wouldn't determine that for others
3. Being subjected to more extreme punishment for the rule-break
2/
Nov 5, 2023 · 16 tweets · 4 min read
AI regulation: As someone who has worked for years in both "open" and "closed" AI companies, operationalising ethical AI, I'm dismayed by battle lines being drawn between "open" and "closed". That's not where the battle should be--it's a distraction from what we all agree on. 🧵

Within tech, across the spectrum from fully closed to fully open, everyone generally agrees that people's safety and security must be protected. That can mean everything from stopping identity theft or scamming, to mitigating psychological trauma from abusive bots. (2/n)
May 30, 2023 · 15 tweets · 3 min read
Another good piece from @jjvincent.
2 important points: (1/2ish)
- These systems are being used as search, whether or not it's what OpenAI intended. By recognizing the *use* (both intended & unintended but foreseeable), companies can do much more to situate their products responsibly. *cough cough* model cards. Here's the most recent annotated version from me @huggingface:
huggingface.co/docs/hub/model…
(....more...)
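For anyone unfamiliar with model cards, here's a minimal sketch of the shape of one -- YAML metadata followed by sections documenting intended use and foreseeable misuse. The metadata field names (license, language, tags) follow the Hub's documented card format; "example-model" and all of its details are hypothetical placeholders:

```python
# Model cards live in a repo's README.md on the Hub. Everything about
# "example-model" below is a made-up placeholder for illustration.
CARD = """\
---
license: apache-2.0
language: en
tags:
- text-classification
---

# Model Card: example-model (hypothetical)

## Intended Use
Sentiment classification of English product reviews.

## Out-of-Scope Use / Foreseeable Misuse
Not for medical triage, hiring decisions, or surveillance.

## Limitations and Biases
Trained on review text; performance likely degrades on dialects
underrepresented in the training data.
"""

with open("README.md", "w") as f:
    f.write(CARD)
```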
May 1, 2023 · 16 tweets · 3 min read
Reporting from @CadeMetz on Geoff Hinton's Google departure. A few things stand out to me; 🧵 time (promise it'll be short).
nytimes.com/2023/05/01/tec…

@CadeMetz One of the most personally pressing issues is that this would have been a moment for Dr. Hinton to denormalize the firing of @timnitGebru (not to mention many that have recently followed). To say it was the wrong choice. Especially given his statements supporting ethical work.
Apr 25, 2023 · 20 tweets · 12 min read
@CriticalAI @hackylawyER "Foundation model" was coined by Stanford and shared to coincide with Stanford's launch of CRFM, which Stanford then used to center **itself** as "the" place to work on language models.
The term is political and inappropriate.
The definition itself is also problematic.

@CriticalAI @hackylawyER Circling back to answer the question that I think was actually being asked. =)
That definition is not the definition of "foundation model" that Stanford gave, although their own definition was kinda inconsistent throughout the paper. But one thing that stood out to me was...(1/n)
Apr 17, 2023 · 12 tweets · 5 min read
Okay, @60Minutes is saying that Google's Bard model "spoke in a foreign language it was never trained to know." I looked into what this could mean, and it appears to be a lie. Here's the evidence; curious what others found. 🧵

@60Minutes 1. The exact language being referred to seems to be "Bangladeshi".
dailymail.co.uk/news/article-1…
I can't stomach the 60 Minutes piece; here's the Daily Mail naming the language. (Will check transcript when avail.)
Apr 16, 2023 · 10 tweets · 2 min read
Just read the draft Generative AI guidelines that China dropped last week. If anything like this ends up becoming law, the US argument that we should tiptoe around regulation 'cos China will beat us will officially become hogwash.
Here are some things that stood out. 🧵

First, the "support the state" socialist bent is predictable enough. It suggests a level of censorship we do NOT have in the US.
The rest of the proposal has stuff I like quite a bit.
Mar 21, 2023 · 8 tweets · 3 min read
Had a big groan at G's framing of Bard. One thing that stood out: Google saying that one "collaborates" with Bard, not that one "uses" Bard. Collaboration requires 2+ agents acting with their *own volition*.
@jjvincent hints at this issue, and more. More of my thoughts below. 🧵

@jjvincent Using wording like "collaborate" is a subtle way of adding an extra air of intelligence to the system without saying it outright; it's also a way to obscure the fact that they are releasing this technology and asking people to help improve it without compensation.
Mar 20, 2023 · 6 tweets · 2 min read
"Who saw LLMs coming?" Like, everyone working on LLMs?
If the question is around "Who saw the potential of language models to work?", then me and several others (some paper links in thread 🧵). Erasing the voices of women, one over-confident assertion at a time.
Don't be that guy.

I should clarify that my work has been largely *multimodal*, incorporating language models with vision models. Spoiler alert: Multimodal models will progress even further in the near future.
Dec 20, 2022 · 10 tweets · 2 min read
In order to understand why ChatGPT can't replace Google Search, it's useful to understand the early days of web search and the role that PageRank played. 1/n

Before PageRank, a search would return a slew of websites of mixed utility, quality, and veracity. The results were directly tied to matches between what you queried and the text on the pages. 2/n
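For intuition, here's a minimal power-iteration sketch of the idea behind PageRank (a toy four-page web and the standard damping heuristic, illustrative only, not Google's production system): pages get scored by who links to them, rather than by query-text matches.

```python
import numpy as np

def pagerank(adjacency, damping=0.85, tol=1e-9, max_iter=100):
    """Toy power-iteration PageRank: rank pages by link structure alone."""
    n = adjacency.shape[0]
    out_degree = adjacency.sum(axis=1, keepdims=True).astype(float)
    out_degree[out_degree == 0] = 1.0  # crude fix for dangling pages
    transition = (adjacency / out_degree).T  # [j, i]: prob of hopping i -> j
    rank = np.full(n, 1.0 / n)  # start uniform
    for _ in range(max_iter):
        # Random surfer: teleport with prob (1 - damping), else follow a link.
        new_rank = (1 - damping) / n + damping * transition @ rank
        if np.abs(new_rank - rank).sum() < tol:
            break
        rank = new_rank
    return rank

# Four toy pages; page 3 is linked to by everyone, so it scores highest
# regardless of what text appears on it -- the shift away from pre-PageRank
# keyword matching.
links = np.array([[0, 1, 1, 1],
                  [0, 0, 0, 1],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]])
print(pagerank(links))
```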
Jul 18, 2022 · 29 tweets · 8 min read
Last week, a major AI milestone was hit: the BLOOM model was released, for everyone (including you!) to examine. What is this, and why is this important? 🧵👇
huggingface.co/bigscience/blo…

Recently, many around the world have been introduced to what a "Large Language Model" (LLM) is because of recent news that some people think AI has become sentient. What "AI" has meant in these discussions is based in large part on LLMs.
Jul 16, 2022 · 4 tweets · 2 min read
Nerd news: I am SO STOKED that *all* @huggingface-contributed models on the Hub (huggingface.co/models) have model cards; popular models & those w >10k downloads have high-quality cards, written & combed through manually. Amazing work from @mkgerchick and @_____ozo__ !

Note that many of these are in PR state still. Ahem ahem ahem.
Jun 12, 2022 · 23 tweets · 4 min read
A few more minor things giving me mixed feelings about the article:
1 - Google comms' continued attempt to say they have "ethicists" working deeply on these issues. I agree that Ben Zevenbergen & Johnny Søraker are awesome, but also (apologies) Google's inability to hire women here is multi-faceted, and not unrelated to demeaning practices, particularly towards women in that organization.

Given how they've poisoned Google as a place for tech ethics + women, I don't think ethics-informed women would agree to join (& I'd encourage them not to; can discuss).
Jun 12, 2022 · 9 tweets · 3 min read
A lot of mixed feelings about what's being reported in this great article from @nitashatiku; everything from my appreciation of @cajundiscordian, to my anger at a few ppl in Google leadership, to my position as a relatively advanced researcher in this field.
washingtonpost.com/technology/202…

In 2020, before @timnitGebru and I were fired, we saw some things w large language models that deeply concerned us.
For me, 1 concern was connected to my training in psycholinguistics & how we process language.
We wrote a paper trying to explain; Google fired us. Here's the deal:
May 11, 2022 · 12 tweets · 2 min read
Have had a few conversations recently with Googlers about whether building on the foundations of what I had set up there with ethical/responsible/fair AI is *normalizing* what Google did to me or *carrying the torch* of my vision. 🧵 below for those who this is relevant to.

The issue I feel is that, while it is definitely carrying the torch, it is *also* normalizing. It would be different if Google apologized for what it did, or recognized it had done any wrong. Google did (rightfully) give most of you raises/promotions/more influence/etc.
Jan 18, 2022 · 21 tweets · 4 min read
On this date 1 year ago, under Google's employment, my life was about to change. Tomorrow would be the day that Google publicly put out a statement about me that many have understood to be "smearing". There's a lot to say...(1/n)

First, if it were possible for me to share nitty gritties of everything that happened w/o G combing through to find ways to sue me (as I assume they've continued to do), I would be sharing how f'ing principled I was in dealing with, and speaking up about, discrimination. (2/n)