Critical AI read.dukeupress.edu/critical-ai/issue
Critical AI's first issue out: https://read.dukeupress ! Follow us here or @mastodon.social. Our editor often tweets from this acct on weekends.
Jul 4 31 tweets 5 min read
#DESIGNJUSTICEAI invites you to join us for an exciting emerging-scholar panel & a plenary panel w/ labor activist Danile Montaung & author/editor/activist @bigblackjacobin. Join us through this link to our program: criticalai.org/designjustice/…
@bigblackjacobin /2 Zondi: 11 official languages - 81% of children can't read for meaning in any language - including local languages - so they fall behind
May 28 34 tweets 8 min read
Finally shrinking the pile to the point where I can thoroughly read this interesting essay on "illusions of understanding in sci research" due to AI.

Folks like @emollick probably should read it (perhaps he already has). /1 @emollick Here's the opener: most of us have seen all of these purported use cases (AI participants, AI researchers, etc.)
And we've also seen evidence of AI-written papers that suggest that the humans were asleep at the wheel. /2
May 6 13 tweets 4 min read
Thanks to @biblioracle I'm looking at @sama's latest efforts to normalize his favorite product in its favored domain: education. While businesses understandably think 2x before shelling out for unreliable tech that could land them in court, what has higher ed got to lose? /1 @biblioracle @sama /2 Let's not forget that Sam is a college dropout who portrays education as an impediment to risk-taking.
Now he's recasting himself as a consultant to higher ed on the "rules of ethics."
That's right, renowned ethicist Sam is gonna school us on how rules should change.
Mar 17 22 tweets 6 min read
Folks, I'm concerned abt the growing number of sources implying that chatbots & bot/search kluges like Copilot are appropriate for student research.
THIS IS FALSE & DANGEROUS /a Bots CAN'T adduce the sources of info w/in their training data; so the "blurry" modeling of these (unknown) sources is what produces the outputs on which post-hoc searches are conducted /b
Feb 16 5 tweets 1 min read
Yes, @OpenAI's new text-to-video is impressive, but here are 5 questions that journos & the public should be asking--relentlessly.

1. When will you release the datasets used for training this system so we know whose data was captured w/o consent & in potential copyright violation? + @OpenAI 2. What were the conditions of labor for the human annotators who worked on this system? What were they paid? What toxic content were they exposed to at industrial scale and factory pace? +
Nov 20, 2023 16 tweets 4 min read
#CriticalAI posts some reflections on @_KarenHao & @cwarzel's always excellent reporting. Implicit in the context for their #OAI story in the Atlantic is that the charter's definition of "AGI," an always contestable term, is unstable, self-trumpeting, & potentially dangerous /1 @_KarenHao @cwarzel FTR, "AGI" conventionally refers to human-level intelligence; OAI's weirdly narrow economic definition both gestures toward cultish hyper-utilitarianism AND makes it easier to market any useful tool as a step toward "AGI." /2
Jul 24, 2023 20 tweets 5 min read
With all due respect, Professor Mollick, the opening paragraph of your blog post is simply not accurate. Re opening para: There was nothing at all "unexpected" about the arrival of ChatGPT, and we shouldn't forget that the main difference b/w it and its precursors has very much to do w/ human labor.
May 20, 2023 17 tweets 9 min read
I am struck by the seeming disconnect in @theinformation's reportage on the WGA strike. They want to spin it as "visual effects" designers being ready to adapt, writers not. /1 @theinformation Yet their reportage is careful enough to note that writers do not fear AI "replacement." They perceive the use of AI as a wedge to turn the entire category of "writer" into an upskiller of AI dreck for lower pay /2
Oct 17, 2022 37 tweets 16 min read
In an effort to curb misunderstanding and #AIHype on the topic of language models (LMs), we're circulating a tweet thread to offer a baseline understanding of how systems such as OpenAI's GPT-3 work to deliver sequences of human-like text in response to prompts. /1 We're directing this thread especially to our humanist readers who may be only peripherally aware of the increased commercialization of (and hype about) "artificial intelligence" (AI) text generators.
NB: AI is itself a slippery term: we use it w/ caution.
/2