A thread on the recent report published by @OpenAI: 🧵
Academic researchers often lack the introspection to see how the technology they help create can be abused, starting with the word-list suggestions they make to AI developers, Data Scientists, and Trust & Safety teams at big tech companies in the United States.
Direct attempt to influence policy through recommendation.
This is exactly what plays out at big tech companies.
Academic researchers suggest word lists to monitor for “misinfo.”
Trust and Safety & Data Scientists then follow the recs provided by the academic experts.
Aka: One option for the US is to monitor the term XX.
We find this term to be likely associated w/ misinfo.
Trust & Safety tells Data Scientists to add the new “misinfo” word to the monitoring params.
Data Science team adds word.
No one stops to question whether the word was ever actually misinfo.
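To make the failure mode concrete, here is a minimal sketch of the kind of keyword-list filter this pipeline produces. Every name in it (MISINFO_TERMS, add_term, flag_post) is hypothetical; real systems are far more elaborate, but the shape is the same: a term enters the list and every matching post gets flagged, and there is no step that asks whether the term is actually misinfo.

```python
# Hypothetical sketch of a keyword-list misinfo filter.
# Names and data are invented for illustration only.

MISINFO_TERMS = {"example-term"}  # seeded from academic recommendations

def add_term(term: str) -> None:
    """Trust & Safety request: add a term to the list.
    Note there is no review step validating the term."""
    MISINFO_TERMS.add(term.lower())

def flag_post(text: str) -> bool:
    """Flag any post containing a listed term, regardless of context."""
    words = text.lower().split()
    return any(term in words for term in MISINFO_TERMS)

# Once a term is added, every post using it is flagged,
# whether the usage is misinfo, debate, or debunking.
add_term("XX")
print(flag_post("Researchers debate whether XX claims hold up."))  # True
```

Notice that flag_post matches on the bare word: a post debunking the claim trips the filter exactly as a post spreading it would.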
Their definition of “fairness in ML” often means rewriting history.
If they don’t like the results, they will literally change them to align with their worldview.
It’s hard to explain how dangerous this is for the future of humanity in a tweet.
I don’t think people truly understand what “bias in AI” or “responsible machine learning” means. It does not mean what you think it does. It often means the exact opposite. The field has been completely weaponized by academic institutions and big tech alliances to “combat…
“Step forward Fabula, which has patented what it dubs a “new class” of machine learning algorithms to detect “fake news” — in the emergent field of “Geometric Deep Learning”; where the datasets to be studied are so large and complex that traditional patentimages.storage.googleapis.com/19/f6/c8/1e402…
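The excerpt cuts off before describing how Fabula’s algorithms actually work, and the patent isn’t reproduced here, so what follows is only a loose illustration of the general idea behind propagation-based detection: judging a story by the shape of how it spreads through the social graph rather than by its text. Every class, feature, and threshold below is made up for illustration; this is not Fabula’s patented method.

```python
# Toy illustration of propagation-shape features (NOT Fabula's method).
# All data, names, and cutoffs are invented for illustration.

from dataclasses import dataclass

@dataclass
class Cascade:
    """A share cascade: each edge records (parent_user, child_user)."""
    edges: list[tuple[str, str]]

def depth(cascade: Cascade, root: str) -> int:
    """Longest chain of reshares starting at the original poster."""
    children: dict[str, list[str]] = {}
    for parent, child in cascade.edges:
        children.setdefault(parent, []).append(child)

    def walk(node: str) -> int:
        kids = children.get(node, [])
        return 0 if not kids else 1 + max(walk(k) for k in kids)

    return walk(root)

def breadth(cascade: Cascade) -> int:
    """Number of distinct users the story reached."""
    return len({u for edge in cascade.edges for u in edge})

def looks_suspicious(cascade: Cascade, root: str) -> bool:
    """Toy heuristic: deep, narrow reshare chains spread differently
    than shallow, broad ones. Real systems learn such distinctions
    from labeled data; this cutoff is arbitrary."""
    d, b = depth(cascade, root), breadth(cascade)
    return d >= 3 and b / max(d, 1) < 5

# Example: a chain of reshares A -> B -> C -> D
c = Cascade(edges=[("A", "B"), ("B", "C"), ("C", "D")])
print(depth(c, "A"), breadth(c), looks_suspicious(c, "A"))  # 3 4 True
```

The point of the toy is the same as the thread’s: whoever picks the features and thresholds decides what counts as “fake,” and that judgment is baked into the system long before any post is scored.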