Phenomenally interesting paper about how AI researchers talk about what they value in their research. Very glad the authors took the time to do this laborious but important work. I'm going to keep this in my desk so the next time I go on a rant about how ML is prescriptive [1/?]
rather than descriptive I can whack people who disagree with this paper 😛
I would actually go further than the authors of this paper do (I don't know if they disagree with what I'm about to say, but they didn't say it): I would say that corporate AI research [2/?]
is a propaganda tool that is actively and deliberately wielded to influence policy, regulation, and ethics conversations about technology. The very way mainstream AI research - even "AI Ethics" research - is framed obviates consequences for the companies. [3/?]
"The determination that a model is good enough to deploy is the kind of fact that can and should be judged on moral grounds. Ignoring this facet of the conversation moves criticism away from the people who actually have the power over what is and is not deployed. [4/?]
"Technology companies want to have their cake and eat it too by painting the tools that they develop and sell for profit simultaneously as essential to modern life but also as grave dangers to society depending on which is more beneficial in the moment [5/?]
For example, OpenAI wants you to believe that GPT-3 is too dangerous to let the public use, but not too dangerous to let Microsoft sell to companies for profit."
That's a quote from my piece with @NPCollapse in @mtlaiethics's State of AI Ethics report. [6/?]
Another example is the "Google Gorillas" debacle. Remember when some people started complaining that Google was tagging Black people as gorillas in photos? Google made a whole hubbub about how it would fix the algorithm and do research to prevent this kind of thing. [7/?]
But they lied. They didn't fix the algorithm; they stopped it from labeling any photo as containing gorillas at all. I won't pretend to know what the fundamental problem was, but regardless of what it was, I hope we can all agree that this isn't a fix that addresses the root cause. [8/?]
What both of these propaganda campaigns do is frame the problem as something technological, fixable, and difficult. What they very definitely do not do is question the people who made the underlying decisions. Models are not released in a vacuum: there is always at least one [9/?]
person whose job is to look at a model and say "yes, this is good enough to deploy." Even earlier in the chain, there's someone who approved the allocation of funds to do this research at all. If @OpenAI really thought GPT-3 was as dangerous as they initially claimed, [10/?]
surely the right response would be to not announce the model at all. People knew they were working on the model, sure, but if you pick the moment right it's quite easy to pretend that some incurable data contamination happened during training and it needs to be redone. [11/?]
This is so realistic that it /actually happened/. They had to start again from scratch due to data cleaning bugs. Nobody would have batted an eye if they threw up their hands and said "we think it'll work but we've had to restart twice now and we're kinda done. It's a shame." [12/?]
No, they determined that the model was a danger to the public, to US national security, and to the epistemic commons and then /told the world that fact/. That's exceedingly poorly thought out for someone who purportedly spent a lot of time thinking about this. [13/?]
(Note also that when people didn't buy that position they quietly dropped it).
In Google's case, the responsible thing to do is the same thing you always do when you have unexpected product failures: find where the problem is and fix it, or stop production of the product. [14/?]
If this were a physical device instead of an algorithm, and @GoogleAI had made the deliberate decision to push it back out with an obviously wrong patch and not tell anyone, and then the product hurt someone, Google would be up to their eyes in lawsuits. But here that's fine? [15/?]
@GoogleAI I'm not saying that algorithms and physical goods should be regulated the same way, but these are very basic considerations that I am confident people raised at Google. And still, the decision was made to push it back out and mislead the public about what happened. [16/?]
@GoogleAI That is a decision that Google can and must be judged for. There is no rule that says Google has to deploy every algorithm it develops. Releasing an algorithm is a decision made by humans who are (to some extent) culpable for their decisions and their impacts. [17/?]
@GoogleAI Until we are honest with ourselves about that, we are not having a conversation about the ethical use of technology. We simply are not.
To tie this rant back to @Abebab's paper, check out this plot. It shows the proportion of annotated papers that valued different things. [18/?]
This is a choice the authors make. In these papers, the authors overwhelmingly choose to value generalization, novelty, and simplicity, and they overwhelmingly choose not to value privacy, reproducibility, or respect for the law. [19/?]
And until we discuss these papers using this language, we are not having a conversation about the ethical use of technology. Values don't come out of nowhere. They come from humans who choose to value those things, cultures that encourage humans to value them, and social [20/?]
feedback mechanisms that reinforce those decisions. A now popular refrain is to say "technology isn't neutral." That's true, but the technology isn't the thing we should be judging. It's the humans who designed, funded, and evaluated the technology. They are not neutral. [21/21]
Let's be honest, nobody wants to read 21 consecutive tweets in Twitter's UI.
@jackclarkSF made an interesting related point about Google's TFRC and #EleutherAI:
"Factories are opinions: Right now, it’s as though Google has specific opinions about the products (software) it makes in its factories (datacenters), yet at the same time is providing unrestricted access to its factories (datacenters) to external organizations. It’d be interesting to understand the thinking here – does TFRC become the means by which Google allows open source models to come into existence without needing to state whether it has chosen to ‘release’ these models?"
I don't know. I'm the person being let run amok in the factory that @jackclarkSF is talking about, and I have no idea how Google as an institution - or TFRC for that matter - thinks about what I do with it.
For a short description of why we think we are doing something moral, see the link below. For a longer description, @ me (but not today. Do it when I'm not supposed to be doing homework.)
Great write-up about the crazy cool art #EleutherAI members have been learning to coax out of GANs with CLIP! Credit assignment with stuff like this is hard, but @jbusted1, @RiversHaveWings, @BoneAmputee, and @kialuy are some of the people who have made this happen.
@jbusted1 @RiversHaveWings @BoneAmputee @kialuy They’ve been doing some visionary work with human-guided AI-generated art for the past two months, and it’s phenomenal that they’re starting to get the recognition they deserve. Several more people who either lack Twitters or whose handles I don’t know deserve applause too.