After a walk, I have rethought my reaction to this paper.
Reading it was what prompted my "maybe I'll give up & become a troll" tweet. But I'm now encouraged by it.
First, the depressing aspect: a whole section of this paper traffics in ancient techbro stereotypes, plus one new one they've invented themselves. These stereotypes are offered snidely and basically unsupported, as if we ALL KNOW AMIRITE? And this is an ACM paper!
So I'm like, this sneering collection of techbro tropes was published by the ACM, which means the citadel has fallen. It's all over. Stick a fork in American tech leadership. But then I thought about their "Ethics Unicorn" archetype, & it hit me: they're eating their own.
They hate the "Ethics Unicorn" because he's actually trying to do the right thing. He's trying to do better, & to do a bunch of ethics stuff at his dev job. But this just makes him a "white savior" type of figure (they don't use that racialized language, but that's the paradigm).
This is friendly fire. They now hate the person who is trying to do the thing they've been complaining that engineers won't do, i.e. be ethically aware, & incorporate that into their process.
So now I think that since this is a type of "eating their own" behavior, it's terminal: the whole thing is closer to imploding in a YA-literature-style circular firing squad of one-upmanship, recriminations, & ruined careers. Here's hoping.
Actually, on re-checking it I see that this is just a FAccT conference paper, so not fully peer reviewed & pretty much par for the course for FAccT. So maybe the citadel has not quite fallen, yet.
So I read this paper, & there's an entire subsection dedicated to a supposed new (problematic) type of figure: the Ethics Unicorn. But no example of such a person, or even of such thinking, is given. I have never encountered one. It seems like folklore.
I mean, maybe I am not in trendy enough tech circles? Maybe some of the engineers at the NYT who have a lot of opinions about what the edit staff should & shouldn't be publishing would fall into this category?
At any rate, the lone citation there is to an explainer on the "full-stack unicorn developer." This Ethics Unicorn character is left to the imagination, I guess.
I think nobody really realizes that this particular fight is coming, not even VCs. At some point, we will all fight on here over which party gets to be the editor whose values & linguistic quirks are reflected in the language the machines use to talk at us.
Right now, the machines are just parroting whatever giant, unruly dataset they've been fed. But soon, they will be side-loaded with a small sample of additional context (e.g. a style/usage reference), so that they can tweak their output with reference to that context.
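Mechanically, the "side-loading" described above amounts to little more than prompt composition: prepend a reference document to the model's context so its output is steered by it. A minimal, hypothetical sketch (the style-guide rules and function names here are invented for illustration, not any real product's API):

```python
# Hypothetical sketch of "side-loading" a style/usage reference:
# the guide is prepended to the prompt, so the model's output is
# steered by whoever wrote the guide. The rules below are invented
# examples, not from any real style guide.

STYLE_GUIDE = (
    "Style guide:\n"
    "- Prefer 'people experiencing homelessness' over 'the homeless'.\n"
    "- Avoid exclamation points.\n"
)

def build_prompt(style_guide: str, user_query: str) -> str:
    """Compose the context the model sees: side-loaded guide, then query."""
    return (
        f"{style_guide}\n"
        "Follow the style guide above when answering.\n\n"
        f"Question: {user_query}"
    )

prompt = build_prompt(STYLE_GUIDE, "Describe the new city shelter program.")
# `prompt` would then be sent to whatever LLM is in use; the fight
# described in this thread is over who gets to write STYLE_GUIDE.
```

The point of the sketch: the model itself never changes; the contested artifact is the small, human-authored text injected ahead of every query.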
When that day comes, you will never, ever hear another word from the AI ethics folks about the supposed dangers of large language models (LLMs). They will pivot immediately to the fight to write the style guide that's side-loaded into the LLMs to steer the output.
People are confused (& misled) re: this tweet, so in service of procrastination I'll break it down on here.
Models need to be really really large (right now) to capture enough of language that their output sounds authentic. Hence large language models (LLMs).
"Large" means 2 things: a large number of parameters & a large dataset. To get the latter, a large dataset, you have to do a giant crawl of some massive corpus of text, probably on the web.
So you're going to soak up a TON of text, & because the net is so wide that dataset...
...will naturally reflect the status quo use of language. Almost definitionally "status quo" as the size approaches infinity. Well, the status quo is "problematic," right? So people want the ability to sanitize that output so that it is not problematic. They want to steer it.
I'm listening to a guy explain an AI paper & I just learned a new German phrase: "they want the egg-laying wool milk pig," which means roughly the same as "they want every child to have a pony," or some such.
(Obviously "egg-laying wool milk pig" is one word in the original German.)
I wish we had this in English, but in a shorter version, like "the Omni-pig."
"Yeah, these guys want the Omni-pig. It does milk, wool, eggs, pork, all for free."
Reading this now, but the thing that jumps out at me is the aesthetics of CEOs. No matter how awkward & nerdy you looked as a lower-ranking geek, when you ascend to CEO they dip you in a vat, then hoist you out, sandblast you, & air-dry you. It's wild.
The main exception here is Zuck, who still looks mostly like an awkward, greasy undergrad. He has somehow avoided the CEO vat of rejuvenation and chiseling.
A thing that puzzles me: people who spend a lot of time obsessing over power, but who seem unwilling to acknowledge that there are different kinds of it.
Me: X has power
You: LIES! X DOES NOT CONTROL THE ALABAMA STATE LEGISLATURE!
One of the great things about the old populist tradition was that it had language for a "financial power" or a "money trust," which was distinct from purely political power. There's also cultural power that rests in centers of academia & media.
So I find myself in these conversations re: power w/ people who seem to really, truly believe the "marginalization" language literally, as in there is only one page, & the center of that page == "power" while the margins == "not power".