Wow, this article covers a lot of ground! Seems like a good way for folks interested in "AI ethics" and what that means currently to get a quick overview.
"Ethics in AI is essentially questioning, constantly investigating, and never taking for granted the technologies that are being rapidly imposed upon human life.
That questioning is made all the more urgent because of scale."
"A company making [self-driving] technology, TuSimple, of San Diego, CA, is going public on Nasdaq. In its IPO prospectus, the company says it has 5,700 reservations so far in the four months since it announced availability of its autonomous driving software for the rigs."
I have about 5,700 reservations too! In fact, I could not find the intended reading of 'reservations' in that sentence until I read a bit further and saw 'pre-orders'.
I'd feel a lot more optimistic about the potential safety benefits of autonomous vehicles if they weren't being promoted by the tech bros:
"Determining the positives of autonomous driving is made more complicated by tug of war between industry and regulators. Tesla's Musk has taken to tweeting about the safety of his company's vehicles, often second-guessing official investigations."
"According to some scholars who've spent time poring over data on ethics, a key limiting factor is that there isn't enough quantitative data."
HAI's AI Index authors write: "the field generally lacks benchmarks that can be used to measure or assess the relationship between broader societal discussions about technology development and the development of the technology itself."
It's possible that this quote is out of context, but it sure sounds like "How are we supposed to optimize on ethics if no one will build us a benchmark?" ... talk about everything looking like a nail.
At the end of the article, though: "All the scholarship on data sets and algorithms and bias and the rest points to the fact that the objective function of AI takes shape not on neutral ground but in a societal context." Which is key.
The article also spends a few paragraphs on 😬 AGI:
"At the same time, some have argued that it is precisely the lack of AGI that is one of the main reasons that bias and other ills of conventional AI are so prevalent."
"The Parrot paper by Bender et al. asserts that the issue of ethics ultimately comes back to the shallow quality of machine learning, its tendency to capture the statistical properties of natural language form without any real "understanding.""
This makes it sound like we think if only we had AGI, there wouldn't be any problems. I couldn't disagree more.
I think a better statement would be that one of the sources of risk in LLMs is that they are presented and perceived as engaging in meaningful discourse, when they aren't.
Finally, the article explores a few ways in which "AI" is being applied: in biomedical domains (looking for potential drug/disease combos to test), in mitigating climate change, and elsewhere.
"Tim O'Reilly, the publisher of technical books used by multiple generations of programmers, believes problems such as climate change are too big to be solved without some use of AI."
I think the term "AI" is obfuscating things here. I'd really like a more direct statement of what techniques are being used and how their outputs are being contextualized. Calling this stuff "AI" suggests an autonomy that isn't there.
This contributes to: "Perhaps the greatest ethical issue is one that has received the least treatment from academics and corporations: Most people have no idea what AI really is. The public at large is AI ignorant, if you will."
The article lays some of the blame at the feet of "sycophantic journalism", but I think that a big portion also goes to academics, other researchers, VCs, and entrepreneurs promoting #AIhype.
Among other things, I really appreciate how Timnit is unerasing the contributions of our retracted co-authors and how key those contributions & perspectives were to the Stochastic Parrots paper.
@timnitGebru And so much else: @timnitGebru is absolutely brilliant at drawing connections between the research milieu, research content, geopolitics, and individual, situated lived experience.
@timnitGebru On interdisciplinarity and the hierarchy of knowledge:
“If you have all the money, you don’t have to listen to anybody” —@timnitGebru
On professional societies not giving academic awards to harassers, "problematic faves", or bigots, a thread: /1
Context: I was a grad student at Stanford (in linguistics) in the 1990s, but I was clueless about Ullman. I think I knew of his textbook, but didn't know whether he was still faculty, let alone where. /2
I hadn't heard about his racist web page until this week. /3
Better late than never, I suppose, but as one of the targets of his harassment, I could wish that you hadn't emboldened him and his like in the first place, nor sat by for two months while this went on.
And it's not just about "people who don't want to be contacted," FWIW. He's been spamming all kinds of folks with derogatory remarks about @timnitGebru, me, and others who stand up for us.
@timnitGebru I should have kept a tally of how much time I've spent over the last two months dealing with this crap because it pleased @JeffDean to say in a public post that our paper "didn't meet the bar" for publication.
@timnitGebru He sent it to me this morning too. I just archived it without reading it, because I figured there would be nothing of value there. (I wasn't wrong.) What is it with this guy? As you say, @sibinmohan, why does he think Google needs his defense?
@timnitGebru @sibinmohan And more to the point: @timnitGebru and @mmitchell_ai (and @RealAbril and others), your shedding light on this and continuing to do so brings great value. How can we get to better corporate (& other) practices if the injustices are not widely known?