So, my point here was that "building public trust" only makes sense as a research goal in circumstances where such public trust is in fact a public good.
For example, public trust in effective vaccines (like the ones we have for COVID-19 and many other diseases) is important, because widespread vaccination brings about public goods (lower burden of disease, less pressure on the healthcare system).
>>
Likewise, public trust in democratic systems (to the extent that they are truly democratic) brings about public goods, because broad voter participation makes for better governance.
>>
But when we're talking about automated systems like large language models (and other #PSEUDOSCI aka #SALAMI), the key question isn't trust, but trustworthiness.
>>
If someone tells you the point of their research is to build trust in AI/SALAMI/PSEUDOSCI, they're selling you something ... or more likely, they're selling you (your data, your trust) to someone else, and it's not about public goods.
(For anyone reading this thread who's curious about why SALAMI and PSEUDOSCI, see this tweet:
This is a very thoughtful reflection by @zephoria --- and it's striking to be offered so much inside info about the CTL/Loris debacle --- but it also doesn't fully connect the dots. A few thoughts/questions:
@zephoria boyd steps the reader through how the organization went from handling data for continuity/quality of service to texters (allowing someone to come back to a conversation, handoffs between counselors, connection to local services) to using data for training counselors to >>
@zephoria using data for internal research to using data for vetted research by external partners, as one thread. That last step feels rickety, but it's still motivated by the organization's mission and the fact that the org didn't have enough internal resources to do all the beneficial research.
💯 this! Overfunding is bad for the overfunded fields, bad for researchers in those fields, and bad for the fields left to starve, and it's bad for society as a result of all of that.
“I’ve been frustrated for a long time about the incentive structures that we have in place and how none of them seem to be appropriate for the kind of work I want to do,” -- @timnitGebru on the founding of @DAIRInstitute
@timnitGebru @DAIRInstitute “how to make a large corporation the most amount of money possible and how do we kill more people more efficiently,” Gebru said. “Those are […] goals under which we’ve organized all of the funding for AI research. So can we actually have an alternative?” bloomberg.com/news/articles/…
“AI needs to be brought back down to earth,” said Gebru, founder of DAIR. “It has been elevated to a superhuman level that leads us to believe it is both inevitable and beyond our control. >>
A few thoughts on citational practice and scams in the #ethicalAI space, inspired by something we discovered during my #ethNLP class today:
>>
Today's topic was "language variation and emergent bias", i.e. what happens when the training data isn't representative of the language varieties the system will be used with.
Week by week, we've been setting our reading questions/discussion points for the following week as we go, so that's where the questions listed for this week come from.
"Bender notes that Microsoft’s introduction of GPT-3 fails to meet the company’s own AI ethics guidelines, which include a principle of transparency" from @jjvincent on the @verge:
@jjvincent @verge The principles are well researched and sensible, and working with their customers to ensure compliance is a laudable goal. However, it is not clear to me how GPT-3 can be used in accordance with them.