This is a very thoughtful reflection by @zephoria -- and it's striking to be offered so much inside info about the CTL (Crisis Text Line)/Loris debacle -- but it also doesn't fully connect the dots. A few thoughts/questions:

zephoria.org/thoughts/archi…
@zephoria As one thread, boyd steps the reader through how the organization went from handling data for continuity/quality of service to texters (allowing someone to come back to a conversation, handoffs between counselors, connection to local services), to using data for training counselors, to >>
@zephoria using data for internal research, to using data for vetted research by external partners. That last step feels rickety, but it is still motivated by the organization's mission and by the fact that the org didn't have enough internal resources to do all the beneficial research.
@zephoria Another thread seems to be: data for training counselors → counselors benefit from that training in their lives more broadly → can we offer this training to others (for a fee) to make money to sustain the core mission? >>
@zephoria One thing that's not clear in the essay is why any data would need to be shared outside the org for that purpose, even if the training is done by a separate, for-profit entity. How is the data used in that (external) training?
@zephoria But more importantly, the essay doesn't get to the question of how Loris came to be licensed to use the data to train "grammarly for emotion" in customer service contexts:

@zephoria I read this essay with curiosity to see how the inside view would complicate that moment and to see if I could spot where it felt like things went off the rails (with full benefit of hindsight & boyd's own reflection). The former wasn't provided (essay didn't get there), but:
@zephoria Reading along, this is where it felt, to me, like things went off the rails: [Screen cap from the essay under discussion]
This seems to be predicated on a notion of scale (and implicitly, on the idea that masses of data can let us do things on a scale that people working with data individually can't) that echoes all of the things we hear out of Silicon Valley (disruption, etc.) >>
and stands in opposition to boyd's own mantra about "responsible scaling". There's a jump in there from "how do we make this thing we're doing more sustainable and accessible to more people?" to "how do we (use this data to) change society?"
Would I be able to tell if I were making that same kind of jump in real time? I have no idea. I think it should have been obvious to anyone who heard "grammarly for emotion" that something had gone terribly wrong and needed fixing. But boyd's essay doesn't cover that part.
It is very valuable to have @zephoria's insider perspective here, and I think there is a lot to learn from this. My initial primary takeaway is that this underscores the risks of data aggregation:
Aggregated data is more attractive to business interests and is (maybe?) harder for us as individuals to understand on a visceral level as requiring the same fierce protection as its individual components do.
Maybe this is analogous to the saying that one death is a tragedy but a million deaths are a statistic? Anyway, there is definitely a lot to learn here about how to build cultures of care around data that are robust to the siren call of "scale", in all its guises.
