In my #AI and #Literature class we will spend the next two weeks diving into AI co-writing tools. My first impression is that the world just cleaved in two. 🧵
2/ People are right to doubt the hype *right now*. The tools are limited in terms of actual quality. This is what we aim to explore in class. How "good" are these things at which tasks?
3/ But it's clear they are close enough that, in some not-very-distant future, they will be good enough to integrate into your writing process. Nature is already saying that's *now*. nature.com/articles/d4158…
4/ This has huge implications for creativity, scientific writing, intellectual property and, yes, student assessment. What kind of value systems will emerge given the idea that "writing" no longer means "human only"?
5/ The potential for bad actors is massive: the automated generation of human-like text in service of malevolent goals will make our current internet look quaint, which is of course terrifying.
6/ Will it have an inequality effect? Only those with training / knowledge of AI tools can utilize them effectively. Notice it starts at "Nature." Or will it democratize because you don't need the same educational background to get the AI to do the work for you?
7/ Yes, I have so many questions around AI-generated text. One thing I am certain of is that it will have transformational effects. As instructors and researchers of text, we need to get working on this!
• • •
1) So in addition to being kinda upset by @netflix's The Chair (see previous tweet), I see a straightforward fork in the road for the future of literary studies. +
2) Door #1 says we keep doing whatever we have been doing for the past two decades, a period in which we have only seen decline. +
3) Since you'd have to be crazy and/or delusional to believe things will get better this way, the only question here is whether the decline curve flattens to a new (much lower) normal or whether it is terminal.
• • •
1) Here is a summary of a new paper I have out with Sunyam Bagga on measuring bias in literary classification. txtlab.org/2021/02/measur…
2) The goal of the paper was to see how much biased training data might affect the automated classification of texts. We use the prediction of "fiction" as our case study since it is something we are often trying to do!
3) The basic finding (surprising to us) was that only in the most extreme cases did biased training data affect predictive accuracy or the balance of subclasses within "fiction."
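To make the experimental logic concrete, here is a toy sketch of the kind of test the paper runs: train a classifier on increasingly imbalanced ("biased") training sets and score it on a balanced held-out set. This is NOT the paper's actual setup — the features, the nearest-centroid classifier, and all numbers below are hypothetical stand-ins for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n_fiction, n_nonfiction):
    # Hypothetical 2-D "text features"; the real paper uses actual
    # lexical features extracted from literary texts.
    X_f = rng.normal(loc=+1.0, scale=1.0, size=(n_fiction, 2))
    X_n = rng.normal(loc=-1.0, scale=1.0, size=(n_nonfiction, 2))
    X = np.vstack([X_f, X_n])
    y = np.array([1] * n_fiction + [0] * n_nonfiction)  # 1 = fiction
    return X, y

def centroid_classifier(X_train, y_train):
    # Predict whichever class centroid is nearer (a deliberately
    # simple stand-in for a real classifier).
    c1 = X_train[y_train == 1].mean(axis=0)
    c0 = X_train[y_train == 0].mean(axis=0)
    def predict(X):
        d1 = ((X - c1) ** 2).sum(axis=1)
        d0 = ((X - c0) ** 2).sum(axis=1)
        return (d1 < d0).astype(int)
    return predict

# Balanced held-out test set.
X_test, y_test = make_data(500, 500)

# Vary how under-represented "fiction" is in the training data.
for n_fiction in (500, 250, 50, 5):
    X_tr, y_tr = make_data(n_fiction, 500)
    predict = centroid_classifier(X_tr, y_tr)
    acc = (predict(X_test) == y_test).mean()
    print(f"fiction/nonfiction = {n_fiction}/500 -> accuracy {acc:.3f}")
```

In a toy setup like this, accuracy tends to hold up until the imbalance becomes extreme — which mirrors the shape of the paper's finding, though of course the real result comes from actual literary corpora, not simulated points.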