My department asked me to give a brief talk about #GPT to help folks learn more about what's going on. Here are my #tips in distilled form.
1. Make #GPT part of the classroom. Students know about it, students are using it, bring it into the discussion. Don't pretend it doesn't exist!
2. Take GPT for a test run on your material. How does GPT do answering your questions? Test it on class discussion questions, tests, and essays. Get comfortable with how it sounds, but also with how your assignments fare in a GPT world.
3. Assess GPT for accuracy with your students. Students are often unaware of the hallucination problem, i.e. that GPT makes up information (including citations). Show them when it goes off the rails and when to trust it (if ever).
4. Use GPT as a tool of critical reflection. Sometimes I ask GPT questions and then we analyze the answers together. It can be productive to look at someone else's writing (even if someone else is an AI).
5. Decide on your AI writing policy. (See mine at the end of this thread.) AI can be used for brainstorming, writing, and editing. Decide how you feel about the use of AI for each of these activities.
5a. What are you trying to assess? When can the technology help students be more thoughtful and creative, and when does it substitute for creativity and thinking?
6. Treat AI writing like any other form of plagiarism. AI-generated text is citable discourse, i.e. students are not allowed to include it in their assignments without proper citation. It doesn't have to be complicated. If you're concerned, then just do more in-class writing.
7. Remember there is a spectrum of opinions out there. Some say we should be training students to use AI to solve problems, and thus they should be learning with AI in all of their assignments, including writing.
8. Others feel that AI pollutes the growth and development of student thinking, and that we need to keep it out of their hands until they've demonstrated proficiency in a given area. We don't have data yet to support either position. For now, keep an open mind in both directions!
9. Modularize your assignments so you can see student work at every stage. This is more work, but breaking writing down into discrete parts lets students engage in more long-term, reflective, and constructive thinking (i.e. the hallmark of writing). Maybe we just need to dedicate more…
10. Share your experiences. Everyone is in the dark on this. We're all affected. Share successful and unsuccessful exercises and share when you've had concerns about the authenticity of student work. What led to it and what did you do?
11. Be sympathetic towards the ambiguity of the situation! Students face a very uncertain intellectual & disciplinary landscape when it comes to #AI (teachers too!). Give them the benefit of the doubt. You are helping them (and yourself) become more comfortable with this technology.
12. Here is the current #AI disclosure agreement I have students include with written assignments. First draft for sure.
Curious if this analogy seems useful or misleading for people. Just emailed a student with this advice and thought I would share more widely. On how to organize writing in "cultural analytics". 🧵
2/ The main thing to focus on is asking yourself what are the primary contributions that you want to make to your chosen area of cultural behavior? If you had to boil everything down to a few salient points, what would they be?
3/ Given this focus, try to run (and explain to your readers) as many tests as you need to get clarity about these issues (where each test either functions as “further confirmation”, “testing a possible confounder” or “adding nuance”).
Funny that the @nytimes covered this today; I had an honest conversation with my own students about #chatGPT today. 🧵
2/ I showed them how it can be used to answer questions they don't know the answer to. For example, I asked them to explain "type-token ratio" which was discussed in an article we were reading. They went silent. So I let #chatGPT do the talking.
3/ It's important to remember that #chatGPT is novel technology for students too. They don't know much about the tech, how to use it, or when to trust it. So my first goal is doing some demos so they can see how they might benefit from it.
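(Aside: "type-token ratio" itself is easy to demo live. It's just the number of unique words, or types, divided by the total number of words, or tokens. A minimal sketch; the regex tokenizer here is a simplification of what a real NLP pipeline would use.)

```python
import re

def type_token_ratio(text: str) -> float:
    """Unique words (types) divided by total words (tokens).

    A rough lexical-diversity measure; note it is sensitive to text length.
    """
    # Simplified tokenizer: lowercase, keep runs of letters and apostrophes.
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

sample = "the cat sat on the mat and the dog sat on the rug"
print(round(type_token_ratio(sample), 3))  # 8 types / 13 tokens -> 0.615
```

Because longer texts repeat words more, TTR comparisons are only fair between texts of similar length.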
in my #AI and #Literature class we will be spending the next 2 weeks diving into AI co-writing tools. My first impression is that the world just cleaved in two. 🧵
2/ People are right to doubt the hype *right now*. The tools are limited in terms of actual quality. This is what we aim to explore in class. How "good" are these things at which tasks?
3/ But it's very clear that they are close enough to suggest that in some not very distant future they will be good enough to integrate into your writing process. Nature is already saying that's *now*. nature.com/articles/d4158…
1) So in addition to being kinda upset by @netflix's The Chair (see previous tweet), I see a straightforward fork in the road for the future of literary studies. +
2) Door #1 says we can choose to keep doing whatever it is we have been doing for the past two decades, during which we have only seen decline. +
3) Since you'd have to be crazy and/or delusional to believe things will get better this way, the only question here is whether the decline curve flattens to a new (much lower) normal or whether it is terminal.
1) here is a summary of a new paper I have out with Sunyam Bagga on measuring bias in literary classification. txtlab.org/2021/02/measur…
2) the goal of the paper was to see how much biased training data might impact the automated classification of texts. We use the prediction of "fiction" as our case study since it is something we are often trying to do!
3) the basic finding (surprising for us) was that only in the most extreme cases did biased training data have an effect on predictive accuracy or the balanced nature of subclasses within "fiction."
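4) The general setup behind experiments like this can be sketched as follows. This is a hypothetical illustration of the bias-injection step only, not the paper's actual code: you construct training sets in which one subgroup is deliberately over- or under-represented, then measure how a classifier trained on each set performs. Names like `female_author` are placeholders.

```python
import random

def biased_sample(docs, subgroup_key, target_share, n, seed=0):
    """Draw n docs so that roughly target_share of them belong to the subgroup.

    Varying target_share (e.g. 0.1 ... 0.9) yields a family of training sets
    with controlled bias, against which classifier accuracy can be compared.
    """
    rng = random.Random(seed)
    in_group = [d for d in docs if d[subgroup_key]]
    out_group = [d for d in docs if not d[subgroup_key]]
    k = round(n * target_share)
    sample = rng.sample(in_group, k) + rng.sample(out_group, n - k)
    rng.shuffle(sample)
    return sample

# Toy corpus: 200 docs, half tagged with a (hypothetical) subgroup attribute.
docs = [{"id": i, "female_author": i % 2 == 0} for i in range(200)]
train = biased_sample(docs, "female_author", target_share=0.9, n=50)
share = sum(d["female_author"] for d in train) / len(train)
print(share)  # 0.9
```

Each biased training set would then be fed to the same classifier, and differences in predictive accuracy across subgroups attributed to the induced bias.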