Cannot recommend this dissection of how the media is blowing it again from @JamesFallows enough. It covers a lot of ground, and not only diagnoses the problem, but offers solutions. theatlantic.com/ideas/archive/…
Fallows' analogy to Mueller's approach is spot-on. The press is playing by rules largely of its own invention, rules that Trump and others (like Barr) recognize as phantoms and easily game. If outlets don't respond to this, they will continue to get played.
The specific examples that @JamesFallows uses to critique press tics like both-sides-ism and horse-race-ism should be taught in schools, and not just to journalists. They exemplify the critical thinking all writers should be comfortable doing.
It's just tremendous, careful, and caring work by @JamesFallows, and yet I can't help but think it's not going to make a difference. The institutions are beyond reforming themselves, even as there are occasional glimmers of hope.
Still, this kind of writing is well worth doing for its own sake. It reaffirms the values we should attach to our writing: careful analysis in pursuit of saying things that are supportable and accurate. The living example is important. I have to continue to believe that.
Even if nothing changes now, at least we have the artifact that proves people were sounding the alarm at the time, that hindsight was not necessary to see the error. This is important.
This piece can serve a similar purpose to @JamesFallows' article on Iraq as the "51st state," a questioning of the impulse to invade Iraq published four months before the actual invasion. theatlantic.com/magazine/archi…
All of the subsequent problems we experienced in Iraq are discussed in the article. The information and knowledge existed to act differently and make different and better choices. The article didn't change the outcome, but it still matters. It still matters.
If you can't tell, this whole thread is a self pep talk trying to convince myself to keep pushing on my belief that the only way out for public higher ed is to go tuition-free. I have doubts that the energy to make this happen will gather. beltpublishing.com/collections/pr…
But even if the worst comes to pass for post-secondary education, at least there will be some artifacts (my book and others) showing that there could've been other paths to the future. It wasn't impossible to do the right thing.
One of the things you have to note about the @JamesFallows article is how far back the problem goes and how many people have been speaking out on it. @jayrosen_nyu, @froomkin, @Sulliview, and numerous others.
The problem is that, for the most part, the critics of what Fallows identifies are viewed as apostate journalists by those who hold the levers of editorial power and control. Even those who write for mainstream outlets have their views discounted when it comes to actual operations.
When I see this pattern of behavior, I have to believe the problem is clearly structural. It's not just about putting different people with different attitudes into positions of leadership. We need incentives that align with the mission of delivering "news."
The study cited here is a perfect example of the horrible turn we made in teaching students to write. Whether or not ChatGPT can give comparable feedback to humans is immaterial when the underlying activity is so meaningless. We have to examine this stuff with more depth.
The study is a perfect example of what I call "academic cosplay": activities that resemble learning but really involve a performance of "schooling." If the goal is to learn to write, the assignment is meaningless, and the feedback (both human and ChatGPT) is meaningless.
Check out the example. This is not a prompt that has any relationship to judging student writing ability. Writing involves acting inside a rhetorical situation. This is using sentences to judge subject comprehension, which is perfectly fine but has nothing to do with learning to write.
Growing more and more convinced that if you're going to find any use for LLMs as an aid to writing, you need a robust and flexible writing practice that works without AI. Suggesting that students integrate AI into their processes without these conditions in place is flawed.
A huge complicating factor, as articulated by @Marc__Watkins, is that these tools now just show up asking to be used without being actively summoned. Students quite sensibly see them as a normalized part of the writing process.
@Marc__Watkins The result, and I think this is true of many if not most of the instructor experiments I've seen in integrating AI, is that the tool is used to complete an assignment without consideration for what is learned and built in doing so. This is not an unusual behavior when students...
I am increasingly distressed at how generative AI is being treated as a presence in student writing. To meet this huge challenge we have to get at the roots of what we want students to do and why. Thread of what I've been thinking & an invitation for other thoughts.
First, I believe that the primary byproduct of ChatGPT is to show us that much of the work students do in school is probably not worth doing if the goal is helping students learn. If ChatGPT can do it, it's not worth doing. insidehighered.com/opinion/blogs/…
Next, I believe generative AI has made it even more important to make students work at their writing. The key is to make sure they're actually writing, not just doing academic cosplay that passes for the purpose of having something to grade. insidehighered.com/opinion/blogs/…
If writing is thinking (as I believe it to be) the last thing you want to outsource to AI is the first draft because that's where the initial gathering of thoughts happens. It's why first drafts are hard. That difficulty signals their importance.
When I see ed tech people say that students can use AI to "get over the hump" of the first draft I know that they understand nothing about learning to write. They've invited students to abandon the most important part of the experience when it comes to learning.
Writing is not just a process by which to produce a text. It is a process through which we think, feel, communicate, and learn. Outsourcing the production of text to something that can do none of those things is really just saying none of that stuff matters. The output is all.
Every time GPT "passes" some kind of test as compared to humans it mostly reveals the test is bad and we should rethink the human activity, rather than outsourcing the bad thing to GPT. That GPT can replicate generic feedback on student writing is unsurprising and not useful.
There was something where GPT aced an MBA exam without prompting or training, and that should've just alerted everyone to the academic cosplay that is much of MBA studies. That degree was invented in order to extract money from people already in white-collar jobs.
Of course GPT can do fill-in-the-blank feedback from a rubric. It's been trained on reams of writing that fits those criteria. But that writing IS NOT GOOD. We have to get beyond students doing writing imitations for school and let them write as writers do.