Holy moly is this some shameful stuff from @HdxAcademy whitewashing Campus Reform's bad faith assaults on faculty speech and pattern of engendering harassment against faculty. This is really beneath an organization that claims to support free inquiry. heterodoxacademy.org/blog/condemnin…
The author even acknowledges Campus Reform's self-stated aim to intimidate faculty that they characterize as "leftist thugs" and yet this is tut-tutted away because really, we gotta make sure we keep an eye on them liberals.
I mean, look at this garbage. This is essentially saying: "Sure, the study showed that appearing in Campus Reform results in threats and harassment, but on the other hand, what if those faculty deserved it?"
Look at the "when did you stop beating your wife" logic of this passage. Amazing. This is ridiculous @HdxAcademy. Aren't you supposed to be a bunch of big-time academics?
Campus Reform is a partisan propaganda outfit which monetizes outrage for clicks. They are not a good-faith interlocutor in the discussion of viewpoint diversity (gag) in academia. That @HdxAcademy publishes an article that fails to acknowledge this clear reality is ridiculous.
I've tried to engage with @HdxAcademy folks in good faith, and there have been many productive individual exchanges, but as an organization, they're garbage: anti-speech and anti-faculty if this is the kind of message they're choosing to put in the world.
It seems clear that @HdxAcademy as an organization is comfortable with intimidation and harassment of faculty, provided that harassment is directed at liberal professors. If they weren't, why publish this article?
Hey, @JonHaidt, chair of @HdxAcademy, can I get a what what on whether or not you think Campus Reform is a legitimate source of campus news and information or a partisan propaganda outfit that monetizes hate and intimidation for clicks? This is their stated mission.
Campus Reform is an organization that engenders death threats against college faculty, and yet @HdxAcademy wants to make sure we don't throw out the baby with the bathwater. You people are absurd. Why do you hate academia so much?
Here's the click farm model Campus Reform operates under @jonhaidt.
Here's the story of a professor who had to flee her home because of the harassment ginned up by Campus Reform. Your organization is laundering the reputation of this outfit, @JonHaidt. You good with that?
The professor who had to flee her home is now self-censoring her public speech because of a fear of harassment. Is this the goal of @HdxAcademy's work on free speech and viewpoint diversity, @JonHaidt?
Hey, look, @jonhaidt is worried about keeping polarization out of the office. Meanwhile, the org whose board he chairs is laundering the work of a partisan propaganda outfit that causes faculty to flee their homes. You not checking in on the home office, my man?
Why do I let myself get worked up over something I'm never going to change? I hate myself sometimes.
• • •
The study cited here is a perfect example of the horrible turn we made in teaching students to write. Whether or not ChatGPT can give comparable feedback to humans is immaterial when the underlying activity is so meaningless. We have to examine this stuff with more depth.
The study is a perfect example of what I call "academic cosplay" activities which resemble learning, but which really involve a performance of "schooling." If the goal is to learn to write, the assignment is meaningless, the feedback (both human and ChatGPT) is meaningless.
Check out the example. This is not a prompt that has any relationship to judging student writing ability. Writing involves acting inside a rhetorical situation. This is using sentences to judge subject comprehension. Perfectly fine, nothing to do with learning to write.
Growing more and more convinced that if you're going to find any use for LLMs as an aid to writing, you need to have a robust and flexible writing practice that works without using AI. Suggesting that students integrate AI into their processes without these conditions in place is flawed.
A huge complicating factor, as articulated by @Marc__Watkins is that these tools now just show up asking to be used without being actively summoned. Students quite sensibly see them as a normalized part of the writing process.
@Marc__Watkins The result, and I think this is true of many if not most of the instructor experiments I've seen in integrating AI, is that the tool is used to complete an assignment without consideration for what is learned and built in doing so. This is not an unusual behavior when students...
I am increasingly distressed at how generative AI is being treated as a presence in student writing. To meet this huge challenge we have to get at the roots of what we want students to do and why. Thread of what I've been thinking & an invitation for other thoughts.
First, I believe that the primary byproduct of ChatGPT is to show us that much of the work students do in school is probably not worth doing if the goal is helping students learn. If ChatGPT can do it, it's not worth doing. insidehighered.com/opinion/blogs/…
Next, I believe generative AI has made it even more important to make students work at their writing. The key is to make sure they're actually writing, not just doing academic cosplay that passes for the purpose of having something to grade. insidehighered.com/opinion/blogs/…
If writing is thinking (as I believe it to be) the last thing you want to outsource to AI is the first draft because that's where the initial gathering of thoughts happens. It's why first drafts are hard. That difficulty signals their importance.
When I see ed tech people say that students can use AI to "get over the hump" of the first draft I know that they understand nothing about learning to write. They've invited students to abandon the most important part of the experience when it comes to learning.
Writing is not just a process by which to produce a text. It is a process through which we think, feel, communicate, and learn. Outsourcing the production of text to something that can do none of those things is really just saying none of that stuff matters. The output is all.
Every time GPT "passes" some kind of test as compared to humans it mostly reveals the test is bad and we should rethink the human activity, rather than outsourcing the bad thing to GPT. That GPT can replicate generic feedback on student writing is unsurprising and not useful.
There was some story where GPT aced an MBA exam without prompting or training, and that should've just alerted everyone to the academic cosplay that is much of MBA studies. That degree was invented in order to extract money from people already in white collar jobs.
Of course GPT can do fill-in-the-blank feedback from a rubric. It's been trained on reams of writing that fits those criteria. But that writing IS NOT GOOD. We have to get beyond students doing writing imitations for school and let them write as writers do.