For what it's worth, some additional context about a survey that the viewpoint diversity crowd is crapping themselves over this evening. It took me all of 10 minutes to look a little harder at the data and see how, as always, things are more complicated than they first appear.
First, this survey is produced by an NDSU university center that's funded largely by Koch money. There are dozens of these around the country. It's not nefarious, but they have a clear agenda behind their work. ndsu.edu/news/view/deta…
Second, regarding that question about how students would respond to "offensive" speech: it's designed to be as vague as possible. There's no definition of what "offensive" is. It's purely in the eye of the beholder. An intentionally bad question.
If the survey were truly interested in understanding what's behind these attitudes, it would ask a simple open-ended question of those who would report a professor for offensive speech: "Can you give an example of offensive speech that should be reported?"
But the survey doesn't ask that question, so we'll never know what students are thinking when they answer yes or no to reporting offensive speech. Lost opportunity there. Lost on purpose because it wouldn't be useful to the narrative.
There are all kinds of findings, even in this dubious survey, that tell a different story. For example, the vast majority of students say "uncomfortable" material is fine.
An overwhelming majority of students support including readings they disagree with.
Vast majority of students don't think invited speakers should be withdrawn if students disagree with the speaker's views.
Here's a really interesting wrinkle. Students who believe that professors create a climate open to diverse views are MORE likely to say that offensive speech by a fellow student should be reported. Not sure how to interpret this, but it cuts against the snowflake narrative.
Another interesting tidbit. Liberal and Conservative students are nearly identical in believing things have gotten better over the last 50 years. I guess the libs haven't been taught to hate the country.
Liberal and Conservative students are nearly identical in the effect college has on their view of the future.
Major source of student optimism is their professors. Little difference by political affiliation.
Liberal students are more enthusiastic about the role of entrepreneurs than conservative students.
This should have the Kochs quaking in their boots. A full 1/3 of conservative students don't think capitalism can solve major societal problems.
Lastly, the sample is not particularly representative. White students are overrepresented (60%, when it should be about 50%), as are private school students (38%, when it should be about 25%).
It's a shame this survey was written by bought-and-sold hacks, because with a little more curiosity and care, they really could've told us something interesting. As it stands, it's still worth puzzling over, rather than jerking our knees to a single question/answer.
The study cited here is a perfect example of the horrible turn we made in teaching students to write. Whether or not ChatGPT can give comparable feedback to humans is immaterial when the underlying activity is so meaningless. We have to examine this stuff with more depth.
The study is a perfect example of what I call "academic cosplay": activities that resemble learning, but which really involve a performance of "schooling." If the goal is to learn to write, the assignment is meaningless, and the feedback (both human and ChatGPT) is meaningless.
Check out the example. This is not a prompt that has any relationship to judging student writing ability. Writing involves acting inside a rhetorical situation. This is using sentences to judge subject comprehension. Perfectly fine, nothing to do with learning to write.
Growing more and more convinced that if you're going to find any use for LLMs as an aid to writing, you need a robust and flexible writing practice that works without using AI. Suggesting that students integrate AI into their processes without those conditions in place is flawed.
A huge complicating factor, as articulated by @Marc__Watkins is that these tools now just show up asking to be used without being actively summoned. Students quite sensibly see them as a normalized part of the writing process.
@Marc__Watkins The result, and I think this is true of many if not most of the instructor experiments I've seen in integrating AI, is that the tool is used to complete an assignment without consideration for what is learned and built in doing so. This is not unusual behavior when students...
I am increasingly distressed at how generative AI is being treated as a presence in student writing. To meet this huge challenge we have to get at the roots of what we want students to do and why. Thread of what I've been thinking & an invitation for other thoughts.
First, I believe that the primary byproduct of ChatGPT is to show us that much of the work students do in school is probably not worth doing if the goal is helping students learn. If ChatGPT can do it, it's not worth doing. insidehighered.com/opinion/blogs/…
Next, I believe generative AI has made it even more important to make students work at their writing. The key is to make sure they're actually writing, not just doing academic cosplay that passes for the purpose of having something to grade. insidehighered.com/opinion/blogs/…
If writing is thinking (as I believe it to be) the last thing you want to outsource to AI is the first draft because that's where the initial gathering of thoughts happens. It's why first drafts are hard. That difficulty signals their importance.
When I see ed tech people say that students can use AI to "get over the hump" of the first draft I know that they understand nothing about learning to write. They've invited students to abandon the most important part of the experience when it comes to learning.
Writing is not just a process by which to produce a text. It is a process through which we think, feel, communicate, and learn. Outsourcing the production of text to something that can do none of those things is really just saying none of that stuff matters. The output is all.
Every time GPT "passes" some kind of test as compared to humans it mostly reveals the test is bad and we should rethink the human activity, rather than outsourcing the bad thing to GPT. That GPT can replicate generic feedback on student writing is unsurprising and not useful.
There was something where GPT aced an MBA exam without prompting or training, and that should've just alerted everyone to the academic cosplay that is much of MBA studies. That degree was invented in order to extract money from people already in white-collar jobs.
Of course GPT can do fill-in-the-blank feedback from a rubric. It's been trained on reams of writing that fit those criteria. But that writing IS NOT GOOD. We have to get beyond students doing writing imitations for school and let them write as writers do.