So I followed @GaryMarcus's suggestion and had my undergrad class use ChatGPT for a critical assignment. I had them all generate an essay using a prompt I gave them, and then their job was to "grade" it--look for hallucinated info and critique its analysis. *All 63* essays had
hallucinated information. Fake quotes, fake sources, or real sources misunderstood and mischaracterized. Every single assignment. I was stunned--I figured the rate would be high, but not that high.
The biggest takeaway was that the students all learned it isn't fully reliable. Before doing it, many of them were under the impression it was always right. Their feedback largely focused on how shocked they were that it could mislead them. Probably 50% of them
were unaware it could do this. All of them expressed fears about mental atrophy and the potential for misinformation/fake news. One student worried that the neural pathways they'd formed through critical thinking would degrade or weaken. Another student
opined that AI both knows more than we do and is dumber than we are, since it can't think critically. She wrote, "I’m not worried about AI getting to where we are now. I’m much more worried about the possibility of us reverting to where AI is."
I'm thinking I should write an article on this and pitch it somewhere...