Tim Fawns · Mar 1 · 42 tweets · 8 min read
At @CRADLEdeakin and @TEQSAGov webinar on #ChatGPTHigherEd - how should educators respond?

Firstly, @margaret_bea raises the need to include students in discussions and to look for a diverse range of voices. Yes!
Side note: I’m hearing a lot of people say “ChatGPT is just a tool”. Beware instrumentalism – technologies aren’t neutral things that humans use to their desired ends. They-and-we, in combination, reshape society and the world.
Margaret says machines don't construct quality and we need students to develop evaluative judgement. Complex, contextual and disciplinary judgements.

ChatGPT not good at this... yet
Adapting assessments for now:
- critically review rubrics (more nuanced ideas of quality)
- tasks where students make and defend contextualised judgements
- assess how students change their work based on evaluative judgements
Qs from the panel.
Lucinda: how do we deal with students using ChatGPT to produce their evaluative judgements?

Margaret: focus isn't catching students out but working on rubrics to elicit human, contextualised quality. Trust educators to set parameters of assignments to promote complexity.
From Simon:
Can ask ChatGPT to critique from a conceptual framework (and results are impressive)

Margaret: it can help scaffold evaluative judgement but remains limited and needs students to build on its responses.
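Simon's suggestion is easy to try for yourself. A minimal sketch (the function name and prompt wording are my own, not from the webinar) of composing such a framework-based critique request, which you could then paste into ChatGPT or send via a chat API:

```python
def build_critique_prompt(student_text: str, framework: str) -> str:
    """Compose a prompt asking a chat model to critique a piece of
    writing through the lens of a named conceptual framework."""
    return (
        f"Acting as a reviewer, critique the following text using "
        f"{framework} as your conceptual framework. Identify where the "
        f"text aligns with or departs from that framework, and suggest "
        f"one concrete improvement.\n\n---\n{student_text}"
    )

# Example: ask for a critique grounded in constructive alignment
prompt = build_critique_prompt(
    "Assessment should measure what students can recall.",
    "constructive alignment",
)
print(prompt)
```

In practice you would send `prompt` as a user message to the model; as Margaret notes, the response is scaffolding for evaluative judgement, not a finished judgement in itself.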
Rola: Is this just a matter of good assessment design?

Margaret: Yes and no. Hopefully a shift in focus towards human capabilities. We need to look at where our priorities in assessment might change.
Margaret now discussing "good writing" - we need to monitor how this unfolds, how writing is changing in relation to tech. This will change our notion of writing standards.
Now @LucindaMcKnigh8 on writing and AI.

Students (and everyone) have been using AI writers for years. We are way behind in our thinking about this issue.

ChatGPT is a user-friendly interface on an engine that was already pretty powerful.
Is the essay dead? Will humans just tinker with AI text?

ChatGPT is a distraction, just one form of AI writing that will be (more) intimately woven into our techs so that human/AI writing becomes indistinguishable. And anyway, editing takes great writing skill.
My interpretation of what Lucinda is saying: the idea of citing ChatGPT like any other source doesn’t make sense in this context. We need to build creativity into assessment to make good use of this human/AI integration.
Simon: Should we be asking students to do things ChatGPT can do or not?

Lucinda: both but where can we find the time for it all?

Lucinda says academia needs to align with what’s happening in industry and teach students to use these tools…
and therefore teachers need to be experts in this tech (and in assessment design, adds Rola). Where do we find the time?

Hmm… not sure this is the most useful way to think about this. I think we need to learn how to teach things we ourselves don't know.
By that I mean that we might move away from the model where the teacher is supposed to know the answers but instead is good at supporting students to ask questions and undertake inquiry. What do you think?
Now @sbuckshum on effective ethical engagement with ChatGPT. Wants to move from aspiration to evidence.

Again note: we’ve been working with AI for ages.
Mentions Salomon et al. “partners in cognition”. Partnership needs agency & effort.
Students aren’t pros – they might not be able to make expert use of this tech in learning.

Need to equip students with knowledge, dispositions, integrity to support good use.

Raises the idea of automated feedback literacy - do students know how to engage with automated feedback?
It quickly becomes meaningless to ask students to declare each AI use. Simon is looking at process analytics to better understand this partnership.

Unis need to work quickly on research into generative AI pedagogy.
Simon ends with 3 key things for educators:
- Scaffold variable student capacity to critique ChatGPT output
- Guide students on evidencing critical engagement with ChatGPT
- Fill evidence vacuum with quality research
Rola on relational and affective aspects of feedback. How does AI-generated feedback fit with these ideas? Is it about educators scaffolding this AI-generated information?

Simon: we might be overoptimistic about students’ critical engagement faculties. So it needs scaffolding
Now: can ChatGPT be a springboard for expressing ideas? Can it make things more inclusive?

Answer: potential benefits and risks 😂
Simon: do we still assume that students need to be able to do work without using technologies?

Lucinda mentions the possibility of removing meaningless labour, are we ok with that?

Simon: as tech gets more reliable, the need to be able to do without it is reduced (right, Simon?)
Getting thirsty, just going to get ChatGPT to finish this thread for me...
Warnings from panel that using ChatGPT to summatively assess student work breaches OpenAI terms and conditions and the student’s data rights.
Teacher POV from @sahoward_uow. Education is slow but resilient. We reshape tech into our practices. But ChatGPT has caught our imagination – good interface, good performance. What AI competencies do educators need? We’re trying to figure out what we want to do with these technologies.
Humans might be better at critical thinking but that doesn’t mean we’re doing it. How do we scaffold these human capacities alongside these tools?
(as @sbuckshum points out in the chat, some humans aren't good at critical thinking)
(The chat is also right about Sarah's dog, it really is very cute)
Audience Q: how to help students develop capacities for ethical, sustainable & effective use of AI?
MB: focus on coming to know what's ethical and what quality is irrespective of engagement with AI. Hold the line rather than reorienting to the new possibilities of AI?
Simon talks about some educators' refusal to engage with AI as a legitimate response given pressures and conditions. And there are disciplinary considerations. But we might look beyond worries about misconduct to the potential to “damage learning”.
Q What kinds of assessment are most at risk?

MB Weaker ones – single answer, right / wrong, questions that allow general (not applied / personalised / contextualised) answers.

Sarah: to some extent, things we could have answered with an internet search already
Sarah says: but there are interesting implications for arts too - what new practices might arise from this situation?

Discussion of levels (standards) of assessment rising - but does that further marginalise those not able to make effective use of AI?
Lucinda mentions changes at school in writing (e.g. to include images) and maybe Higher Ed also needs to embrace multimodal assessment more (though that might not solve AI worries as things evolve).
Sarah: Developing new assessments is labour intensive and needs expertise. (Are we back to the resources / workforce / scalability problem that seems to be at the heart of worries about generative AI – yes, Lucinda now talking about casualisation, etc.)
The question is hinted at of whether effective, laborious prompt engineering is equivalent to doing the work of writing. But even if it is equivalent in terms of effort or even knowledge, does it fit with the kind of knowledge we want (and the purpose, as Margaret now says)?
Q. Does AI generated feedback information get in the way of educators knowing students?
Simon: if quality of info is good enough or equivalent, he can’t see the problem.

Is he missing (or isn’t worried about) the relational element of education? What will @r_ajjawi say?? 😂
Here's Rola suggesting there’s more going on than the information. Don’t teachers need to be involved in the process and dialogue?

Simon: No worries, they can have a conversation with the AI. It's about a system not about individual teachers.

Rola not delighted, I'm guessing.🤔
How to design teaching for a world of ubiquitous AI?

Sarah: The future world might not be about AI or any tool. We can’t focus on specific things but adaptive capacities for uncertainty, change and risk.

Lucinda: ethics will be a big part of this
Rola: what are the 3 next steps you’d take as an educator?

MB: thinking about assessing process vs product. Going back to that question “what is it that we need people to know?” For more immediate changes, critically look at your rubrics
Simon: use AI's capacity to reflect on itself to help you and students develop AI literacy. Learn the limits. Practice.

Lucinda: to stay safe, know basic principles, limitations, terms and conditions, copyright implications. Be careful and ethical.
There endeth another ChatGPT webinar that was very good but that also reinforced the complex, multifaceted and ever-changing nature of the challenges of education. 👋
If you read this and thought "I wish that thread was better", this one from @DrJoannaT is for you:
