The Mythology of ChatGPT in Higher Ed
There's a lot of moral panic about ChatGPT in higher education (specifically around assessment) at the moment. I was too young to witness a proper real-world example of moral panic, e.g. the satanic panic in America in the 80s and 90s, but I've always had a fascination with the phenomenon. Which is why the reaction to ChatGPT is such an interesting one to observe right now.
Given that ChatGPT and similar tools have become such boogeymen recently, I've found it useful to consider them through the lens of some other myths and folklore.
The Púca
I guess in older times ChatGPT would be a Púca or a Changeling of some sort. It could be anything and anywhere. Always potentially present but utterly undetectable. Any and every assignment a student submits could be one. On the surface a thing in its proper place in the world, but in actuality, a sinister, indistinguishable copy of an original - summoned from the other side of the ether.
I've seen no small number of suggestions on how to manage the creature. Some are reflected in research (Ahsan et al., 2022; Zawacki-Richter et al., 2019; Cotton et al., 2023; Zhai, 2022); more are wailing laments to peers. Suggestions have ranged from assessing students only through written exams in lecture halls (regardless of subject matter, from coding to medicine to mechanics), to oral exams where students are verbally interrogated about what they do or do not know, to one-off, on-the-spot demonstrations. All condense the knowledge and work of full semesters into maybe a half-hour one-off event. Maybe 90 minutes if the student's lucky.
I'm not unsympathetic to the aim of overcoming cheating, and these measures might well do so. But I'm also not sure exactly whom such solutions are really meant to help. If the answer to beating ChatGPT is to revert to a more archaic type of assessment, one that doesn't really align or tie into the real world, I wonder if it's not a fairly Pyrrhic victory? And certainly one that students -- particularly students who now have to manage the on-the-spot stress of a lecture hall or mini-viva -- will pay the price for.
Pandora's Box
I'm also not sure that the drive to remove digital development from any and all assessment mechanisms is the right way of looking at it. While it's not quite Skynet yet, ChatGPT is indicative of a new reality, and there's no point pretending that reality is -- if not here already -- coming up quick.
Hesiod, concluding the story of Pandora's box, warns: "So is there no way to escape the will of Zeus". And even if you could, to what end? We can dodge ChatGPT and similar tools all we want in higher education, but they will not disappear once students finish their education and enter the wider world. These tools, if anything, are lurking outside those university walls, lined up to fundamentally change how we work. By moving assessment and activity backwards, we leave students even less prepared to navigate this world.
I'm not for a moment suggesting that accepting or relying on ChatGPT for creation is the best or default option. If leaving writing and art -- some of our most human ventures -- to the machines is the path we're on, we may as well pack up and head home. But rather than going backwards in how we deal with these tools, wouldn't we serve students better by figuring out together how best to use -- or even better, overcome -- these tools to get them ready for what's beyond the walls of the university?
Students are smart, and they live and think in the real world. I think that helping students learn about the real-world (and not just 'academic') ethical implications -- in particular the exploitation of workers in Kenya -- surrounding these tools could be far more effective at turning students away from ChatGPT than any amount of academic finger-wagging and head-shaking. Donna Lanclos and Lawrie Phipps do just this in their excellent piece "An offering", which I strongly recommend.
He Who Must Not Be Named
Considering the temptation ChatGPT presents to students within the specific confines of those university walls, though, there's a deeper and uglier issue at play, one I rarely see talked about. Much like Voldemort is monikered "He Who Must Not Be Named", it goes unspoken because to name it is to name certain unpleasant truths. And the truth is this: in using tools like ChatGPT, students are just doing what they have been programmed to do since an early age within the enmeshed productivity cult we've created.
Here in Ireland, kids are told from an early age -- implicitly and/or explicitly -- that the main goal of their education is to get the maximum number of points. Forget doing art or music for the Leaving Cert; they don't guarantee points. Pick those subjects where you can remember all the facts and regurgitate them onto a few sheets of paper within a two-hour timeframe. That's what gets you into college. That's what gets you approval. That's what all this is for.
So if points are all that matter, and ChatGPT or contract cheating or essay mills get you the points as efficiently as possible, isn't it smarter and -- to use the hallowed term -- more productive to do just that? Isn't that what we've taught students to do from early on?
If the issue -- which is my main issue -- with these AI tools is that they ultimately create a less human world, doesn't the allure and apparent effectiveness of ChatGPT to students say something about the systems of education we've created? And/or the messages we pass to students about what we value and expect them to do within these systems?
The Great Flood
When I hear talk about how tools like ChatGPT can be used, it strikes me that we're rarely talking about technological stuff. More often than not, we're talking about people stuff. If we take these tools seriously as agents of change in how we work, teach, etc. -- and the panic I see certainly indicates that is so -- we're talking about technological revolution. And you can't -- or at least shouldn't -- separate the technological impact from the impact on people.
Like any technological revolution, people and practices will most likely be displaced and disrupted. Done rightly and considered properly, this could be an opportunity to change the systems we operate in to elevate everyone. To use the technology to reduce manual or ineffective (note: not inefficient) work to free up more time for deeper learning, more collaboration, better things. Done badly and selfishly, we're all suddenly living in an episode of Black Mirror.
Either way, Pandora's box is opened. This is the world we must exist in now. But maybe by asking the right questions and striving to keep the right values, we can at least try to decide what this world will be like for the people who have to live in it.
References
Ahsan, K., Akbar, S., & Kam, B. (2022). Contract cheating in higher education: a systematic literature review and future research agenda. Assessment & Evaluation in Higher Education, 47(4), 523-539.
Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education - where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1-27.
Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT.
Zhai, X. (2022). ChatGPT user experience: Implications for education.