AI and the purpose of education
I wrote about AI in academic writing the other day, cautioning academic writers against using generative AI in work submitted for assessment or publication, for reasons of academic integrity. I am seeing a lot of students and early career academics using AI in their assignments, which is understandable (particularly for NESB students, students with disabilities, and students with various literacy needs). But the uses of generative AI in academic writing that I’ve seen raise some really big questions for me about the way we do education – questions that go beyond plagiarism cases and touch on what we think education is for.
A lot of people think about education as a means to an end: if I get X qualification, I can get Y job. Or if I do this degree, my parents will be proud. If I finish my PhD, I get to call myself doctor. These aren’t terrible reasons to study, but they lack any sense that learning has a purpose beyond what it can get you when it’s over. Viewing education mainly as a route to qualifications often leads to a sense that learning is done, finished. (When, in fact, every career requires that we keep learning – on the job, and through further study to update our skills and knowledge.)
If the purpose of education is to get through your course and walk away with your qualification, the process of actually learning recedes into the background. So using generative AI to get work done makes sense – it gets you through the course quicker and with less effort. This is the root cause of my discomfort with generative AI in academic writing: I cannot bring myself to believe that ‘shortcuts’ are good in this context, because I believe learning should involve messy, sometimes confusing, often challenging, always transformative growth.
My other biggest problem with AI-produced work is that I don’t actually care what AI has to say about the topic of your assignment/thesis/etc. AI can – by its (current) nature – only reproduce/rehash/restructure stuff that has already been said. It does this in a quite soulless and predictable way, and it’s my opinion that human students can always write better than AI because human students can be human. If the thing we’re measuring in assessment is a student’s capacity to submit formulaic sentences with no new ideas and no genuine insight, then our approach to assessment is rubbish. And pointless.
Ultimately, I think we need to spend more time thinking about learning as a goal in and of itself and less time emphasising what it can get you when it’s done. If we fail to do that, then we’re acting as though the purpose of education is to mould students into carbon copies of an ideal that really only ever upholds the status quo. And if there were ever a time when we needed to be challenging the status quo, it’s now.