Generative AI in academic writing: Is it worth the risk?
There’s one question that more and more students and academics have been asking me in my work as an editor and academic writing coach: is it okay to use generative AI to help me write my assignments and papers?
I haven’t quite made up my mind on whether there’s a place for generative AI in academic research. I think it’s possible there are ethical uses for certain features of ChatGPT and its ilk, particularly to support scholars with disabilities and scholars writing in a second language. But I am always quick to caution my clients and students against using AI to write, interpret, reflect, or do any of the analytical or creative activities that are our contribution to knowledge when we write up our research.
My reasons for this relate to academic integrity, by which I mean both the ethical stance of being a scholar with integrity and the need to avoid doing the things that will land you an accusation of academic misconduct (which are interrelated, but not exactly the same!).
If you outsource your writing or your thinking to ChatGPT or QuillBot or similar, and you submit or publish those ideas without acknowledging and explaining your use of generative AI platforms (even if you paraphrase), you are representing work you did not create as your own. This is academic misconduct, and the consequences range from the retraction of a published paper to reputational and career damage for academics, and from a fail grade on an assignment to having your degree revoked (rare, but not impossible) for students.
So if you’re a student and you submit work containing AI-written text for an assignment or thesis (including your PhD), and down the track it becomes known that you did this (whether through better detection platforms or some other revelation), you may be risking your degree. I don’t know how likely it is that the use of generative AI will be discovered, or how likely it is that your university will revoke your qualification. No one can truly answer those questions yet, but these are the possibilities we’re all working through.
For early career academics, and academics in general, the risks are significant and the likely consequences more public. Many people already struggle to trust researchers when they communicate their findings, and I think we all recognise how vulnerable we become to misinformation when we don’t know whose commentary we can trust. Misrepresentations in our work, including passing off the words of ChatGPT as our own, further erode that trust. So there are societal risks that, for me, are considerably more important than the personal risks, which are nonetheless serious and potentially career-destroying.
Even if you do acknowledge and explain your use of AI, these risks remain. No one I’ve spoken to has really figured out where the line between acceptable and unacceptable uses of generative AI lies yet, so it’s impossible to know whether you’re going to end up on the right side of that line as we develop our sense of ethics in AI. There are also many scholars – particularly the senior academics who hold powerful positions and can make or break ECR careers – who are suspicious of AI. If they read your declaration that you’ve used it, even just for ‘revising’ text, you’ll likely go down in their estimation.
I do have a lot of thoughts on AI that are not quite so doom and gloom, and maybe I’ll write about those another day. But generally, when my clients and the students I work with ask me if they should use generative AI in their papers, I respond with two questions: 1) do you fully understand the risks you’re taking? and 2) is it worth it?