‘I massively regret using AI to cheat at uni’

Image caption: Hannah used an AI tool in her first year at university after being ill with Covid (Ben Moore/BBC)

Ben Moore

BBC South East Investigations Team

A student is warning others about the potential consequences of using generative artificial intelligence (AI) to cheat at university.

Hannah, not her real name, has spoken to BBC South East about her “massive regret” at using AI tools to help her write an essay when she was ill with Covid.

“We had two deadlines really close together and I just ran out of steam,” she said.

Hannah faced an academic misconduct panel, which has the power to expel students found to be cheating. Her case highlights the challenge universities face as they encourage students to become AI literate while discouraging cheating.

Hannah said: “I felt incredibly stressed and just under enormous pressure to do well. I was really struggling and my brain had completely given up.”

Her misuse of AI was discovered when her lecturer routinely scanned her essay using detection software.

“My stomach was in knots and I was sitting outside the office. I was like ‘this was really stupid’.”

Hannah was cleared as a panel ruled there wasn’t enough evidence against her, despite her having admitted using AI.

Hannah says she thinks it was a slap on the wrist designed to serve as a warning to other students.

“I could have been kicked out,” she said.

What is AI?

Generative AI is technology that can produce new content, such as text or images, in response to a prompt.

It does this by learning patterns from vast amounts of existing data and predicting the most likely response to what it is asked.

Universities have been trying to understand what AI applications are capable of and introduce guidance on how they can be used.

Media caption: The BBC’s Ben Moore writes an essay using AI, but will he be found out?

Benefits of AI

Dr Sarah Lieberman, reader in politics and international relations at Canterbury Christ Church University, said: “I have noticed it creeping in, not necessarily in terms of whole essays, but as chunks of essays quite often.

“It doesn’t necessarily match the rest of the text. If someone has used it for a whole essay it won’t be well stuck together, it’s not been written as one piece,” she said.

“To a lecturer who is used to student work, it’s like hearing the voice from an Alexa, rather than the voice of your husband or children in the kitchen. We can spot that robot voice.

“They don’t write good essays, they are not critical thinkers.”

Dr Lieberman says there are circumstances where students can benefit from using AI tools.

“If we can teach them how to use it – maybe to pose initial questions to get lists of literature that they can then go and look at – then it’s a really worthwhile thing to have.”

Image caption: Lecturer Sarah Lieberman says she has spotted the signs of AI use in some essays (Ben Moore/BBC)

Some universities ban the use of AI unless specifically authorised, while others allow AI to be used to identify errors in grammar or vocabulary, or permit generative AI content within assessments as long as it is fully cited and referenced.

At a bar on the outskirts of Canterbury, students say they know the limits and only use AI as an aid, much as they might a search engine.

Taylor says: “You’ve got to embrace it. You can ask it questions and it helps you out.

“You can use it to create a guide to structure your work. It’s good for exam prep too.”

Myah says: “I’ve never been one to use it, I’m not too keen. I’d rather just do the work myself, then I can be like ‘I’ve done it’, but I know a lot of people do use it.”

Zyren said she fell out with a friend who used it extensively: “They openly admitted to me they use AI, full on copied and pasted an essay they got from ChatGPT. A part of me felt annoyed as it hit me that they might get a higher score than me.”

Image caption: Tommy Hills says students still need to fact check when they use generative AI tools (Ben Moore/BBC)

Tommy Hills, a teacher and freelance computer science lecturer, says AI is still in its infancy.

“There is something that we call ‘hallucinations’, and it’s the idea AI just makes something up,” he said.

“It’s important that when we are using this technology, we are using it in the same way we would use any other academic source, such as the internet or books: fact check, don’t entirely trust it.”

University exam answers generated by AI could be difficult to spot even for experienced markers, a study has found.

The University of Reading research saw AI-generated answers submitted to examiners on behalf of 33 fake students.

Results showed the AI-generated answers were “very difficult” to detect.

Universities UK, an organisation of vice-chancellors and principals of universities, said: “Universities are aware of the potential risks posed by AI tools in the context of exams and assessment.

“[They] all have codes of conduct that include severe penalties for students found to be submitting work that is not their own, engaging with students from day one on the implications of cheating and how it can be avoided.”

The Quality Assurance Agency, which reviews standards at UK universities, says it is a balancing act between maintaining academic integrity and equipping students with AI skills they can use in the workplace.

A Department for Education spokesperson said: “Generative AI has great potential to transform the Higher Education sector and provides exciting opportunities for growth. However, integrating it into teaching, learning, and assessment will require careful consideration.

“Universities must determine how to harness the benefits and mitigate the risks to prepare students for the jobs of the future.”
