AI-Powered AI Plagiarism Detectors Are Now Hallucinating AI Where There Isn’t Any
Yo dawg, we heard you like AI, so we put AI on your AI until your economy collapsed.
Published 1 month ago in Wtf
If you managed to leave school before the advent of ChatGPT (and thus still know how to read and write), you might not realize just how much the service has disrupted education. Basically everyone is using it: students use it to write papers, professors use it to grade them, and even the emails between students and professors are mediated through an LLM.
Given this, some of the more honest professors are trying to keep their students doing actual work by testing all of their submissions for AI use. The problem? These AI-detectors are also powered by AI — and now, that AI is beginning to break.
Users of AI detection tools are reporting that, whether due to quirks in the submissions or to shifts in the detection tool's own training data, the tools have begun to hallucinate AI use where there isn't any.
While false positives were always a problem, things have gotten worse. In the early days, these detectors would just give you an estimated likelihood that a paper was generated using AI. Now they're using their AI to "verify" sources, and in the process either inventing new sources or claiming that real ones don't exist.
In short, prompting one of these bots to check your paper for plagiarism sends it into a spiral of hallucination that makes it near-wholly ineffective. Great!
So, if you're a teacher looking to prevent AI-assisted plagiarism, the unfortunate best course of action is to have your students do everything with pen and paper.