Is There a Problem with AI Detection Software?

July 1, 2024

Of course not.

But colleges and universities are struggling to contain AI-enabled cheating, and the easiest solution appears to be deeply flawed. Times Higher Education asks, “Is it Time to Turn Off AI Detectors?” The post shares an excerpt from the new book, “Teaching with AI: A Practical Guide to a New Era of Human Learning” by José Antonio Bowen and C. Edward Watson. The excerpt begins by laying out the problem:

“The University of Pennsylvania’s annual disciplinary report found a seven-fold (!) increase in cases of ‘unfair advantage over fellow students’, which included ‘using ChatGPT or Chegg’. But Quizlet reported that 73 per cent of students (of 1,000 students, aged 14 to 22 in June 2023) said that AI helped them ‘better understand material’. Watch almost any Grammarly ad (ubiquitous on TikTok) and ask first, if you think clicking on ‘get citation’ or ‘paraphrase’ is cheating. Second, do you think students might be confused?”

Probably. Some universities are not exactly clear on what counts as cheating and what is permitted use of AI tools. At the same time, a recent study found 51 percent of students would keep using these tools even if they were banned. The boost to their GPAs is just too tempting. Schools’ urge to fight fire with fire is understandable, but detection tools are far from perfect. We learn:

“AI detectors are already having to revise claims. Turnitin initially claimed a 1 per cent false-positive rate but revised that to 4 per cent later in 2023. That was enough for many institutions, including Vanderbilt, Michigan State and others, to turn off Turnitin’s AI detection software, but not everyone followed their lead. Detectors vary considerably in their accuracy and rate of false positives. One study looked at 14 different detectors and found that five of the 14 were only 50 per cent accurate or worse, but four of them (CheckforAI, Winston AI, GPT-2 Output and Turnitin) missed only one of the 18 AI-written samples. Detectors are not all equal, but the best are better than faculty at identifying AI writing.”

But is that ability worth the false positives? One percent may seem small, but to those students it can mean an end to their careers before they even begin. For institutions that do not want to risk false accusations, the authors suggest several alternatives that seem to make a difference. They advise instructors to discuss the importance of academic integrity at the beginning of the course and again as the semester progresses. Demonstrating how well detection tools work can also have an impact. Literally quizzing students on the school’s AI policies, definitions, and consequences can minimize accidental offenses. Schools could also afford students some wiggle room: allow them to withdraw submissions and take the zero if they have second thoughts. Finally, the authors suggest schools normalize asking for help. If students get stuck, they should feel they can turn to a human instead of AI.

Cynthia Murrell, July 1, 2024
