Not Only Those Chasing Tenure Hallucinate, But Some Citations Are Wonky Too

April 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “ChatGPT Hallucinates Fake But Plausible Scientific Citations at a Staggering Rate, Study Finds.” Wow. “Staggering.” The write up asserts:

A recent study has found that scientific citations generated by ChatGPT often do not correspond to real academic work

In addition to creating non-reproducible research projects, those “inventing the future” and “training tomorrow’s research leaders” now appear to find smart software helpful in cooking up “proof” and “evidence” to substantiate “original” research. Note: The quotation marks are for emphasis and were added by the Beyond Search editor.

Good enough, ChatGPT. Is the researcher from Harvard Health?

Research conducted by a Canadian outfit sparked this statement in the article:

…these fabricated citations feature elements such as legitimate researchers’ names and properly formatted digital object identifiers (DOIs), which could easily mislead both students and researchers.

The student who did the research told PsyPost:

“Hallucinated citations are easy to spot because they often contain real authors, journals, proper issue/volume numbers that match up with the date of publication, and DOIs that appear legitimate. However, when you examine hallucinated citations more closely, you will find that they are referring to work that does not exist.”

The researcher added:

“The degree of hallucination surprised me,” MacDonald told PsyPost. “Almost every single citation had hallucinated elements or were just entirely fake, but ChatGPT would offer summaries of this fake research that was convincing and well worded.”
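MacDonald’s point suggests a quick check anyone can run before trusting a bibliography. Below is a minimal sketch (my illustration, not code from the study) that asks the public Crossref REST API whether a DOI is actually registered; Crossref returns a 404 for DOIs that do not exist. The function name and the sample DOI are hypothetical.

```python
# Minimal DOI sanity check against the public Crossref REST API.
# Illustration only; the function name and sample DOI are hypothetical.
import urllib.error
import urllib.request

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record registered under this DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    req = urllib.request.Request(url, headers={"User-Agent": "citation-check/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # Crossref answers 404 for unregistered DOIs
        raise  # rate limits or outages are inconclusive, not proof of fakery

if __name__ == "__main__":
    # A made-up DOI of the sort an AI bibliography might emit.
    print(doi_exists("10.1234/made.up.doi.2024"))
```

One caveat: a DOI that resolves proves only that some paper exists. A hallucinated citation can graft a legitimate DOI onto the wrong title and authors, so the returned metadata still needs a human look.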

My thought is that more work is needed to determine how frequently AI-fabricated citations appear in papers destined for peer review or for personal aggrandizement on services like arXiv.

This finding, coupled with the excitement of a president departing Stanford University and the hoo hah at Harvard related to “ethics,” raises questions about the moral compass universities use to steer their educational battleships. Now we learn that professors are using AI and including made-up or fake data in their work?

What’s the conclusion?

[a] On the beam and making ethical behavior part of the woodwork

[b] Supporting and rewarding crappy work

[c] Ignoring the reality that the institutions have degraded over time

[d] Scrolling TikTok looking for grant tips

If you don’t know, ask You.com or a similar free smart service.

Stephen E Arnold, April 26, 2024
