Media Tip Sheet: Your Bot is Hallucinating


April 14, 2026

WASHINGTON (April 14, 2026) – As more industries adopt AI tools, hallucinations remain a primary concern. From errors in legal documents to mistakes in simple math problems, the accuracy of outputs generated by large language models is not guaranteed.

Neil Johnson, professor of physics at the George Washington University, has researched the cause of these hallucinations. His work indicates that as an AI model's attention gets stretched thin, its outputs can abruptly shift from accurate and helpful to incorrect, misleading, or even harmful. Ultimately, this work could help build more trustworthy systems and help policymakers and the public better understand when and how to rely on AI tools.

If you would like to schedule an interview with Professor Johnson, please contact Claire Sabin at claire.sabin@gwu.edu.

-GW-