Media Tip Sheet: AI's Dark Side: The Mounting Risks of Reliance on Artificial Intelligence

July 8, 2025

WASHINGTON (July 8, 2025) – As AI adoption accelerates in the workplace, questions remain about which groups of workers are most vulnerable. Some argue that entry-level employees face the greatest risk, since their roles often involve repetitive, easily automated tasks. Others suggest that younger workers might adapt more easily to AI tools and that more experienced professionals could struggle, according to reporting from The New York Times.

Another headline story this week, and an example of AI-driven misinformation, involved an individual who impersonated U.S. Secretary of State Marco Rubio to contact foreign ministers, a governor, and a member of Congress. While the motive is not clear, authorities believe it may have been an attempt to gain access to secure systems or influence official actions.

The George Washington University has experts available to comment on all aspects of these stories. To schedule an interview, please contact Claire Sabin at claire.sabin@gwu.edu or Shannon Mitchell at shannon.mitchell@gwu.edu.

James Bailey, a professor and Hochberg Fellow of Leadership Development at the George Washington University School of Business, is a global expert on leadership and organizational behavior. He has advised Fortune 500 firms and government leaders on how emerging technologies like AI impact workplace dynamics.

Patrick Hall, a teaching assistant professor of decision sciences at the George Washington University School of Business, is a national leader in AI governance and co-founder of BNH.AI, a boutique law firm focused on AI risk. He has helped shape responsible AI frameworks at companies like H2O.ai and through federal initiatives such as NIST’s AI Risk Management Framework.

Neil Johnson, a professor of physics at the George Washington University, has developed a mathematical formula to identify the “Jekyll-and-Hyde tipping point” in AI, i.e., the point at which a system becomes overstretched and starts producing misinformation or harmful content. This model could eventually help keep AI tools trustworthy and prevent such tipping points.

Alicia Solow-Niederman, an associate professor of law at the George Washington University Law School, is an expert in the intersection of law and technology. Her research focuses on how to regulate emerging technologies such as AI, with an emphasis on algorithmic accountability, data governance, and information privacy. Solow-Niederman is a member of the EPIC Advisory Board and has written and taught on privacy law and the government’s use of AI.

-GW-