WASHINGTON (March 24, 2026) – If you’ve ever dreamt of granting an AI assistant the ability to operate your computer on your behalf, opening files, navigating systems, and completing tasks with minimal input, then you’re in luck: Anthropic’s Claude was just given the keys to your computer. Well, if you give it permission, as it has been programmed to ask before performing tasks.
An ongoing concern with agentic AI is the potential for dramatic actions with little warning, including the risk that it could be hijacked by malicious actors who use your personal data and systems in ways you don’t want.
Neil Johnson, a physics professor at the George Washington University, studies large language models and the complex risks and rewards associated with their growing autonomy. He can speak to what this shift toward “agentic” AI means, the potential security implications of giving AI systems access to personal computers, and how individuals and organizations can better protect themselves as these technologies evolve.
To schedule an interview with Professor Johnson, please contact Claire Sabin at claire.sabin@gwu.edu.
-GW-