WASHINGTON (June 5, 2024) -- The Washington Post reported that “A letter signed by current and former OpenAI, Anthropic and Google DeepMind employees asked firms to provide greater transparency and whistleblower protections.” The letter, signed on Tuesday, warns that AI poses serious threats and risks to humanity.
Faculty experts at the George Washington University are available to provide context, commentary and analysis on what to expect in the coming months. If you would like to speak to an expert, please contact the GW Media Relations team at [email protected].
Law
Alicia Solow-Niederman is an associate professor of law at the George Washington University Law School. Solow-Niederman is an expert on the intersection of law and technology. Her research focuses on how to regulate emerging technologies such as AI, with an emphasis on algorithmic accountability, data governance and information privacy. Solow-Niederman is a member of the EPIC Advisory Board and has written and taught on privacy law, government use of AI and related topics.
Spencer Overton is the Patricia Roberts Harris Research Professor of Law at the George Washington University Law School. Overton is an expert on voting rights, the legality and threats of election deepfakes, AI and voting rights, and multiracial democracy. Overton recently testified before the Subcommittee on Cybersecurity, Information Technology, and Government Innovation of the U.S. House Committee on Oversight and Accountability at a hearing on “Advances in Deepfake Technology,” and he frequently comments on threats and advances in the field.
AI/Technology Innovation
Ethan Porter is an associate professor of media and public affairs and of political science at George Washington University. He holds appointments in the School of Media and Public Affairs and the Political Science Department and is the Cluster Lead of the Misinformation/Disinformation Lab at GW's Institute for Data, Democracy and Politics. His research has appeared or is forthcoming in Proceedings of the National Academy of Sciences, Journal of Politics, British Journal of Political Science, Political Behavior, Political Communication and other journals.
Patrick Hall, teaching assistant professor of decision sciences, teaches data ethics, business analytics, and machine learning classes. Prior to joining the GW School of Business, Patrick co-founded BNH.AI, a boutique law firm focused on AI governance and risk management. He led H2O.ai's efforts in responsible AI, resulting in one of the world's first commercial applications for explainability and bias mitigation in machine learning. Hall also conducts research in support of NIST's AI risk management framework and is affiliated with leading fair lending and AI risk management advisory firms. He can discuss topics related to building trustworthy AI, bias in AI systems, and AI regulation efforts, among other AI-related issues.
Politics
Todd Belt is the director of the Political Management Program at the GW Graduate School of Political Management. Belt is an expert on the presidency, campaigns and elections, mass media and politics, public opinion, and political humor. Belt is also co-author of four books and helps run GW’s political poll, which recently released new findings.
Danny Hayes, professor of political science, is an expert on campaigns and elections who can discuss the current election landscape and provide insights and analysis on current campaign strategies.
Mis/Disinformation & Trustworthy AI
David Broniatowski, associate professor of engineering management and systems engineering, is GW’s lead principal investigator at TRAILS, an NSF-funded institute that explores trustworthy AI. In this role, Broniatowski leads one of the institute’s research goals: evaluating how people make sense of the AI systems that are developed, and the degree to which those systems’ reliability, fairness, transparency and accountability lead to appropriate levels of trust. He also conducts research in decision-making under risk, group decision-making, system architecture, and behavioral epidemiology. Broniatowski can discuss a range of topics related to AI’s role in spreading misinformation, as well as efforts to combat misinformation online, including the challenges of tackling misinformation and how messages spread.
Neil Johnson, professor of physics, leads a new initiative in Complexity and Data Science, which combines cross-disciplinary fundamental research with data science to attack complex real-world problems. He is an expert on how misinformation and hate speech spread online and on effective mitigation strategies. Johnson recently published new research on bad-actor AI activity online in 2024. The study predicts that daily bad-actor AI activity will escalate by mid-2024, increasing the threat that it could affect election results.
AI Governance
Susan Ariel Aaronson, research professor of international affairs, is the director of the Digital Trade and Data Governance Hub and co-PI of the NSF Trustworthy AI Institute, TRAILS, at the George Washington University. Her research focuses on AI governance, data governance, competitiveness in data-driven services such as XR and AI, and digital trade. She can discuss legislation and ongoing efforts to regulate artificial intelligence by both the European Union and the U.S. Congress.
-GW-