Media Tip Sheet: U.S. Senate Lawmakers Hold Hearing on AI Regulation

July 25, 2023

U.S. Congress Building

U.S. Senate lawmakers are holding another hearing on artificial intelligence, this time bringing in AI startup Anthropic's CEO Dario Amodei to testify before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. This afternoon's hearing focuses on AI regulation.

GW faculty experts are available to offer insight, analysis and commentary on the ongoing efforts to regulate AI as well as trustworthy and responsible AI. To interview an expert, please contact GW Media Relations Specialist Cate Douglass at [email protected].

GW's Susan Aaronson

Susan Ariel Aaronson, research professor of international affairs, is the director of GW's Digital Trade and Data Governance Hub and co-principal investigator of TRAILS, a newly launched, NSF-funded institute that explores trustworthy AI. Under the TRAILS research initiative, Aaronson is drawing on her expertise in data-driven change and international data governance to lead one of the institute's research arms, which focuses on participatory governance and trust. More broadly, her research covers AI governance, data governance, digital trade and competitiveness in data-driven services such as XR and AI. She is an expert at evaluating AI governance at the national and international levels, whether that involves the governance of the technology itself, risk, business practices or data. She can discuss the latest efforts to regulate AI and the importance of incorporating an array of voices in conversations around AI regulation.

GW's David Broniatowski

David Broniatowski, an associate professor of engineering management and systems engineering, is GW's lead principal investigator for TRAILS. Broniatowski leads the institute's third research arm, which evaluates how people make sense of the AI systems being developed and the degree to which those systems' reliability, fairness, transparency and accountability will lead to appropriate levels of trust. He can discuss trustworthy AI and the ways AI systems can be used to create risk, such as spreading disinformation.