In testimony before the Senate Judiciary Subcommittee on Privacy, Technology and the Law on Tuesday (May 16), OpenAI CEO Sam Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”
The hearing, titled “Oversight of AI: Rules for Artificial Intelligence,” also featured IBM Vice President and Chief Privacy and Trust Officer Christina Montgomery — who is on the steering committee for the Notre Dame-IBM Technology Ethics Lab — and Gary Marcus, founder of Geometric Intelligence, a machine learning company.
In response to fast-paced and dramatic changes to the AI landscape, the federal government has also recently announced a $140 million investment to create seven new research institutes, and the White House is expected to issue guidance in the next few months on how federal agencies can use AI tools.
In light of these developments, University of Notre Dame experts reflect on the opportunities, concerns and impacts of AI on different fields — including entertainment and media, the arts, politics, the labor market, education and business. Several of these faculty are affiliates of the interdisciplinary Notre Dame Technology Ethics Center, which, among other initiatives, offers the University's undergraduate minor in tech ethics.
John Behrens: First step in addressing AI concerns is education
“Artificial intelligence is a type of software, and the more people treat it that way — rather than as some robotic being — the better off we will be,” said John Behrens, director of technology initiatives for Notre Dame’s College of Arts and Letters. “But we need to support education at all levels to get there. The questions society is facing because of AI are not only ethical but involve all the liberal arts: What are the economic impacts? What are the psychological impacts? What questions does this human-like fluency in language raise for issues of philosophy and theology? Notre Dame has a unique opportunity to bring to bear the full range of the liberal arts to help society tackle these issues.”
Nicholas Berente: Investing in research on AI’s use, impacts and required guardrails is key
“Depending on how you define it, AI has been around for more than a half century,” said Nicholas Berente, professor of information technology, analytics and operations. “What is new — and what has people concerned — is the rather unbelievably rapid pace of recent advancements in AI. As soon as we get used to one set of capabilities, there is a new generation that surpasses them dramatically. The recent wave of generative chat technologies, such as ChatGPT by OpenAI and Bard by Google, has caught on like wildfire. People immediately found uses, for good and for bad, and the power of these generative tools has terrified many and has led to some curious decisions.”
Ahmed Abbasi: AI's major challenge is striking balance between innovation, precaution
“When it comes to technology, we know that regulation and governance often lag behind,” said Ahmed Abbasi, the Joe and Jane Giovanini Professor of IT, Analytics and Operations. “Case in point: the internet, mobile, social media, cryptocurrencies, etc. NIST recently came out with its AI risk management framework. The key components of the framework are to create a culture of governance, and then to map, measure and manage, all with the goal of supporting responsible AI tenets such as fairness, privacy and transparency.”
Christine Becker: AI’s impact on the Hollywood labor market
“While AI concerns alone probably won’t sustain long-term labor strife if producers give significant ground on monetary issues, it is possible that the only force preventing the dramatic disempowerment of creative labor by artificial intelligence within just a few years will be the strength and unity of organized labor this summer,” said Christine Becker, an associate professor of film, television and theater.
Sarah Edmands Martin: Can AI expand our understanding of creativity?
“The recent accessibility of AI raises a number of philosophical, ethical and political questions for the art world: For example, does AI hasten the automation of the creative industries while exploiting vast archives of human labor?” said Sarah Edmands Martin, an assistant professor of art, art history and design. “After all, AI programs like Midjourney and DALL-E use human-generated artworks as training data without compensation, which many people — fairly — fear disempowers or displaces workers in creative industries.”
Lisa Schirch: AI has the potential to unite or to undermine
“AI has the potential both to help humans make better decisions together and to undermine our ability to solve problems,” said Lisa Schirch, the Richard G. Starmann Sr. Endowed Chair at the Kroc Institute for International Peace Studies and professor of the practice in the Keough School of Global Affairs.
Yong Suk Lee: AI can stunt or complement the labor market
“Much of the attention and research has been focused on the development of these new large language models (LLMs), but relatively little research has been done on how we should use LLMs, the societal consequences of these models and potential policy recommendations,” said Yong Suk Lee, an assistant professor in the Keough School of Global Affairs. “For now, to be competitive in the labor market, workers may need to be proficient in using and prompting LLMs and, at the same time, be familiar with their limitations.”
Tim Weninger: How will AI affect public trust?
“I foresee an increasingly skeptical population and, with that, a consolidation of trust in information spaces,” said Tim Weninger, the Frank M. Freimann Associate Professor of Engineering and director of graduate studies in computer science and engineering. “A smaller and smaller set of organizations will have and hold public trust, and it will become increasingly difficult for new organizations to build trust among a consumer base.”
Panos Antsaklis: AI and the unintended consequences of inaccuracies and limitations
"These are serious concerns that media companies and government agencies need to address as soon as possible," said Panos Antsaklis, the H. Clifford and Evelyn A. Brosey Professor in the Department of Electrical Engineering. "Generating inaccurate information may not always be malicious, but it can be generated because of a lack of understanding the limitations of these software programs.”
RELEVANT EXPERTS
Timothy Weninger, Associate Professor of Engineering, University of Notre Dame
Ahmed Abbasi, Professor of IT, Analytics, and Operations, University of Notre Dame
Sarah Edmands Martin, Assistant Professor of Art, Art History, and Design, University of Notre Dame
Panos Antsaklis, Professor of Electrical Engineering, University of Notre Dame
Yong Suk Lee, Assistant Professor of Technology, Economy, and Global Affairs, University of Notre Dame
Lisa Schirch, Richard G. Starmann Sr. Endowed Chair at the Kroc Institute for International Peace Studies and Professor of the Practice, Keough School of Global Affairs, University of Notre Dame