
SCHEDULE
May 29, 2025
Absolutely Interdisciplinary will take place in person at the Schwartz Reisman Innovation Campus at the University of Toronto.
8:30 AM – 9:00 AM | Registration and breakfast
9:00 AM – 9:30 AM | Opening remarks and director’s address
9:30 AM – 11:00 AM | New frontiers in AI governance
11:00 AM – 11:20 AM | Break
11:20 AM – 12:30 PM | Morning keynote - The slow death of scaling: what it means for policy controlling compute
12:30 PM – 1:30 PM | Lunch
1:30 PM – 3:00 PM | Navigating autonomy and accountability in AI agents
3:00 PM – 3:20 PM | Break
3:20 PM – 4:30 PM | Afternoon keynote - Gradual disempowerment: Systemic existential risks from incremental AI development
4:30 PM – 4:40 PM | Closing remarks
4:40 PM – 5:30 PM | Reception
Venue
Schwartz Reisman Innovation Campus, University of Toronto
108 College St (second floor), Toronto, Ontario, M5G 0C6
Sessions
Morning keynote - The slow death of scaling: what it means for policy controlling compute
May 29, 2025 | 11:20 AM – 12:30 PM
Speaker: Sara Hooker (Virtual)
Moderator: Roger Grosse
This talk will examine how the relationship between compute and performance is changing. Doing so requires engaging with a decades-old debate at the heart of progress in computer science: is bigger always better? Does a certain inflection point of compute change the risk profile of a model? This discussion is timely given the wide adoption of compute thresholds and chip bans to identify riskier systems and prevent misuse. A key conclusion of the talk is that the relationship between compute and risk is highly uncertain and rapidly changing. Relying on access to compute alone overestimates our ability to predict model risk and is likely to have limited success as a policy measure. The talk also prompts a wider reflection on how to ensure that policies are guided by scientific evidence.
New frontiers in AI governance
May 29, 2025 | 9:30 AM – 11:00 AM
Speakers: Atoosa Kasirzadeh, Nitarshan Rajkumar
Moderator: Karina Vold
Governing the most advanced ‘frontier’ AI systems presents distinct challenges that go beyond general AI governance. This session will explore key governance considerations for highly capable AI systems, offering insights into the sociotechnical complexities of AI risk governance and emerging approaches to regulation. Kasirzadeh and Rajkumar will examine the political, social, and technical dimensions of frontier AI governance and discuss trade-offs in different strategies that balance innovation and risk mitigation.
Navigating autonomy and accountability in AI agents
May 29, 2025 | 1:30 PM – 3:00 PM
Speakers: Megan Ma (Virtual), Atrisha Sarkar
Moderator: Anna Su
Autonomous agents are shaping the future of AI development, raising legal and societal questions about accountability, governance, and trust. Who is responsible when an AI agent makes a harmful decision? How should laws evolve to regulate AI autonomy? This session will bring together experts in law, ethics, technology, and policy to explore the challenges of governing AI agents, from liability and bias to power dynamics and regulatory gaps. Using real-world scenarios and examples, Ma and Sarkar will examine how to balance innovation with accountability in an era of increasingly independent AI systems.
Afternoon keynote - Gradual disempowerment: Systemic existential risks from incremental AI development
May 29, 2025 | 3:20 PM – 4:30 PM
Speaker: David Duvenaud
Moderator: Sheila McIlraith
This talk will explore the systemic risks posed by incremental advancements in artificial intelligence, introducing the concept of gradual disempowerment as an alternative to the abrupt takeover scenarios often discussed in AI safety. Drawing on recent research, Duvenaud will examine how even incremental improvements in AI capabilities can erode human influence over critical societal systems, including the economy, culture, and governance.
As AI increasingly supplants human labor and cognition, it may weaken both explicit control mechanisms—such as voting and market choices—and the implicit human alignments embedded in societal structures that historically relied on human participation. Furthermore, if these systems incentivize outcomes misaligned with human values, AI-driven optimization could exacerbate those misalignments. These effects may reinforce one another across domains, as economic power influences cultural narratives and political decision-making, while cultural shifts, in turn, shape economic and political behavior.
This session will explore whether such dynamics could lead to an effectively irreversible decline in human agency over key societal functions, heightening the risk of existential catastrophe through long-term disempowerment. The discussion will highlight the need for both technical research and governance strategies to counteract the incremental erosion of human influence and ensure that AI-driven systems remain aligned with human interests.