
SPEAKERS
Explore our line-up of 2025 speakers.
David Duvenaud
Associate Professor, Department of Computer Science, Canada CIFAR AI Chair & Founding Member, Vector Institute; Schwartz Reisman Chair in Technology and Society
David Duvenaud is a leading researcher in AI safety and an associate professor in the Department of Computer Science at the University of Toronto. His research has had a major impact on the field of probabilistic deep learning; he now focuses on artificial general intelligence governance and the evaluation of dangerous capabilities. Duvenaud is serving a five-year appointment as Schwartz Reisman Chair in Technology and Society until 2029, and currently works on the alignment science team at Anthropic.
➦ Afternoon Keynote: Gradual disempowerment: Systemic existential risks from incremental AI development
Moderator: Sheila McIlraith
Roger Grosse
Associate Professor, Department of Computer Science, Canada CIFAR AI Chair & Founding Member, Vector Institute; Schwartz Reisman Chair in Technology and Society
Roger Grosse is a renowned expert in AI safety and an associate professor in the Department of Computer Science at the University of Toronto. His research applies insights from deep learning to the safety and alignment of AI systems, aiming to understand and mitigate the risks they pose and to determine how they can be safely and ethically integrated for the long-term benefit of humanity. Grosse is serving a five-year appointment as Schwartz Reisman Chair in Technology and Society until 2029, and is currently a member of technical staff on the alignment team at Anthropic.
➦ Moderating Session: Morning keynote
Sara Hooker (Virtual)
Head of Cohere For AI, VP Research, Cohere
Sara Hooker is a leading expert in machine learning (ML) research who specializes in developing more efficient, safe, and grounded AI models, with a strong focus on fundamental research, trustworthy AI, and real-world applications. With experience spanning industry, open research initiatives, and AI ethics, Hooker brings a deep understanding of the challenges and opportunities shaping the field and will deliver a keynote lecture on her recent work. Hooker leads Cohere For AI, a non-profit research lab that seeks to solve complex ML problems and to create more points of entry into ML research.
➦ Morning Keynote: The slow death of scaling: what it means for policy controlling compute
David Lie
Professor, Department of Electrical and Computer Engineering, University of Toronto; Director, Schwartz Reisman Institute for Technology and Society
David Lie is a world-leading computer security expert known for seminal work that led to modern trusted execution processor architectures. As a professor in the Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE) at the University of Toronto, Lie works to ensure that the computing infrastructure our societies rely on is secure, reliable, and trustworthy, a goal that grows more pressing as computer systems increasingly permeate our lives. Now the institute's director, Lie was one of the Schwartz Reisman Institute for Technology and Society's inaugural research leads when the institute was established in 2019, and has worked on a number of interdisciplinary research projects at the intersection of computing, policy, law, and the use, stewardship, and governance of data.
➦ Session: Opening remarks and director’s address
Event MC
Atoosa Kasirzadeh
Assistant Professor, AI Governance, Ethics, and Safety, Carnegie Mellon University
Atoosa Kasirzadeh is a philosopher and AI researcher with a track record of publications on the ethics and governance of AI and computing. Atoosa is a 2024 Schmidt Sciences AI2050 Early Career Fellow and a Steering Committee Member for the ACM FAccT conference. In December 2024, she joined Carnegie Mellon University as a tenure-track Assistant Professor with joint affiliations in the Philosophy and Software & Societal Systems departments. From 2025 to 2027, Atoosa is a council member of the World Economic Forum’s Global Future Council on Artificial General Intelligence. Previously, she was visiting faculty at Google Research, a Chancellor’s Fellow and Research Lead at the University of Edinburgh’s Centre for Technomoral Futures, a Group Research Lead at the Alan Turing Institute, a DCMS/UKRI Senior Policy Fellow, and a Governance of AI Fellow at Oxford.

Atoosa holds two doctoral degrees: a PhD in Philosophy of Science and Technology from the University of Toronto and a PhD in Mathematics (Operations Research) from the École Polytechnique de Montréal, as well as a BSc and MSc in Systems Engineering. Her research combines quantitative, qualitative, and philosophical methods to explore questions about the societal impacts, governance, and future of AI and humanity. Her work has been featured in major media outlets including The Wall Street Journal, The Atlantic, and TechCrunch, and she frequently advises public and private institutions on responsible AI development and AI governance.
➦ Session: New frontiers in AI governance
Moderator: Karina Vold
Megan Ma (Virtual)
Executive Director, Stanford Legal Innovation through Frontier Technology Lab (liftlab)
Megan Ma is the Executive Director of the Stanford Legal Innovation through Frontier Technology Lab (liftlab). Her research focuses on the use and integration of generative AI in legal applications and the translation of legal knowledge to code, considering their implications in contexts of human-machine collaboration. Her particular focus is on questions of legal education and the future of practice in light of new technological developments. She also teaches courses in computational law at the Law School and Department of Computer Science.
Dr. Ma is also currently an Advisor to the PearX for AI program, Editor-in-Chief of the Cambridge Forum on AI, Law, and Governance, Managing Editor of the MIT Computational Law Report, and a Senior Research Affiliate at the Centre for Digital Law at Singapore Management University. She received her PhD in Law from Sciences Po, where she was also a lecturer, teaching courses in Artificial Intelligence and Legal Reasoning, Legal Semantics, and Public Health Law and Policy. She was previously a visiting PhD researcher at the University of Cambridge and at Harvard Law School.
➦ Session: Navigating autonomy and accountability in AI agents
Moderator: Anna Su
Sheila McIlraith
Professor, Department of Computer Science, University of Toronto; Canada CIFAR AI Chair, Vector Institute for Artificial Intelligence; Associate Director, Schwartz Reisman Institute for Technology and Society
Sheila McIlraith is a Professor in the Department of Computer Science, University of Toronto, a Canada CIFAR AI Chair (Vector Institute for Artificial Intelligence), and Associate Director and Research Lead of the Schwartz Reisman Institute for Technology and Society (SRI). McIlraith’s research focuses on AI sequential decision making, broadly construed, through the lens of human-compatible AI. McIlraith is a fellow of the ACM and of the Association for the Advancement of Artificial Intelligence (AAAI). She is currently serving as Chair-Elect of the Standing Committee of the One Hundred Year Study on Artificial Intelligence (AI100) at Stanford. McIlraith has served on the editorial boards of JAIR, AIJ, and AI Magazine. She has served as program co-chair of AAAI 2018, KR 2012, and ISWC 2004, and as conference co-chair of ICAPS 2024. McIlraith’s research, much of it with her students, has been honoured with a number of paper awards, including three test-of-time awards. McIlraith initiated and co-leads the University of Toronto Embedded Ethics Education Initiative (E3I), an award-winning initiative that equips computer science students with the knowledge and skills to contemplate ethical considerations in the design and deployment of technology.
➦ Moderating Keynote: Gradual disempowerment: Systemic existential risks from incremental AI development
Nitarshan Rajkumar
Vice-Chair, General-Purpose AI Code of Practice, European Commission; Visiting Fellow, Oxford Martin AI Governance Initiative
Nitarshan Rajkumar is a visiting fellow at the Oxford Martin AI Governance Initiative at the University of Oxford, a PhD candidate researching AI at the University of Cambridge, and a Vice-Chair leading the drafting of the EU’s General-Purpose AI Code of Practice. He was previously Senior Policy Adviser to the UK Secretary of State for Science, Innovation and Technology, a role in which he co-founded the AI Security Institute and co-created the AI Safety Summit and the UK's supercomputing programme. Prior to his time in public service, he was an AI researcher at Mila in Montréal, and a software engineer at startups in San Francisco.
➦ Session: New frontiers in AI governance
Moderator: Karina Vold
Atrisha Sarkar
Assistant Professor, Department of Electrical and Computer Engineering, Western University; Faculty Affiliate, Schwartz Reisman Institute for Technology and Society
Atrisha Sarkar is an Assistant Professor in Electrical and Computer Engineering at Western University. Her research spans multiagent systems, behavioral game theory, and computational methods for ensuring the safety of human-centric AI at both individual and societal levels. Her interdisciplinary work has addressed challenges in autonomous vehicle safety, online polarization, and human-AI interaction. She was part of the team behind one of the first self-driving cars on Canadian roads. Atrisha’s research is published across AI, robotics, software engineering, and institutional economics. She holds a Ph.D. in Computer Science from the University of Waterloo and was a postdoctoral fellow at the Schwartz Reisman Institute for Technology and Society, University of Toronto. Prior to academia, she spent eight years in industry, primarily at IBM software labs.
➦ Session: Navigating autonomy and accountability in AI agents
Moderator: Anna Su
Anna Su
Associate Professor, Faculty of Law, University of Toronto; Research Lead, Schwartz Reisman Institute for Technology and Society
Anna Su's primary areas of research include the law and history of international human rights, comparative constitutional law, technology and international law, and law and religion. She is currently a research lead at the Schwartz Reisman Institute for Technology and Society, and is also a Nootbaar Institute Fellow on Law and Religion at Pepperdine University School of Law. Su holds an SJD from Harvard Law School, where her dissertation was awarded the John Laylin Prize for the best paper in international law. She received her JD and AB degrees from the Ateneo de Manila University in the Philippines. Prior to coming to Toronto, she held a postdoctoral fellowship at the Baldy Center for Law and Social Policy at SUNY Buffalo Law School and a graduate fellowship in ethics at the Edmond J. Safra Center for Ethics at Harvard University. She worked as a law clerk for the Philippine Supreme Court and was a consultant to the Philippine government negotiating panel with the Moro Islamic Liberation Front. She holds a cross-appointment at U of T’s Department of History (by courtesy).
➦ Moderating Session: Navigating autonomy and accountability in AI agents
Karina Vold
Assistant Professor, Institute for the History and Philosophy of Science and Technology, University of Toronto; Research Lead, Schwartz Reisman Institute for Technology and Society
Karina Vold works at the intersection of the philosophy of cognitive science, the philosophy of technology, artificial intelligence (AI), and applied ethics. She is cross-appointed in the Institute for the History and Philosophy of Science and Technology (IHPST) and the Department of Philosophy at the University of Toronto. Before joining the IHPST, she worked as a postdoctoral research associate at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.
In her recent work, Vold has written on the implications of theories of extended cognition, on responsible innovation in online therapy, and on the capabilities and risks of AI. Her current projects include researching the effects of AI on human cognition and autonomy, understanding the harms arising from targeted online “nudging,” evaluating arguments for existential threats from AI, and building frameworks for the ethical design of AI systems.
➦ Moderating Session: New frontiers in AI governance