SPEAKERS 2023

Syed Ishtiaque Ahmed

Syed Ishtiaque Ahmed is an assistant professor of computer science at the University of Toronto and the director of the Third Space research group. He is also a graduate faculty member of the School of Environment, a faculty fellow at the Schwartz Reisman Institute for Technology and Society, and a senior fellow at Massey College. He co-organizes the monthly Critical Computing Seminar at U of T and co-steers U of T's SDG initiative. Ahmed’s research focuses on the design challenges of strengthening the voices of marginalized communities around the world. He has conducted ethnography and built technologies with many underprivileged communities in Bangladesh, India, Pakistan, Iran, Iraq, Turkey, China, Canada, and the US. Ahmed received his PhD and master’s degrees from Cornell University, and his bachelor’s degree from BUET in Bangladesh. He is a recipient of the International Fulbright Science and Technology Fellowship, the Fulbright Centennial Fellowship, and the Schwartz Reisman Institute Fellowship, among others. His research has been funded by all three Canadian tri-council research agencies (NSERC, CIHR, SSHRC), as well as by NSF, NIH, Google, Microsoft, Facebook, Intel, Samsung, the World Bank, UNICEF, and UNDP.

➦ Session: The limits of AI: Roundtable 1

 
Ashton Anderson

Ashton Anderson is an assistant professor of computer science at the University of Toronto and a research lead at the Schwartz Reisman Institute for Technology and Society. Anderson is broadly interested in research that bridges the gap between computer science and the social sciences. He holds a PhD in computer science from Stanford University. Anderson is a co-creator of the Maia Chess project.

➦ Session: Large language models

 
Blaise Agüera y Arcas

Blaise Agüera y Arcas is a VP and Fellow at Google Research. His focus is on augmentative, privacy-first, and collectively beneficial applications. Agüera y Arcas has been an active participant in cross-disciplinary dialogues about AI and ethics, fairness and bias, policy, and risk. Until 2014 he was a Distinguished Engineer at Microsoft. Outside the tech world, Agüera y Arcas has worked on computational humanities projects such as digital reconstruction of photographs. In 2008, he was awarded MIT’s TR35 Prize. In 2018 and 2019 he taught “Intelligent Machinery, Identity, and Ethics” at the University of Washington, placing computing and AI in a broader historical and philosophical context.

➦ Sessions: 2023 Keynote, Value alignment?

 
Reem Ayad

Reem Ayad is a PhD student in the University of Toronto’s Department of Psychology and a graduate fellow at the Schwartz Reisman Institute. She studies moral judgment in the context of human-machine interaction, with a particular focus on virtual AI systems. Her current research seeks to understand whether nurturing feelings of “closeness” with virtual assistants influences our moral judgment of them.

➦ Session: The limits of AI: Roundtable 1

 

Lauren Bialystok

Lauren Bialystok is an associate professor at the Ontario Institute for Studies in Education in the Department of Social Justice Education, a faculty associate at the Anne Tanenbaum Centre for Jewish Studies and the Centre de Recherches en Education Franco-Ontarienne, and acting director of the Centre for Ethics at the University of Toronto. Her areas of expertise are ethics and education, identity, feminist philosophy, social and political philosophy, and women's health and sexuality. Bialystok works with students whose areas of inquiry include gender and queer theory, sex education, philosophy of education, and identity in education.

➦ Session: Large language models

 
William Cunningham

William Cunningham is a professor of psychology at the University of Toronto who studies the processes underlying social cognition and emotion. Cunningham’s research examines how social roles shape one’s self-concept and group identity, how basic cognitive and motivational biases underpin the development and maintenance of stereotypes, and how social pressures help promote prosocial behavior. To understand these processes, his lab uses methods and theories from psychology (e.g., models of attitudes and latency-based evaluation measures) and cognitive science (e.g., biological models of emotion, fMRI/EEG methods, computational modelling).

➦ Session: Testing social cognitive theory with AI

 
Polly Denny

Performing on stages such as the Royal Albert Hall and with partners such as the BBC, Polly Denny is a poet, performer, and facilitator who explores emotion, confidence, and imagination. Ranging from humor and wit to the sincere and moving, her work takes up a wide range of ideas, with a particular interest in the nature of creativity and its impact on human expression and our connection to the world and each other. Denny has served as the Young Poet Laureate for Bath, is an alumna of the Roundhouse’s Words First Programme, and is a UK National Slam Champion. She is currently exploring the intersection of creativity and artificial intelligence, and asking what this means for the imaginative mind.

➦ Session: AI and creativity

 
Avi Goldfarb

Avi Goldfarb is the Rotman Chair in Artificial Intelligence and Healthcare and a professor of marketing at the Rotman School of Management, University of Toronto. He is also chief data scientist at the Creative Destruction Lab, a research lead at the Schwartz Reisman Institute for Technology and Society, a faculty affiliate at the Vector Institute, and a research associate at the National Bureau of Economic Research. Goldfarb’s research focuses on the economic effects of information technology. He is co-author of Prediction Machines: The Simple Economics of Artificial Intelligence (2018) and Power and Prediction: The Disruptive Economics of Artificial Intelligence (2022) with Ajay Agrawal and Joshua Gans. He has published academic articles in marketing, statistics, law, management, medicine, political science, refugee studies, physics, computing, and economics. 

➦ Session: Machine learning in the workplace

 
Paolo Granata

Paolo Granata is an associate professor in Book and Media Studies at St. Michael’s College in the University of Toronto, where his research and teaching interests lie broadly in the areas of media ecology, media ethics, semiotics, print culture, and visual studies. In 2019, he founded the Media Ethics Lab with the mission of studying and protecting human rights in the digital sphere. In addition to his research and teaching, Granata is a cultural strategist, curator, advocate of sustainable development, and regular commentator and speaker on the future of education, including in his work with UNESCO and the City of Toronto. Granata has authored four books and more than 50 publications in Italian, English, French, and Spanish.

➦ Session: Large language models

 
Julia Haas

Julia Haas is a senior research scientist on the Ethics Research Team at DeepMind. Haas was previously an assistant professor in the Department of Philosophy and the Neuroscience Program at Rhodes College and an affiliated researcher with the Australian National University’s Humanizing Machine Intelligence Grand Challenge. She was also a research fellow in the ANU School of Philosophy and a McDonnell Postdoctoral Research Fellow in the Philosophy-Neuroscience-Psychology program at Washington University in St. Louis. Haas’s research is in the philosophy of cognitive science and neuroscience. She works on the nature of valuation and its role in theories of the mind. Her current work includes investigating the possibility of meaningfully moral artificial intelligence.

➦ Session: The reward hypothesis

 
Gillian Hadfield

Gillian K. Hadfield is a professor of law and strategic management at the University of Toronto, the inaugural director of the Schwartz Reisman Institute for Technology and Society, and a CIFAR AI Chair at the Vector Institute. Hadfield’s research focuses on innovative design for legal and regulatory systems for AI and other complex global technologies; computational models of human normative systems; and working with machine learning researchers to build ML systems that understand and respond to human norms. Hadfield is a faculty affiliate at the Center for Human-Compatible AI at the University of California, Berkeley, and a senior policy advisor at OpenAI. Her book Rules for a Flat World: Why Humans Invented Law and How to Reinvent It for a Complex Global Economy was published by Oxford University Press in 2017; a paperback edition with a new prologue on AI was published in 2020, and an audiobook version was released in 2021.

➦ Session: The reward hypothesis

 
N. Katherine Hayles

N. Katherine Hayles teaches and writes on the relations of literature, science, and technology in the 20th and 21st centuries. Her most recent book, Postprint: Books and Becoming Computational, was published by Columbia University Press in spring 2021. Among her dozen books are How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics, which won the René Wellek Prize for the Best Book in Literary Theory for 1998–99, and Writing Machines, which won the Susanne Langer Award for Outstanding Scholarship. She is a member of the American Academy of Arts and Sciences. In addition, she is Distinguished Research Professor of English at the University of California, Los Angeles and the James B. Duke Professor of Literature Emerita at Duke University.

➦ Session: AI and creativity

 
Cendri Hutcherson

Cendri Hutcherson is the director of the Toronto Decision Neuroscience Laboratory and an associate professor of psychology at the University of Toronto, with a cross-appointment to the Rotman School of Management. She received degrees in psychology from Harvard (BA) and Stanford (PhD), and worked as a post-doctoral scholar studying neuroeconomics at the California Institute of Technology. Her research program applies computational modeling to behavior, eye tracking, EEG, and fMRI data, with the goal of understanding how we make decisions and why we sometimes make decisions we later regret.

➦ Session: The limits of AI: Roundtable 1

 
Ganaele Langlois

Ganaele Langlois is an associate professor in communication studies at York University and associate director of the Infoscape Centre for the Study of Social Media. Her research interests lie in media theory and critical theory, particularly with regard to the shaping of subjectivity and agency through and with media technologies. She is the author of Meaning in the Age of Social Media (Palgrave, 2014), and co-editor of Compromised Data? From Social Media to Big Data (Bloomsbury, 2015) and a series of special issues on the Canadian alt-right for the Canadian Journal of Communication (2021–22). Langlois is a co-principal investigator on the SSHRC-funded "Beyond Verification" and Mellon-funded "Data Fluencies" projects, which explore mis- and disinformation. She is currently working on a research project about textile as communication, for which she received a SSHRC Insight Development Grant and an Ontario Arts Council grant. Her research has been published in New Media and Society, Culture Machine, Communication and Critical/Cultural Studies, Television and New Media, and Fibreculture.

➦ Session: The limits of AI: Roundtable 2

 
Joel Z. Leibo

Joel Z. Leibo is a senior staff research scientist at Google DeepMind. His research is concerned with studying cooperation in both humans and machines, and with evaluating AI capabilities. He is interested in reverse engineering human biological and cultural evolution to inform the development of multi-agent artificial intelligence that is simultaneously human-like and human-compatible.

➦ Session: Testing social cognitive theory with AI

 
Nicolas Papernot

Nicolas Papernot is an assistant professor in the Department of Electrical and Computer Engineering and the Department of Computer Science at the University of Toronto. He is also a faculty member at the Vector Institute where he holds a Canada CIFAR AI Chair, and a faculty affiliate at the Schwartz Reisman Institute for Technology and Society. Papernot’s research interests are at the intersection of security, privacy, and machine learning. He earned his PhD in computer science and engineering at Pennsylvania State University, working with Patrick McDaniel and supported by a Google PhD Fellowship. In 2022, he was named an Alfred P. Sloan Research Fellow in Computer Science.

➦ Session: The limits of AI: Roundtable 2, Testing social cognitive theory with AI

 
Ceceilia Parnther

Ceceilia Parnther is an assistant professor and program coordinator in the Department of Administrative and Instructional Leadership at St. John’s University. Her research focuses on equity in postsecondary student success. Parnther has also held academic support roles for over 15 years at a variety of institutions. In her work on mentoring, academic integrity, and leadership, Parnther uses qualitatively dominant mixed-methods approaches to describe and explore our understanding of student support. Parnther is an editor of the SAGE Student Success resource and serves on the editorial board for the Ethics and Integrity in Educational Contexts book series.

➦ Session: Large language models

 
Mohammad Rashidujjaman Rifat

Mohammad Rashidujjaman Rifat is a PhD candidate in the Department of Computer Science and a Schwartz Reisman Institute graduate fellow at the University of Toronto. He is a member of the Dynamic Graphics Project (DGP) lab and the Third Space research group, and is supervised by Syed Ishtiaque Ahmed. Alongside his PhD in computer science, Rifat is pursuing a doctoral specialization in South Asian studies at the Munk School of Global Affairs and Public Policy at the University of Toronto. Rifat’s research in human-computer interaction (HCI), computer-supported cooperative work and social computing (CSCW), and information and communication technologies for development (ICTD) sits at the intersection of faith and computation. Through ethnographic, computational, and design research, he studies faith-based groups and institutions to explore how religious, spiritual, and traditional ethics and politics are excluded from computing technologies, and develops theories and designs socio-technical systems in which plural forms of values and ethics can coexist.

➦ Session: SRI Graduate Workshop

 
Jennifer Raso

Jennifer Raso is an assistant professor at McGill University’s Faculty of Law whose research investigates the relationship between discretion, data-driven technologies, and administrative law. She is particularly intrigued by how humans and non-humans collaborate and diverge as they produce institutional decisions, and the consequences for procedural fairness and substantive justice. Raso is presently exploring these issues as the principal investigator on a SSHRC Insight Development Grant project, “Shifting Front Lines in the Digital Welfare State: Coding Canadian Social Assistance Laws.” Before joining McGill, Raso was a post-doctoral fellow at the University of New South Wales Faculty of Law, a visiting fellow at the Yale Law School Information Society Project, a visiting researcher at the University of California Berkeley Center for the Study of Law and Society, and a lawyer for the City of Toronto. An award-winning socio-legal scholar, Raso has received the Canadian Law and Society Association’s Best English-Language Article prize (2018), and the inaugural Richard Hart Prize at the University of Cambridge’s Public Law Conference (2016).

➦ Session: The limits of AI: Roundtable 2

 
Daniel Rock

Daniel Rock is an assistant professor in the Operations, Information, and Decisions Department at the Wharton School of the University of Pennsylvania. Rock’s research is on the economic effects of digital technologies, with a particular emphasis on AI. He has recently conducted studies on the impact of AI on the nature of work. His research has been published in various academic journals and featured in outlets such as The New York Times, Wall Street Journal, Bloomberg, and Harvard Business Review. Much of his work involves applying cutting-edge data science techniques to analyze datasets from financial market data sources, online resume sites, and job postings. 

➦ Session: Machine learning in the workplace

 
Frank Rudzicz

Frank Rudzicz is a scientist at the Li Ka Shing Knowledge Institute at St. Michael’s Hospital, director of artificial intelligence at Surgical Safety Technologies Inc., an associate professor in the Departments of Computer Science at Dalhousie University and the University of Toronto, co-founder of WinterLight Labs Inc., a faculty member at the Vector Institute for Artificial Intelligence, inaugural chair of the Standards Council of Canada’s subcommittee on artificial intelligence, and a CIFAR Chair in artificial intelligence. Rudzicz’s work is in machine learning in healthcare, especially in natural language processing, speech recognition, and surgical safety. His research has appeared in popular media and scientific press. He is the recipient of the Excellence in Applied Research Award from National Speech-Language & Audiology Canada and the Connaught Innovation Award. 

➦ Session: Machine learning in the workplace

 
Avery Slater

Avery Slater is an assistant professor of English at the University of Toronto and a 2023–25 faculty fellow at the Schwartz Reisman Institute. Her research investigates the re-conceptualization of human and nonhuman forms of language following the rise of information and computational technologies. Slater has held fellowships at the University of Pennsylvania’s Penn Humanities Forum, the Society for the Humanities at Cornell, and the University of Texas at Austin. Her work has appeared in New Literary History, Cultural Critique, and Amodern, and in the edited collections Saturation: An Elemental Politics (Duke UP) and The Oxford Handbook of Ethics of AI (Oxford UP).

➦ Session: AI and creativity

 
Richard S. Sutton

Richard S. Sutton is one of the pioneers of reinforcement learning, a field in which he continues to lead the world. He is most interested in understanding what it means to be intelligent, to predict and influence the world, to learn, perceive, act, and think. He seeks to identify general computational principles underlying what we mean by intelligence and goal-directed behaviour. Sutton currently seeks to extend reinforcement learning ideas to an empirically grounded approach to knowledge representation based on prediction. Sutton is chief scientific advisor, a fellow and Canada CIFAR AI Chair at Amii, a professor of computing science at the University of Alberta, and a distinguished research scientist at DeepMind. Sutton has been named a Fellow of the Royal Society of Canada, the Association for the Advancement of Artificial Intelligence (AAAI), and the Canadian Artificial Intelligence Association (CAIAC), where he also received a Lifetime Achievement Award in 2018. 

➦ Sessions: The reward hypothesis, Value alignment?

 
Yuxing Zhang

Yuxing Zhang is a PhD candidate at the Faculty of Information, University of Toronto, and a graduate fellow at the Schwartz Reisman Institute for Technology and Society. Her research interests include critical data studies, media theory, platform studies, media infrastructure, ethics of AI, precision agriculture, space media, and knowledge politics. Her work has been published in Media, Culture & Society, Roadsides, and the Canadian Journal of Communication. Her teaching focuses on power and information systems, and ethical issues in algorithmic technologies.

➦ Session: The limits of AI: Roundtable 2