SCHEDULE:
Absolutely Interdisciplinary 2021


Thursday, November 4, 2021 (Graduate Workshop: Views on Techno-Utopia)

10:00 - 10:05 AM ET: Opening remarks

10:05 - 11:05 AM: Identity in Utopia

11:30 AM - 12:30 PM: Data in Utopia

1:00 - 2:00 PM: Robots in Utopia


Friday, November 5, 2021

12:00 - 12:30 PM ET: Gillian Hadfield, “Human and Machine Normativity: New Connections”

12:30 - 1:45 PM: Social Organisms and Social AI

2:05 - 3:20 PM: Fairness in Machine Learning

3:30 PM: Special Event: Fresh Bard


Saturday, November 6, 2021

12:00 PM ET: Introducing the Cooperative AI Foundation and the Collective Intelligence Journal

12:30 PM: Cooperative Intelligence

2:05 PM: Computational Ethics


Theme | Human and Machine Normativity: New Connections

Humans are a fundamentally normative species, with complex cognitive and social systems for shaping behaviour to implement collectively determined values and norms that support cooperation. Norms in this sense refer not to what most people actually do, but rather to what people should do: the ubiquitous formal and informal prescriptive rules of behaviour—everything from the seemingly arbitrary, such as what clothing to wear to a funeral, to the clearly important, such as avoiding injury or harm to others.

Building AI systems that are robustly aligned with human values requires a deep understanding of how these normative systems work. At the same time, advances in AI present unique opportunities to investigate and test what capacities contribute to our ability to build, maintain, and abide by norms.

Absolutely Interdisciplinary sets out to foster the interdisciplinary conversations needed to map the connections between human and AI normativity. Participants will contribute to and learn about emerging areas of research and new questions to explore. Each session will pair researchers from different disciplines to address a common question, and then facilitate a group discussion. By identifying people working on similar questions from different perspectives, we will foster conversations that develop the interdisciplinary approaches and research questions needed to understand how AI can be made to align with the full range of diverse human normative systems.


Sessions | Absolutely Interdisciplinary 2021


Social organisms and social AI

Date/Time: November 5th, 2021 | 12:30 PM ET

Speakers: Jeff Clune, Deborah Gordon
Moderator: Denis Walsh

What can AI models teach us about the evolution of complex organisms? Can evolutionary approaches provide the key to more powerful AI systems?

Summary: Biological models have played a significant role in the development of artificial intelligence, including the concepts of neural networks and genetic algorithms. At the same time, computer simulations can provide insight into questions in evolutionary biology. One striking commonality is the modular architecture of both evolving organisms and AI. New work in open-ended algorithms pushes these connections further, trying to understand and algorithmically recreate the evolutionary conditions that gave rise to complex adaptive intelligences on Earth. In particular, there are intriguing parallels between multi-agent AI and multi-agent evolution. How does the modularity of multi-agent systems relate to their evolutionary dynamics? Can approaches to training AI give us new insights into evolutionary processes? And do evolutionary approaches hold the key to creating powerful, perhaps even human-level, AI?


Fairness in machine learning

Date/Time: November 5th, 2021 | 2:05 PM ET

Speakers: Moritz Hardt, Deirdre Mulligan
Moderators: Toniann Pitassi, Sheila McIlraith

How do we develop methods to encompass broader notions of fairness, such as those found in law or philosophy? What are the major opportunities and obstacles in fairness research?

Summary: Machine learning algorithms are becoming increasingly important as a tool for decision-making across a variety of high-stakes, complex domains, raising concerns about the societal impact and potential bias of automated decision-making. Building and auditing fair AI systems requires formal definitions of fairness, which can struggle to incorporate everyday conceptions of fairness and the complex legal, procedural, and institutional approaches societies take to fairness. How can we ensure that AI decision-making is fair? And when is the solution to issues of fairness social, rather than technological?


Cooperative intelligence

Date/Time: November 6th, 2021 | 12:30 PM ET

Speakers: Sarah Mathew, Joel Z. Leibo
Moderator: Gillian Hadfield

What are the minimal components that a system requires for the emergence of a human-like normative system?

Summary: The field of AI has historically prioritized the challenges of individual intelligence: perception, reasoning, learning, and natural language. But humans are an inherently cooperative species, and the normative systems that enable this cooperation are likely a significant factor shaping human intelligence. Human normative systems are characterized by some behaviors becoming represented as rules, which are then enforced by sanctions on norm violators. In addition, we have meta-norms about how and when to enforce those sanctions. These meta-norms allow us to converge on new norms more quickly, adopt optimal behaviors in novel environments, and solve collective action problems. What evolved characteristics of humans grant us the unique ability to construct these normative systems? And what are the minimal capacities an AI would need to share this capacity?


Computational ethics

Date/Time: November 6th, 2021 | 2:05 PM ET

Speakers: Vincent Conitzer, Johanna Thoma
Moderator: Jennifer Nagel

Does AI create radically new problems in ethics? 

Summary: AI decision making is often treated as if it merely moves traditional ethical questions into AI systems. However, the need to mathematize ethical concepts in order to program them into algorithms, the scale of decisions that a single algorithm may make, and the way algorithms blur who is responsible for an outcome all have the potential to transform our ethical thinking. Can the complexities of moral reasoning be served by the power of contemporary AI? Does the increasing autonomy of AI systems create a need for a new kind of ethical thinking?

 

Special Event: Fresh Bard

Date/Time: Friday, November 5th, 2021 | 3:30 PM ET

Presented by David Rokeby and the BMO Lab Performers-in-Residence

Summary: The Performers-in-Residence at the University of Toronto’s BMO Lab have been exploring ways of combining language models like GPT-2 with live performance. We have fine-tuned a version of GPT-2 on the works of William Shakespeare, and invite these highly trained actors to perform the output of the Shakespeare model live as it is generated… to spontaneously wrestle meaning out of the not-quite nonsense, and to embrace and highlight the role of human interpretation in our relationships with these systems. We are excited to present an impromptu live performance at the conference. We think the result will be entertaining and will raise new kinds of questions about the use and implications of these natural language generation networks.

Introducing the Cooperative AI Foundation and the Collective Intelligence Journal

Date/Time: Saturday, November 6th, 2021 | 12:00 PM ET

Speakers: Jessica Flack, Eric Horvitz, Ruairí Donnelly, Jesse Clifton, Vincent Conitzer
Moderator: Gillian Hadfield

Summary: In this panel, we'll hear from the founders of two new initiatives opening up funding and publication opportunities for interdisciplinary teams studying cooperation and collective intelligence.

 

Sessions | Graduate Workshop 2021: Views On Techno-Utopia

The 2020-2021 cohort of Schwartz Reisman Graduate Fellows presents Views on Techno-Utopia, a one-day, online, interdisciplinary workshop for early career scholars.

Views on Techno-Utopia will bring together early career scholars in the sciences, social sciences, and humanities to follow emerging technologies—particularly AI, platforms, and surveillance tech—through the lens of techno-utopianism. 

Techno-utopianism predicts that technologies can help us overcome human flaws and usher in a better world. What sorts of societal changes can emerging technologies actually effect? How might we tell which technologies promise more than they can deliver, or carry still-hidden risks? What forms of institutional or legal design make techno-utopianism—or critiquing its reach—possible? When faced with the detrimental effects of technological solutions to social problems, the response is often to “fix” the technology, leaving the underlying optimistic narrative untouched. How might technological “fixes” reproduce the moral systems from which they hope to save us? To what extent does the allure of the “fix” obscure the techno-utopian assumptions that affect research and development, including your own?

This workshop will be a place of interdisciplinary encounter, so presentations will strive to be accessible to people outside of their field. For instance, science-oriented scholars will consider the moral, philosophical, or social implications of their work. Analogously, humanists will ground their presentations in concrete understandings of present and future technological possibilities. 


Identity in Utopia

Date/Time: November 4th, 2021 | 10:05 AM ET

Speakers:

Lilith Acadia, “Power on: The actually inhospitable smart homes of SF”

Rushay Naik, “Conflicts [and] interests: Implications of blockchain for delivering humanitarian healthcare in the fragile state”


Data in Utopia

Date/Time: November 4th, 2021 | 11:30 AM ET

Speakers:

Jamie Duncan, “Renegotiating the terms of service: Data governance and digital citizenship in Canada”

Julian Posada, “The platform dystopia: Labour commodification in outsourced data production for machine learning”


Robots in Utopia

Date/Time: November 4th, 2021 | 1:00 PM ET

Speakers:

Ke-Li Chiu, “Using large language models to detect hateful text contents”

Anne-Marie Fowler, “Better answers, or better questions: Is AI ethics about interruption?”