SPEAKERS:
Absolutely Interdisciplinary 2021

Jesse Clifton

Jesse Clifton is a researcher at the Center on Long-Term Risk (formerly the Effective Altruism Foundation). He is also the interim director of the newly founded Cooperative AI Foundation. His main interest is in approaches to improving the cooperative intelligence of advanced AI systems, using tools from game theory and machine learning. His background is in statistics, specializing in reinforcement learning.

Session: Introducing the Cooperative AI Foundation and the Collective Intelligence Journal


Jeff Clune

Jeff Clune is a research team leader at OpenAI and an associate professor of computer science at the University of British Columbia.

Previously, Clune was a senior research manager and founding member of Uber AI Labs, which was formed after Uber acquired the startup Geometric Intelligence. Prior to Uber, he was the Loy and Edith Harris Associate Professor in Computer Science at the University of Wyoming.

Clune conducts research in three related areas of machine learning (and combinations thereof):

  • Deep learning: Improving our understanding of deep neural networks, harnessing them in novel applications, and advancing deep reinforcement learning.

  • Evolving neural networks: Investigating open questions in evolutionary biology regarding how intelligence evolved and harnessing those discoveries to improve our ability to evolve more complex, intelligent neural networks.

  • Robotics: Making robots more like animals in being adaptable and resilient.

A good way to learn about Clune’s research is by visiting his Google Scholar page, which lists all of his publications. You can also view videos about his work.

Session: Social Organisms and Social AI


Vincent Conitzer

Vincent Conitzer is the Kimberly J. Jenkins Distinguished University Professor of New Technologies and professor of computer science, professor of economics, and professor of philosophy at Duke University. He is also head of technical AI engagement at the Institute for Ethics in AI and professor of computer science and philosophy at the University of Oxford. He received his PhD (2006) and MS (2003) in computer science from Carnegie Mellon University, and an AB (2001) in applied mathematics from Harvard University.

Conitzer works on artificial intelligence (AI). Much of his work has focused on AI and game theory, for example, designing algorithms for the optimal strategic placement of defensive resources. More recently, he has started to work on AI and ethics: how should we determine the objectives that AI systems pursue, when these objectives have complex effects on various stakeholders?

Sessions: Computational Ethics, Introducing the Cooperative AI Foundation and the Collective Intelligence Journal


Ruairí Donnelly

Ruairí Donnelly is a philanthropist and investor. He currently serves as president of the Center for Emerging Risk Research, where he provides guidance on strategic direction and grantmaking. His grantmaking interests focus on the long-term impacts of AI.

Previously, he was Chief of Staff at Alameda Research and FTX, two leading companies in the cryptocurrency industry.

Session: Introducing the Cooperative AI Foundation and the Collective Intelligence Journal


Jessica Flack

Jessica Flack is a professor at the Santa Fe Institute, director of SFI's Collective Computation Group, and a chief editor of the new transdisciplinary journal Collective Intelligence. Previously, she was founding director of the University of Wisconsin-Madison's Center for Complexity and Collective Computation in the Wisconsin Institutes for Discovery.

Flack is interested in the roles of information processing and collective computation in the emergence of robust but evolvable structure and function in biological and social systems. Her goals include identifying the computational principles that allow nature to overcome subjectivity due to information processing to produce ordered states, and accounting for the origins of space and time in biological systems. A central idea is that noisy information processors reduce uncertainty about the future by computing their macroscopic worlds through collective coarse-graining.

Flack's work has been covered in many publications and media outlets, including the BBC, NPR, Nature, Science, The Economist, New Scientist, Current Biology, The Atlantic, and Quanta Magazine. She also writes popular science articles on collective behavior and complexity science for magazines like Aeon, and will begin writing a regular column for Quanta in 2022. In 2020, the Information Theory of Individuality, the end product of a ten-year collaboration with David Krakauer and Nihat Ay, was chosen as a science breakthrough of the year by Quanta Magazine.

Session: Introducing the Cooperative AI Foundation and the Collective Intelligence Journal


Deborah M. Gordon

Deborah M. Gordon is a professor in the Department of Biology at Stanford University. She studies how ant colonies work without central control using networks of simple interactions, and how these networks evolve in relation to changing environments.

Gordon received her PhD from Duke University, then joined the Harvard Society of Fellows, and did postdoctoral research at Oxford and the University of London before joining the Stanford faculty in 1991. Her projects include a long-term study of a population of harvester ant colonies in Arizona, studies of the invasive Argentine ant in northern California, arboreal ant trail networks, and ant-plant mutualisms in Central America.

Session: Social Organisms and Social AI


Moritz Hardt

Moritz Hardt is an assistant professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His research engages with the consequential interplay of algorithms, data, and society. Hardt obtained a PhD in computer science from Princeton University with a dissertation on privacy-preserving data analysis and fairness in classification. He then held research positions at IBM Research and Google. Hardt co-founded the Workshop on Fairness, Accountability, and Transparency in Machine Learning. He is a co-author of "Fairness and Machine Learning: Limitations and Opportunities" and "Patterns, Predictions, and Actions: A Story about Machine Learning". He has received an NSF CAREER award, a Sloan fellowship, and best paper awards at ICML 2018 and ICLR 2017.

Session: Fairness in Machine Learning


Gillian K. Hadfield

Gillian K. Hadfield is the director of the Schwartz Reisman Institute for Technology and Society. She is the Schwartz Reisman Chair in Technology and Society, professor of law and of strategic management at the University of Toronto, a faculty affiliate at the Vector Institute for Artificial Intelligence, and a senior policy advisor at OpenAI. Her current research is focused on innovative design for legal and regulatory systems for AI and other complex global technologies; computational models of human normative systems; and working with machine learning researchers to build ML systems that understand and respond to human norms.

Sessions: Opening talk, Cooperative Intelligence, Introducing the Cooperative AI Foundation and the Collective Intelligence Journal


Eric Horvitz 

Eric Horvitz serves as Microsoft’s Chief Scientific Officer. Dr. Horvitz provides cross-company leadership and perspectives on scientific advances and trends, and on issues and opportunities arising at the intersection of technology, people, and society. He is recognized for his research on the challenges and opportunities of using AI technologies amidst the complexities of the open world. Dr. Horvitz is the recipient of the Feigenbaum Prize and the Allen Newell Prize for contributions to AI. He serves on the President’s Council of Advisors on Science and Technology (PCAST), advising U.S. leadership on matters involving science, technology, education, and innovation policy.

Session: Introducing the Cooperative AI Foundation and the Collective Intelligence Journal


Joel Z. Leibo

Joel Z. Leibo is a research scientist at DeepMind and a research affiliate with the McGovern Institute for Brain Research at MIT.

His research is aimed at the following questions:

  • How can we get deep reinforcement learning agents to perform complex cognitive behaviors like cooperating with one another in groups?

  • How should we evaluate the performance of deep reinforcement learning agents?

  • How can we model processes like cumulative culture that gave rise to unique aspects of human intelligence?

Session: Cooperative Intelligence


Sarah Mathew

Sarah Mathew is an associate professor at the School of Human Evolution and Social Change at Arizona State University. Mathew investigates why humans, unlike other animals, cooperate in groups comprising large numbers of genetically unrelated individuals, and how the evolution of this unique form of cooperation is tied to the origins of moral sentiments, cultural norms, and warfare.

Mathew’s research combines formal modelling of the evolution of cooperation with fieldwork to test theories of how cooperation is sustained. Mathew has been running a field project in Kenya among the Turkana, a politically decentralized pastoral society with limited market integration, examining how and at what scale people cooperate when centralized political institutions are absent. Her findings have pointed to the foundational role of informal cultural norms, and thereby the human capacity for cultural transmission, in the emergence of human cooperation.

Session: Cooperative Intelligence


Deirdre Mulligan

Deirdre K. Mulligan is a professor in the School of Information at UC Berkeley, a faculty director of the Berkeley Center for Law & Technology, a co-organizer of the Algorithmic Fairness & Opacity Working Group, an affiliated faculty member of the Hewlett-funded Berkeley Center for Long-Term Cybersecurity, and a faculty advisor to the Center for Technology, Society & Policy.

Mulligan’s research explores legal and technical means of protecting values such as privacy, freedom of expression, and fairness in emerging technical systems. Her book, Privacy on the Ground: Driving Corporate Behavior in the United States and Europe, a study of privacy practices in large corporations in five countries, conducted with UC Berkeley’s Kenneth Bamberger, was recently published by MIT Press. Mulligan and Bamberger received the 2016 International Association of Privacy Professionals Leadership Award for their research contributions to the field of privacy protection.

Session: Fairness in Machine Learning


David Rokeby

David Rokeby is an artist who works with a variety of digital media to critically explore the impacts these media are having on contemporary human lives. His early work Very Nervous System (1982-1991) was a pioneering work of interactive art, translating physical gestures into real-time interactive sound environments. He has exhibited and lectured extensively internationally and has received numerous international awards, including a Governor General’s Award in Visual and Media Arts (2002), a Prix Ars Electronica Golden Nica for Interactive Art (2002), and a British Academy of Film and Television Arts (BAFTA) award in Interactive Art (2000). Rokeby is the director of the University of Toronto’s BMO Lab.

Session: Special Event: Fresh Bard


Johanna Thoma

Johanna Thoma is a philosopher working at the intersection of philosophy, economics, and public policy. More specifically, she thinks and writes about practical rationality and decision theory, ethics, public policy evaluation, economic methodology, and the application of economic methods to philosophical problems. What unites most of her work to date is a desire to better understand what morality and rationality require of us when we don’t know what the future may hold. Recently, she has thought a lot about whether ordinary people and policy-makers faced with uncertainty are required to follow some variant of expected utility theory, what that even means in the first place, what the alternatives may be, and how plausible these are. What’s at stake in this debate, in practical terms, is how much room we can make for genuine risk aversion and precaution in public policy and in our ordinary lives. Thoma is an associate professor at the Department of Philosophy, Logic and Scientific Method at the London School of Economics.

Session: Computational Ethics