18-19 May, 2026
Harnack House
Berlin
Researchers of Behavioral Science, Complexity Science, and Cultural Evolution
As the rise of AGI and of large language models like GPT-4 reshapes our world, we face pressing questions: How do we ensure behavioral AI safety? What does the future of work and human communication look like? And how can we guide cultural shifts in this new age? This event is not just a meeting of minds but a call to action. We aim to collaboratively chart research priorities, foster community ties among machine behavior enthusiasts, and lay the groundwork for future engagements.
Andrea Baronchelli is a Professor of Complexity Science at City, University of London, and a Research Associate at the UCL Centre for Blockchain Technologies.
Emilio Calvano is a full Professor of Economics at the University of Rome – Tor Vergata, associate faculty at the Toulouse School of Economics, and a research fellow of the Centre for Economic Policy Research (CEPR) in London. He is an applied theoretical economist whose research focuses mainly on the theory of Industrial Organization.
Kinga Makovi is an Assistant Professor at New York University Abu Dhabi. She is a co-PI of the Center for Interacting Urban Networks (CITIES) at NYUAD. Prior to joining NYUAD, she earned a PhD in Sociology from Columbia University and an MS in Mathematical Economics from Corvinus University of Budapest.
Associate Professor Brian Earp, PhD, is director of the Oxford-NUS Centre for Neuroethics and Society (OCNS) and the EARP Lab (Experimental Bioethics, Artificial Intelligence, and Relational Moral Psychology Lab) within the Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore (NUS). Brian is also an Associate Professor of Philosophy and of Psychology at NUS by courtesy.
Kate Devlin is Professor of AI & Society in the Department of Digital Humanities, King's College London and is the current Chair-Director of the Digital Futures Institute. With an undergraduate degree in archaeology (QUB) and an MSc (QUB) and PhD (Bristol) in computer science, her research investigates how – and why – people interact with and react to technologies, both past and future.
Laura Weidinger is a researcher at DeepMind, working on ethical and social risks of harm from language models. Her background includes work on how humans make decisions and how this process compares to decision-making in machine learning systems.