Driving real-world impact from health research
3 December 2024 | In-person, BMA House, London

David Leslie

Director of Ethics and Responsible Innovation Research, The Alan Turing Institute

David Leslie is the Director of Ethics and Responsible Innovation Research at The Alan Turing Institute. Before joining the Turing, he taught at Princeton’s University Center for Human Values, where he also participated in the UCHV’s 2017-2018 research collaboration with Princeton’s Center for Information Technology Policy on “Technology Ethics, Political Philosophy and Human Values: Ethical Dilemmas in AI Governance.” Prior to teaching at Princeton, David held academic appointments at Yale’s programme in Ethics, Politics and Economics and at Harvard’s Committee on Degrees in Social Studies, where he received over a dozen teaching awards, including the 2014 Stanley Hoffman Prize for Teaching Excellence. He was also a 2017-2018 Mellon-Sawyer Fellow in Technology and the Humanities at Boston University and a 2018-2019 Fellow at MIT’s Dalai Lama Center for Ethics and Transformative Values.

David has served as an elected member of the Bureau of the Council of Europe’s Ad Hoc Committee on Artificial Intelligence (CAHAI). He is on the editorial board of the Harvard Data Science Review (HDSR) and is a founding editor of the Springer journal AI and Ethics.

He is the author of the UK Government’s official guidance on the responsible design and implementation of AI systems in the public sector, Understanding artificial intelligence ethics and safety (2019), and a principal co-author of Explaining decisions made with AI (2020), co-badged guidance on AI explainability published by the Information Commissioner’s Office and The Alan Turing Institute. He is also Principal Investigator of the UKRI-funded project PATH-AI: Mapping an Intercultural Path to Privacy, Agency and Trust in Human-AI Ecosystems, a research collaboration with RIKEN, one of Japan’s National Research and Development Institutes, founded in 1917. Most recently, he has received a series of grants from the Global Partnership on AI, the Engineering and Physical Sciences Research Council, and BEIS to lead a project titled Advancing Data Justice Research and Practice, which explores how current discourses around data justice, and digital rights more generally, can be extended beyond the predominance of Western-centred and Global North standpoints to non-Western and intercultural perspectives alive to issues of structural inequality, coloniality, and discriminatory legacies.

David was a Principal Investigator and lead co-author of the NESTA-funded Ethics review of machine learning in children’s social care (2020). His other recent publications include the HDSR articles “Tackling COVID-19 through responsible AI innovation: Five steps in the right direction” (2020) and “The arc of the data scientific universe” (2021), as well as Understanding bias in facial recognition technologies (2020), an explainer published to support a BBC investigative journalism piece that won the 2021 Royal Statistical Society Award for Excellence in Investigative Journalism. David is also a co-author of Mind the gap: how to fill the equality and AI accountability gap in an automated world (2020), the final report of the Institute for the Future of Work’s Equality Task Force, and lead author of “Does AI stand for augmenting inequality in the COVID-19 era of healthcare” (2021), published in the British Medical Journal.
He is additionally the lead author of Artificial intelligence, human rights, democracy, and the rule of law (2021), a primer prepared to support the CAHAI’s Feasibility Study and translated into Dutch and French, and of Human rights, democracy, and the rule of law assurance framework for AI systems: A proposal. In his shorter writings, David has explored subjects such as the life and work of Alan Turing, the Ofqual fiasco, the history of facial recognition systems, and the conceptual foundations of AI for popular outlets from the BBC to Nature.