Beyond Compliance 2023 - Speakers

A detailed list of speakers at Beyond Compliance 2023 is under construction on this page.


Daniel Leufer

The EU’s AI Act: (self-)regulation, risk and public accountability

The EU is leading the charge on regulating AI. After years of ethics guidelines and self-regulation, it looks like we’ll finally have regulation to rein in the harms of AI systems. However, the regulation in question, the Artificial Intelligence Act, is essentially product safety regulation and will take a ‘risk-based approach’ to regulating AI, thus leaving significant gaps in terms of how it protects people’s rights. In this talk, I will discuss the shortcomings of the AI Act’s risk-based approach, how improvements are being introduced during negotiations, the gaps that will remain, and how it could impact people’s rights as well as ongoing research and development in the EU.

Daniel is a Senior Policy Analyst at Access Now’s Brussels office. His work focuses on the impact of emerging technologies on digital rights, with a particular focus on artificial intelligence (AI), facial recognition, biometrics, and augmented and virtual reality. While he was a Mozilla Fellow, he developed aimyths.org, a website that gathers resources to tackle myths and misconceptions about AI. He has a PhD in Philosophy from KU Leuven in Belgium and was previously a member of the Working Group on Philosophy of Technology at KU Leuven. He is also a member of the External Advisory Board of KU Leuven’s Digital Society Institute.


Katie Shilton

Excavating awareness and power for trustworthy data science

Researchers and data scientists using big, pervasive data about people face a significant challenge: navigating norms and practices for ethical and trustworthy data use. In response, the six-campus PERVADE project has conducted research with data scientists, data subjects, and regulators, and has discovered two entwined trust problems: participant unawareness of much research, and the relationship of social data research to corporate datafication and surveillance. Building on these findings, we have developed a decision support tool for researchers, inspired by research practices in a related but perhaps surprising discipline: ethnography. This talk will introduce PERVADE’s research findings and the resulting decision support tool, and discuss ways that researchers working with pervasive data can incorporate reflection on awareness and power into their research.

Katie Shilton is an associate professor in the College of Information Studies at the University of Maryland, College Park. Her research focuses on technology and data ethics. She is a co-PI of the NSF Institute for Trustworthy Artificial Intelligence in Law & Society (TRAILS), and a co-PI of the Values-Centered Artificial Intelligence (VCAI) initiative at the University of Maryland. She was also recently the PI of the PERVADE project, a multi-campus collaboration focused on big data research ethics. Other projects include improving online content moderation with human-in-the-loop machine learning techniques; analyzing values in audiology technologies and treatment models; and designing experiential data ethics education. She is the founding co-director of the University of Maryland’s undergraduate major in social data science. Katie received a B.A. from Oberlin College, a Master of Library and Information Science from UCLA, and a Ph.D. in Information Studies from UCLA.


Sasha Luccioni

AI and Sustainability: Data, Models and (Broader) Impacts

AI models have a very tangible environmental impact at different stages of their life cycle, from manufacturing the hardware needed to train them to their deployment and usage. However, there is very little data about how their carbon footprint varies across model architectures and sizes. In this talk, I will present work done as part of the BigScience project on measuring the carbon footprint of BLOOM, a 176-billion-parameter open-access language model, as well as complementary work on other models shared via the Hugging Face Hub.
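
As a rough illustration of how this kind of measurement can be instrumented in practice, here is a minimal sketch using the open-source codecarbon package to estimate the operational emissions of a small text-generation model pulled from the Hugging Face Hub. It is only a sketch: the model name and project label are illustrative, a run like this captures energy-related emissions during inference only, and it is not the methodology of the BLOOM study, which also accounted for factors such as hardware manufacturing.

```python
# Illustrative only: pip install codecarbon transformers torch
from codecarbon import EmissionsTracker
from transformers import pipeline

# Track energy use and estimate CO2-equivalent emissions for whatever runs
# between start() and stop(). The project name is just a label for the logs.
tracker = EmissionsTracker(project_name="hub-model-inference-demo")
tracker.start()

# Any workload can go here; as a small stand-in for a 176B-parameter model,
# run a tiny text-generation model from the Hugging Face Hub.
generator = pipeline("text-generation", model="distilgpt2")
result = generator("The carbon footprint of machine learning", max_new_tokens=30)

emissions_kg = tracker.stop()  # estimated emissions for this run, in kg CO2-eq
print(result[0]["generated_text"])
print(f"Estimated emissions: {emissions_kg:.6f} kg CO2-eq")
```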

Sasha Luccioni is a leading researcher in ethical artificial intelligence. Over the last decade, her work has paved the way to a better understanding of the societal and environmental impacts of AI technologies. She is a Research Scientist and Climate Lead at Hugging Face, a Board Member of Women in Machine Learning, and a founding member of Climate Change AI, and she has been called upon by organizations such as the OECD, the United Nations and the NeurIPS conference as an expert in developing norms and best practices for a more sustainable and ethical practice of AI. Her academic research has been published in IEEE, AAAI, ACM, and JMLR conferences and journals.


Tony Ross-Hellauer

LLMs, reproducibility and trust in scholarly work

Recent advances in Large Language Models (LLMs) have captured public attention and raised interest in the potential for such AI technologies to transform workflows across a range of areas, including scientific work. Rich experimentation is already underway to examine their potential in reshaping scientific tasks including information retrieval, data analysis, data synthesis, quality assurance and more. This comes, however, at a time when many disciplines are addressing what has been termed a “reproducibility crisis”, with failed systematic replications, the prevalence of questionable research practices and a lack of transparency casting doubt on the robustness of results. Touching on issues of reproducibility, bias and transparency in LLMs, and engaging with recent exploratory work on use cases for LLMs in science, this talk will examine the potential for LLMs to both help and hinder improvements in reproducibility and trust in scholarly knowledge.

Tony Ross-Hellauer is a leading researcher in Scholarly Communication and Open Science. An information scientist with a background in philosophy, since 2019 he has led the Open and Reproducible Research Group at TU Graz and Know-Center, an interdisciplinary meta-research team investigating a range of issues related to Open Science evaluation, skills, policy, governance, monitoring and infrastructure. He is currently coordinator (PI) of the EC Horizon Europe-funded TIER2 project, investigating the role of epistemic diversity in reproducibility, and a member of the International Advisory Committee of the UK Reproducibility Network.


Brent Mittelstadt

A right to reasonable inferences in the age of AI

This talk examines the problematic status of inferred data in EU data protection law and proposes a way to fix it. A new data protection right, the ‘right to reasonable inferences’, is needed to help close the accountability gap currently posed by AI, machine learning, and other data intensive technologies that can draw invasive inferences from non-intuitive sources. Inferred data can severely damage privacy or reputation and shape the opportunities available to individuals and groups. Despite often being predictive or opinion-based, it nonetheless is increasingly used in important decisions. The proposed right would require ex-ante justification to be given by the data controller to establish whether an inference is reasonable. This disclosure would address (1) why certain data form a normatively acceptable basis from which to draw inferences; (2) why these inferences are relevant and normatively acceptable for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable.

Professor Brent Mittelstadt is Director of Research and Associate Professor at the Oxford Internet Institute, University of Oxford. He leads the Governance of Emerging Technologies (GET) research programme which works across ethics, law, and emerging information technologies. Professor Mittelstadt is the author of foundational works addressing the ethics of algorithms, AI, and Big Data; fairness, accountability, and transparency in machine learning; data protection and non-discrimination law; group privacy; and ethical auditing of automated systems.


Marisa Ponti

In (Citizen) Science We Trust

Citizen science is increasingly recognized as advancing scientific research and social innovation by involving non-professional participants in gathering and analyzing empirical data.

Citizen science projects have been using technology such as online platforms, mobile apps, and sensors to make it easier for members of the public to participate and submit data. Now, the application of artificial intelligence (AI) technologies, including Machine Learning, computer vision, and Natural Language Processing, in citizen science is also growing rapidly. AI approaches are becoming more efficient and complex as they develop, and a number of fields, including astronomy, history and literature, environmental justice, ecology and biodiversity, biology, and neuroimaging, use them in varying combinations with citizen scientists. This can include collecting meaningful data from critical locations or sorting through large datasets to help solve problems ranging from the hyperlocal to the scale of the Universe.

The trustworthiness of data collected by citizens has always been a concern. Nevertheless, some recent research findings suggest that citizen science programs should be supported by European institutions to resolve the credibility crisis of science, research, and evidence-based policy. In this talk, I will ask how we can build trust in (citizen) science. Is it possible to add credibility to science and “experts” by involving the general public? Are we overestimating citizens’ competencies? What impact do AI technologies have on the relationship between the general public and experts in citizen science, as well as the trustworthiness of data?

Marisa Ponti is Associate Professor in Informatics at the Department of Applied IT, University of Gothenburg, Gothenburg, Sweden. Her current research includes machine-human integration in citizen science and the ethical challenges raised by using Artificial Intelligence in education. She is also interested in citizen-generated data and how they might serve the public good by contributing to policymaking and the public sector. She worked on digital transformation at the European Commission Joint Research Centre and was recently appointed a member of the European Commission expert group Innovation Friendly Regulations Advisory Group (IFRAG), which will focus on the use of emerging technologies to support the public sector in improving, optimizing and innovating its operations and service provision. She also recently joined the EU Working Group on the CitiVerse, set up by the European Commission Directorate-General for Communications Networks, Content and Technology (DG CONNECT), to initiate a co-creative process for delivering a pre-standardisation roadmap on the CitiVerse.


Mihalis Kritikos

Research Ethics in Digital Sciences: the case of the EU’s Ethics Appraisal process

The presentation will shed light on the importance of establishing a distinct organizational and conceptual approach to ethics for research projects funded in the area of digital technologies, in order to enable the development of a human-centric, trustworthy and robust digital research ecosystem. To this end, a first set of specialized guidance notes has been produced for Horizon Europe, followed by several other organizational modalities.

Given the novelty of this set of technologies from a research ethics governance perspective, the session will discuss the specific needs for guidance, education, training and the development of expertise. It will also address the challenges associated with the upcoming adoption of the EU Artificial Intelligence Act and its possible impact on the design and implementation of research projects. Particular attention will be given to the work of other international organisations in this field of research governance.

Dr Mihalis Kritikos is a Policy Analyst at the Ethics and Integrity Sector of the European Commission (DG-RTD), working on the ethical development of emerging technologies with a special emphasis on AI Ethics. Before that, he worked at the Scientific Foresight Service of the European Parliament as a legal/ethics advisor on Science and Technology issues (STOA/EPRS), authoring more than 50 publications in the domain of new and emerging technologies and contributing to the drafting of more than 15 European Parliament reports/resolutions in the fields of artificial intelligence, robots, distributed ledger technologies and blockchains, precision farming, gene editing and disruptive innovation. Mihalis is a legal expert in the fields of EU decision-making, legal backcasting, food/environmental law, the responsible governance of science and innovation and the regulatory control of new and emerging risks. He has worked as a Senior Associate in the EU Regulatory and Environment Affairs Department of White & Case, as a Lecturer at several UK universities and as a Lecturer/Project Leader at the European Institute of Public Administration (EIPA). He also taught EU Law and Institutions for several years at the London School of Economics and Political Science (LSE), where he obtained a PhD in Technology Law that earned him the UACES Prize for the Best Thesis in European Studies in Europe.


Katharine Jarmul

Theory to Practice: My journey from responsible AI to privacy technologies

In this talk, I'll recount my personal journey from working on ethics in machine learning to deploying data privacy technologies in practical use cases. Using my story as an example, we'll explore how theory can influence real work, and how theoretical work moves into practice in order to have impact. We'll also address how data privacy affects a wide variety of factors we consider important in ethical AI, and how stronger privacy practices can deliver better awareness and responsibility in the field of data science and machine learning.

Katharine Jarmul is a privacy activist and data scientist whose work and research focuses on privacy and security in data science workflows. She recently authored Practical Data Privacy for O'Reilly and works as a Principal Data Scientist at Thoughtworks. Katharine has held numerous leadership and independent contributor roles at large companies and startups in the US and Germany -- implementing data processing and machine learning systems with privacy and security built in and developing forward-looking, privacy-first data strategy.


Catherine Tessier

Dr. Catherine Tessier is a senior researcher at ONERA, Toulouse, France. Her research focuses on modelling ethical reasoning and on ethical issues related to the use of “autonomous” robots. She is also ONERA’s research integrity and ethics officer. She is a member of the French national ethics committee for digital technologies and a member of the ethics committee of the French ministry of defense. She is also a member of Inria’s Research Ethics Board (Coerle).


Rainer Mühlhoff

The risk of secondary use of trained ML models as a key issue of data ethics and data protection regarding AI

When it comes to data ethics and data protection in the context of machine learning and big data, we should stop focusing on the input stage (training data) and turn our attention to trained models. If the purpose of data protection is to redress power asymmetries between data processors and individuals/societies, trained models are the biggest blind spot in current regulatory projects. I argue that the mere *possession* of a trained model constitutes an enormous aggregation of informational power that should be the target of regulation and control even before the *application* of the model to concrete cases. This is because the model has the potential to be used and reused in different contexts with few legal or technical barriers, even as a result of theft or covert business activities. For example, a disease prediction model created for beneficial purposes in the context of medical research could be re-used by (or resold to) the insurance industry for price discrimination. The current focus of data protection on the input stage distracts from the - arguably much more serious - data protection issue related to trained models and, in practice, leads to a bureaucratic overload that harms the reputation of data protection by opening the door to the denigrating portrayal of data protection as an inhibitor of innovation.

Rainer Mühlhoff, philosopher and mathematician, is Professor of Ethics of Artificial Intelligence at the University of Osnabrück. He researches ethics, data protection and critical social theory in the digital society. In his interdisciplinary work, he brings together philosophy, media studies and computer science to investigate the interplay of technology, power and social change.


Kirstie Whitaker

She completed a PhD in Neuroscience at the University of California, Berkeley in 2012 and joined the Turing Institute as a Turing Research Fellow in 2017 from a postdoctoral fellowship at the University of Cambridge in the Department of Psychiatry. In 2020, she was promoted to Programme Lead for Tools, Practices and Systems, and in 2021 to Programme Director, reflecting the growth of this cross-cutting programme. Kirstie is committed to realising the TPS community’s mission of investing in the people who sustain the open infrastructure ecosystem for data science. She is also the chair of the Turing Institute’s Ethics Advisory Group.


Daniela Tafani

What’s wrong with AI ethics narratives

In an increasing number of areas, judgments and decisions that have major effects on people's lives are now being entrusted to Machine Learning systems. The employment of these predictive optimisation systems inevitably leads to unfair, harmful and absurd outcomes: flaws are not occasional and cannot be prevented by technical interventions. Predictive optimisation systems do not work and violate legally protected rights. Fearing a blanket ban, Big Tech have responded with "AI ethics" narratives. The nonsense of decision-making based on automated statistics is thus presented as a problem of single and isolated biases, amendable by algorithmic fairness, i.e., by technical fulfilment.

“AI ethics” narratives are based on imposture and mystification:  on a false narrative – which exploits three fundamental features of magical thinking – about what machine learning systems are and are not capable of actually doing, and on a misconception of ethics. These narratives achieve the goal, cherished by public and private oligarchies, of neutralizing social conflict by replacing political struggle with the promise of technology.

Daniela Tafani is a fixed-term researcher in Political Philosophy at the University of Pisa. She has been a research fellow at the University of Bologna. She is vice-president of the Italian Society of Kantian Studies (Società Italiana di Studi Kantiani). She is a member of the editorial board of the journal 'Bollettino telematico di filosofia politica' and of the Scientific Committee of the journal 'Zeitschrift für Rechtsphilosophie'. She is a member of the Italian Society for the Ethics of Artificial Intelligence. She has worked on Kantian moral philosophy (Virtù e felicità in Kant, Firenze, Leo S. Olschki, 2006), the philosophy of law in German idealism (Beiträge zur Rechtsphilosophie des deutschen Idealismus, Berlin, Lit Verlag, 2011), the relationship between ethics and law in the 20th century (Distinguere uno Stato da una banda di ladri. Etica e diritto nel XX secolo, Bologna, Il Mulino, 2014), and contemporary libertarian paternalisms. Her current research interests include Kant’s moral philosophy and the ethics and politics of artificial intelligence.


Romain Couillet

Why and how to dismantle the digital world?

It is well demonstrated today that the so-called "digital transition" is not a tool for the "ecological transition" but, quite the opposite, a major enabler of the Great Acceleration (and thus of societal and ecological collapse). Yet Modernity seems in complete denial of this catalytic harmfulness of the digital world, mostly through erroneous arguments of necessity (for military or health reasons) or fatalism ("there is no alternative"). In this talk, I will restate the obvious: the socio-technical implications and the multiple lock-ins intensified by the digital acceleration are driving us all (human beings and other species) towards a rapid "dead end" in the literal sense. I will then discuss possible ways to accompany the "heritage and closure" of the digital world alongside the emergence of "convivial tools" and "vernacular knowledge" (borrowing these notions from Ivan Illich). In a nutshell, and to put it bluntly: where are we going, and where should we spend our energy?

Romain Couillet received his MSc in Mobile Communications from the Eurecom Institute and his MSc in Communication Systems from Telecom ParisTech, France, in 2007. From 2007 to 2010, he worked with ST-Ericsson as an Algorithm Development Engineer on the Long Term Evolution Advanced project, during which he prepared his PhD with Supelec, France, graduating in November 2010. From 2010 to 2020 he worked as an Assistant and then Full Professor at CentraleSupélec, University of Paris-Saclay. He is currently a full professor at the University Grenoble Alpes, France. His academic expertise is in random matrix theory applied to statistics, machine learning, signal processing, and wireless communications. He is the recipient of the 2021 IEEE/SEE Glavieux award, the 2013 French CNRS Bronze Medal in the section "science of information and its interactions", the 2013 IEEE ComSoc Outstanding Young Researcher Award (EMEA Region), and the 2011 EEA/GdR ISIS/GRETSI best PhD thesis award. He decided in 2021 to stop working in applied mathematics and to dedicate his research time and efforts to redesigning research and teaching in preparation for the coming post-industrial era: education for the Anthropocene, the development of low-tech, and the ecology of dismantlement.


Alexei Grinbaum

Guidelines for AI Ethics in current EU research projects

I’ll review the ongoing debate on including specific AI ethics guidelines in the European ethics evaluation scheme. Based on the Horizon Europe TechEthos and iRECS projects, I’ll discuss possible improvements, including sectorial ethics guidelines (e.g., for AI in healthcare or eXtended reality) and the necessary adaptation to the risk management system of the AI Act. I’ll also reflect on the benefits and limitations of the “ethics readiness levels” designed in the MultiRATE project.

Alexei Grinbaum is a senior research scientist at CEA-Saclay with a background in quantum information theory. He writes on ethical questions of emerging technologies, including nanotechnology, synthetic biology, robotics and AI. Grinbaum is the chair of the CEA Operational Ethics Committee for Digital Technologies and a member of the French National Digital Ethics Committee (CNPEN). He contributes to several EU projects and serves as a central ethics expert to the European Commission. His books include "Mécanique des étreintes" (2014), "Les robots et le mal" (2019), and "Parole de machines" (2023).


Clara Neppel

AI Governance: the role of regulation, standardization, and certification

Assuring that AI systems and services operate in accordance with agreed norms and principles is essential to foster trust and facilitate their adoption. Ethical assurance requires a global ecosystem, where organizations not only commit to upholding human values, dignity, and well-being, but are also able to demonstrate this when required by the specific context in which they operate. The presentation will focus on possible governance frameworks, including both regulatory and non-regulatory measures.

Dr Clara Neppel is the Senior Director of European Business Operations at the IEEE Technology Centre GmbH in Vienna, where she is responsible for IEEE's activities and presence in Europe. IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. She holds a PhD in Computer Science from the Technical University of Munich and a master’s degree in Intellectual Property Law and Management from the University of Strasbourg.

Dr. Neppel is also an Independent Director of the Supervisory Board of European Institute of Technology Digital, Co-Chair of the Advisory Board in the DAAD program "Konrad Zuse Schools of Excellence in Artificial Intelligence", Member of the Scientific-Industrial Advisory Board of Research Studios Austria FSG Strategy Board, and of the Independent Advisory Board of the UK RI Centre for Doctoral Training in Accountable, Responsible and Transparent AI.


Gabrielle Samuel

Reimagining research ethics to include environmental sustainability

Research that uses data-driven and artificial intelligence (AI) methods is associated with adverse environmental impacts. These include the carbon dioxide emissions associated with the energy required to generate and process data, the increasing resource use associated with the ever-growing need for minerals to manufacture technological components, and the electronic waste (e-waste) generated by the constant disposal of digital devices. The detrimental health impacts of unsustainable mineral extraction and e-waste practices have been well documented, as have the health-related consequences of climate change. Drawing on the concept of environmental justice, I argue that researchers have a moral obligation to consider the environmental impacts associated with their research endeavours as part of ethics best practice. To do this, I first describe the traditional approach to research ethics, which often relies on individualised notions of risk. I then argue that we need to broaden this notion of individual risk to consider issues associated with environmental sustainability. I finish by discussing some of the tensions that emerge when we consider this approach, and the challenges that need to be addressed moving forward.

Gabrielle (Gabby) is a Lecturer in Environmental Justice and Health at the Department of Global Health & Social Medicine, King’s College London, UK. She’s trained in sociology and ethics with a background in the life sciences and is particularly interested in research ethics and the environmental impacts of digital technologies. In her work, she draws on concepts of sustainability, justice, power, equity, and responsibility. She is principal investigator on a Wellcome project exploring the ethical and social issues associated with the environmental impacts of digital health research, as well as co-investigator on the EPSRC PARIS-DE project, which is co-designing a digital sustainability framework. She is funded by the MRC through various projects that explore environmental sustainability and digital technologies/health.


Kirstie Whitaker

Operationalising the SAFE-D principles for safe, ethical and open source AI

The SAFE-D principles (Leslie, 2019) were developed at the Alan Turing Institute, the UK's national institute for data science and artificial intelligence. They have been operationalised within the Turing's Research Ethics (TREx) institutional review process in a close collaboration between Public Policy researchers, open infrastructure practitioners in the Tools, Practices and Systems research programme, and our team of specialists in legal, governance and data protection in the Turing's Office of the General Counsel. We ask all research projects to share how their team considers the safety and sustainability, accountability, fairness and non-discrimination, and explainability and transparency of their work and outputs, including a consideration of data quality, integrity, protection and privacy. In this presentation Kirstie will share the TREx team's experiences embedding the principles at an institutional level alongside her broader advocacy for open research practices. Audience members will be invited to join the 800+ members of the open source, open collaboration, and community-driven Turing Way community. Developed on GitHub under open source licences, our shared goal is to provide all the information that researchers and data scientists in academia, industry and the public sector need to ensure that the projects they work on are easy to audit, reproduce and reuse.

Dr Kirstie Whitaker is a passionate advocate for making science "open for all" by promoting equity and inclusion for people from diverse backgrounds. She leads the Tools, Practices and Systems research programme at The Alan Turing Institute, the UK's national institute for data science and artificial intelligence. Kirstie founded and co-leads The Turing Way, an openly developed educational resource that enables researchers and citizen scientists across government, industry, academia and third sector organisations to embed open source practices into their work. She is a lead investigator of the £3 million AI for Multiple Long Term Conditions Research Support Facility, co-lead investigator for the AutSPACEs citizen science platform, and co-investigator of the Turing's Data Safe Haven projects. Kirstie is the chair of the Turing's Research Ethics Panel and leads a diverse team of research application managers and research community managers who build open source infrastructure to empower a global, decentralised network of people who connect data with domain experts. She holds a BSc in Physics from the University of Bristol, an MSc in Medical Physics from the University of British Columbia, and a PhD in Neuroscience from the University of California at Berkeley. Kirstie joined the Turing as a Turing Research Fellow in 2017 after a postdoctoral fellowship at the University of Cambridge in the Department of Psychiatry. She is a Fulbright scholarship alumna and was a 2016/17 Mozilla Fellow for Science.


Arlindo Oliveira

Artificial Consciousness: unreachable dream or foreseeable future?

Throughout the annals of human history, consciousness has been a subject of deep contemplation, shrouded in mystery and embroiled in controversy. It is a phenomenon both intimately familiar and astoundingly elusive to explain. Despite the multitude of theories that endeavor to elucidate the nature of consciousness – with a recent survey identifying no fewer than 29 actively researched propositions – certain paradigms have gained traction in recent decades. Among them are the Integrated Information Theory (IIT), Global Workspace and Global Neuronal Workspace Theories (GWT/GNW), and the Attention Schema Theory (AST). Complementing these is a class of more computationally oriented models, exemplified by the Conscious Turing Machine (CTM). Some of these theories align with our understanding of dual-process theories governing intelligent behavior, hinting at the prospect of a clearer comprehension of unconscious and conscious information processing within the human brain, and even the potential for replication in artificial systems. This presentation will explore the contemporary landscape of artificial intelligence architectures and their alignment with attributes commonly associated with conscious entities. Moreover, it will delve into the intriguing question of whether Large Language Models (LLMs) possess or can evolve to possess consciousness in the future, and the ramifications such an occurrence may entail.

Arlindo Oliveira was born in Angola and lived in Mozambique, Portugal, Switzerland, California, Massachusetts, and Japan. He obtained his BSc and MSc degrees from Instituto Superior Técnico (IST) and his PhD degree from the University of California at Berkeley. He is a distinguished professor of IST, president of the INESC group, member of the board of Caixa Geral de Depósitos, researcher at INESC-ID, and member of the National Council for Science, Technology and Innovation and of the Advisory Board of the Science and Technology Options Assessment (STOA) Panel of the European Parliament. He authored four books, translated into different languages, and hundreds of scientific and newspaper articles. He has been on the boards of several companies and institutions and is a past president of IST, of the Portuguese Association for Artificial Intelligence, and of INESC-ID. He was the head of the Portuguese node of the European Network for Biological Data (ELIXIR), visiting professor at MIT and at the University of Tokyo, and a researcher at CERN, INESC, Cadence Research Laboratories and Electronics Research Labs of UC Berkeley. He is a member of the Portuguese Academy of Engineering and a senior member of IEEE. He received several prizes and distinctions, including the Technical University of Lisbon / Santander prize for excellence in research, in 2009.


David Leslie

Scientific Discovery and Research Integrity in the Age of Large Language Models

The past few years have seen the rapid emergence of LLM applications that promise to accelerate scientific productivity and enhance the efficiency of research practice. These systems can carry out brute force text mining, “knowledge discovery”, and information synthesis and streamline scientific writing processes through the generation of literature reviews, article summaries, and academic papers. More recently, researchers have stitched together multiple transformer-based language models to create so-called “Intelligent Agents” which can perform “automated experiments” by searching the internet, combing through existing documentation, and planning and executing laboratory activities. In this talk, I explore some of the limitations of this new set of AI-supported affordances and reflect on their ethical implications for science as a community of practice. I argue that, amidst growing swells of magical thinking among scientists about the take-off of “artificial general intelligence” or the emergence of autonomous, Nobel Prize-winning “AI scientists,” researchers need to take a conceptually sound, circumspect, and sober approach to understanding the limitations of these technologies. This involves understanding LLMs, in a deflationary way, as software-based sequence predictors. These systems produce outputs by drawing on the underlying statistical distribution of previously generated text rather than by accessing the evolving space of reasons, theories, and justifications in which warm-blooded scientific discovery takes place. Understanding this limitation, I claim, can empower scientists both to better recognise their own exceptional capacities for collaborative world-making, theorisation, interpretation, and truth and to better understand that contexts of scientific discovery are principal sites for human empowerment and for the expression of democratic agency and creativity.

David Leslie is the Director of Ethics and Responsible Innovation Research at The Alan Turing Institute and Professor of Ethics, Technology and Society at Queen Mary University of London. He previously taught at Princeton’s University Center for Human Values, Yale’s programme in Ethics, Politics and Economics and at Harvard’s Committee on Degrees in Social Studies, where he received over a dozen teaching awards including the 2014 Stanley Hoffman Prize for Teaching Excellence. David is the author of the UK Government’s official guidance on the responsible design and implementation of AI systems in the public sector, Understanding artificial intelligence ethics and safety (2019) and a principal co-author of Explaining decisions made with AI (2020), a co-badged guidance on AI explainability published by the UK’s Information Commissioner’s Office and The Alan Turing Institute. After serving as an elected member of the Bureau of the Council of Europe’s (CoE) Ad Hoc Committee on Artificial Intelligence (CAHAI) (2021-2022), he was appointed, in 2022, as Specialist Advisor to the CoE’s Committee on AI where he has led the writing of the zero draft of its Human Rights, Democracy and the Rule of Law Impact Assessment for AI, which will accompany its forthcoming AI Convention. He also serves on UNESCO’s High-Level Expert Group steering the implementation of its Recommendation on the Ethics of Artificial Intelligence. David’s recent publications include ‘Does the sun rise for ChatGPT? Scientific discovery in the age of generative AI’ published in the journal AI & Ethics, ‘The Ethics of Computational Social Science’, (2023) written for the European Commission Joint Research Centre/Centre for Advanced Studies, ‘Artificial intelligence and the heritage of democratic equity’ (2022) published by The Venice Commission of the Council of Europe, the HDSR articles “Tackling COVID-19 through responsible AI innovation: Five steps in the right direction” (2020) and “The arc of the data scientific universe” (2021) as well as Understanding bias in facial recognition technologies (2020), an explainer published to support a BBC investigative journalism piece that won the 2021 Royal Statistical Society Award for Excellence in Investigative Journalism.


Jason Pridmore

Resilience Amidst Complexity: Navigating Conflicts of Interest in the Digital Age

This paper defines a typology of conflicts of interest within contemporary research and shows how these intersect with new digital practices. Although there is significant evidence of cases in which conflicts of interest arise in research, defining the differences between these conflicts of interest is important for creating the means to mitigate them. We examine violations of research integrity, shortcomings in the peer review process and the potential for data misuse in turn, and suggest specific mitigation strategies to overcome the concerns that arise within these types. These are then evaluated against new forms of technology development, most specifically in relation to AI, examining how these forms of conflict of interest are further problematised and potentially resolved in a digital context.

Jason Pridmore is the Vice Dean of Education at the Erasmus School of History, Culture, and Communication. He is a participant in a number of European research projects and is the coordinator of the COALESCE project focused on Science Communication and the forthcoming SEISMEC project focused on Human Centric Industry.

Joint work with: Charlotte Bruns, a postdoctoral researcher and lecturer at Erasmus University Rotterdam's School of History, Culture and Communication, with expertise in visual sociology and the theory and history of visual media, focusing on visual practices in science communication; and Simone Driessen, an assistant professor in Media and Popular Culture at Erasmus University Rotterdam, in the Erasmus School of History, Culture and Communication, whose research focuses on topics such as participatory practices of fandom, forms of cancel culture, and new and emerging forms of digital media use.