A detailed list of speakers at Beyond Compliance 2023 is under construction on this page.
The EU’s AI Act: (self-)regulation, risk and public accountability
The EU is leading the charge on regulating AI. After years of ethics guidelines and self-regulation, it looks like we’ll finally have regulation to rein in the harms of AI systems. However, the regulation in question, the Artificial Intelligence Act, is essentially product safety regulation and will take a ‘risk-based approach’ to regulating AI, thus leaving significant gaps in terms of how it protects people’s rights. In this talk, I will discuss the shortcomings of the AI Act’s risk-based approach, how improvements are being introduced during negotiations, the gaps that will remain, and how it could impact people’s rights as well as ongoing research and development in the EU.
Daniel is a Senior Policy Analyst at Access Now’s Brussels office. His work focuses on the impact of emerging technologies on digital rights, with a particular focus on artificial intelligence (AI), facial recognition, biometrics, and augmented and virtual reality. While he was a Mozilla Fellow, he developed aimyths.org, a website that gathers resources to tackle myths and misconceptions about AI. He has a PhD in Philosophy from KU Leuven in Belgium and was previously a member of the Working Group on Philosophy of Technology at KU Leuven. He is also a member of the External Advisory Board of KU Leuven’s Digital Society Institute.
Excavating awareness and power for trustworthy data science
Researchers and data scientists using big, pervasive data about people face a significant challenge: navigating norms and practices for ethical and trustworthy data use. In response, the six-campus PERVADE project has conducted research with data scientists, data subjects, and regulators, and has discovered two entwined trust problems: participant unawareness of much research, and the relationship of social data research to corporate datafication and surveillance. In response, we have developed a decision support tool for researchers, inspired by research practices in a related but perhaps surprising research discipline: ethnography. This talk will introduce PERVADE’s research findings and the resulting decision support tool, and discuss ways that researchers working with pervasive data can incorporate reflection on awareness and power into their research.
Katie Shilton is an associate professor in the College of Information Studies at the University of Maryland, College Park. Her research focuses on technology and data ethics. She is a co-PI of the NSF Institute for Trustworthy Artificial Intelligence in Law & Society (TRAILS), and a co-PI of the Values-Centered Artificial Intelligence (VCAI) initiative at the University of Maryland. She was also recently the PI of the PERVADE project, a multi-campus collaboration focused on big data research ethics. Other projects include improving online content moderation with human-in-the-loop machine learning techniques; analyzing values in audiology technologies and treatment models; and designing experiential data ethics education. She is the founding co-director of the University of Maryland’s undergraduate major in social data science. Katie received a B.A. from Oberlin College, a Master of Library and Information Science from UCLA, and a Ph.D. in Information Studies from UCLA.
AI and Sustainability: Data, Models and (Broader) Impacts
AI models have a very tangible environmental impact at every stage of their life cycle, from manufacturing the hardware needed to train them to their deployment and usage. However, there is very little data about how their carbon footprint varies with different model architectures and sizes. In this talk, I will present work carried out as part of the BigScience project to measure the carbon footprint of BLOOM, a 176-billion-parameter open-access language model, as well as complementary work on other models shared via the Hugging Face Hub.
Sasha Luccioni is a leading researcher in ethical artificial intelligence. Over the last decade, her work has paved the way to a better understanding of the societal and environmental impacts of AI technologies. She is a Research Scientist and Climate Lead at Hugging Face, a Board Member of Women in Machine Learning, and a founding member of Climate Change AI. She has been called upon by organizations such as the OECD, the United Nations, and the NeurIPS conference as an expert in developing norms and best practices for a more sustainable and ethical practice of AI. Her academic research has been published in conferences and journals such as the IEEE, AAAI, the ACM, and JMLR.
LLMs, reproducibility and trust in scholarly work
Tony Ross-Hellauer is an expert in Scholarly Communication and Open Science. He is an information scientist (PhD, University of Glasgow) with a background in philosophy. Since 2019 he has led the Open and Reproducible Research Group at TU Graz, an interdisciplinary meta-research team investigating a range of issues related to open science evaluation, skills, policy, governance, monitoring and infrastructure. He was formerly Editor-in-Chief of the OA journal Publications, Scientific Manager for the EC Open Science infrastructure OpenAIRE, and co-founded the Transpose database of peer review and preprint policies. He is coordinator (PI) of the new EC-funded project TIER2, investigating the role of epistemic diversity in reproducibility and also led the EC-funded ON-MERRIT project, researching issues of equity in Open Science.
A right to reasonable inferences in the age of AI
This talk examines the problematic status of inferred data in EU data protection law and proposes a way to fix it. A new data protection right, the ‘right to reasonable inferences’, is needed to help close the accountability gap currently posed by AI, machine learning, and other data intensive technologies that can draw invasive inferences from non-intuitive sources. Inferred data can severely damage privacy or reputation and shape the opportunities available to individuals and groups. Despite often being predictive or opinion-based, it nonetheless is increasingly used in important decisions. The proposed right would require ex-ante justification to be given by the data controller to establish whether an inference is reasonable. This disclosure would address (1) why certain data form a normatively acceptable basis from which to draw inferences; (2) why these inferences are relevant and normatively acceptable for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable.
Professor Brent Mittelstadt is Director of Research and Associate Professor at the Oxford Internet Institute, University of Oxford. He leads the Governance of Emerging Technologies (GET) research programme which works across ethics, law, and emerging information technologies. Professor Mittelstadt is the author of foundational works addressing the ethics of algorithms, AI, and Big Data; fairness, accountability, and transparency in machine learning; data protection and non-discrimination law; group privacy; and ethical auditing of automated systems.
In (Citizen) Science We Trust
Citizen science is increasingly recognized as advancing scientific research and social innovation by involving non-professional participants in gathering and analyzing empirical data.
Citizen science projects have been using technology such as online platforms, mobile apps, and sensors to make it easier for members of the public to participate and submit data. Now, the application of artificial intelligence (AI) technologies, including Machine Learning, computer vision, and Natural Language Processing, in citizen science is also growing rapidly. AI approaches are becoming more efficient and complex as they develop, and a number of fields, including astronomy, history and literature, environmental justice, ecology and biodiversity, biology, and neuroimaging, use them in varying combinations with citizen scientists. This can include collecting meaningful data from critical locations or sorting through large datasets to help solve problems ranging from the hyperlocal to the scale of the Universe. The trustworthiness of data collected by citizens has always been a concern. Nevertheless, some recent research findings suggest that citizen science programs should be supported by European institutions to resolve the credibility crisis of science, research, and evidence-based policy. In this talk, I will ask how we can build trust in (citizen) science. Is it possible to add credibility to science and “experts” by involving the general public? Are we overestimating citizens’ competencies? What impact do AI technologies have on the relationship between the general public and experts in citizen science, as well as the trustworthiness of data?
Marisa Ponti is Associate Professor in Informatics in the Department of Applied IT at the University of Gothenburg, Gothenburg, Sweden. Her current research includes machine–human integration in citizen science and the ethical challenges raised by using artificial intelligence in education. She is also interested in citizen-generated data and how they might serve the public good by contributing to policymaking and the public sector. She worked on digital transformation at the European Commission Joint Research Centre, and she was recently appointed a member of the European Commission expert group known as the Innovation Friendly Regulations Advisory Group (IFRAG). The expert group will focus on the use of emerging technologies in support of the public sector to improve, optimize, and innovate its operations and service provision. She also recently joined the Working Group Citiverse, set up by the European Commission Directorate-General for Communications Networks, Content and Technology (DG Connect) to initiate a co-creative process for delivering a pre-standardisation roadmap on the CitiVerse.
Research Ethics in Digital Sciences: the case of the EU’s Ethics Appraisal process
The presentation will shed light on the importance of establishing a distinct organizational and conceptual approach to ethics for research projects funded in the area of digital technologies, in order to enable the development of a human-centric, trustworthy, and robust digital research ecosystem. To this end, a first set of specialized guidance notes has been produced for Horizon Europe, followed by several other organizational modalities.
Given the novelty of this set of technologies from a research ethics governance perspective, the session will discuss the particular needs for guidance, education, training and development of expertise. Furthermore, the challenges that are associated with the upcoming adoption of the EU Artificial Intelligence Act and its possible impact upon the design and implementation of research projects will be discussed. Particular attention will be given to the work of other international organisations in this particular field of research governance.
Dr Mihalis Kritikos is a Policy Analyst at the Ethics and Integrity Sector of the European Commission (DG-RTD), working on the ethical development of emerging technologies with a special emphasis on AI ethics. Before that, he worked at the Scientific Foresight Service of the European Parliament as a legal/ethics advisor on science and technology issues (STOA/EPRS), authoring more than 50 publications in the domain of new and emerging technologies and contributing to the drafting of more than 15 European Parliament reports and resolutions in the fields of artificial intelligence, robots, distributed ledger technologies and blockchains, precision farming, gene editing, and disruptive innovation. Mihalis is a legal expert in the fields of EU decision-making, legal backcasting, food/environmental law, the responsible governance of science and innovation, and the regulatory control of new and emerging risks. He has worked as a Senior Associate in the EU Regulatory and Environment Affairs Department of White & Case, as a lecturer at several UK universities, and as a Lecturer/Project Leader at the European Institute of Public Administration (EIPA). He also taught EU law and institutions for several years at the London School of Economics and Political Science (LSE), where he obtained a PhD in Technology Law that earned him the UACES Prize for the Best Thesis in European Studies in Europe.
Theory to Practice: My journey from responsible AI to privacy technologies
In this talk, I'll recount my personal journey from working on ethics in machine learning to deploying data privacy technologies in practical use cases. Using my story as an example, we'll explore how theory can influence real work, and how theoretical work moves into practice in order to have impact. We'll also address how data privacy affects a wide variety of factors we consider important in ethical AI, and how stronger privacy practices can foster better awareness and responsibility in the field of data science and machine learning.
Katharine Jarmul is a privacy activist and data scientist whose work and research focus on privacy and security in data science workflows. She recently authored Practical Data Privacy for O'Reilly and works as a Principal Data Scientist at Thoughtworks. Katharine has held numerous leadership and independent contributor roles at large companies and startups in the US and Germany, implementing data processing and machine learning systems with privacy and security built in and developing forward-looking, privacy-first data strategy.
Dr. Catherine Tessier is a senior researcher at ONERA, Toulouse, France. Her research focuses on modelling ethical reasoning and on ethical issues related to the use of “autonomous” robots. She is also ONERA’s research integrity and ethics officer. She is a member of the French national ethics committee for digital technologies and a member of the ethics committee of the French ministry of defense. She is also a member of Inria’s Research Ethics Board (Coerle).
I’m a philosophy professor at the University of Osnabrück, Germany. My research focuses on the societal implications of artificial intelligence and digital media.
Kirstie completed a PhD in Neuroscience at the University of California, Berkeley in 2012 and joined the Turing Institute as a Turing Research Fellow in 2017 from a postdoctoral fellowship in the Department of Psychiatry at the University of Cambridge. In 2020, she was promoted to Programme Lead for Tools, Practices and Systems, and in 2021 to Programme Director, reflecting the growth of this cross-cutting programme. Kirstie is committed to realising the TPS community's mission of investing in the people who sustain the open infrastructure ecosystem for data science. She is also the chair of the Turing Institute's Ethics Advisory Group.
Daniela Tafani is a fixed-term researcher of Political Philosophy at the University of Pisa. She has been a research fellow at the University of Bologna. She is vice-president of the Italian Society of Kantian Studies (Società Italiana di Studi Kantiani). She is a member of the editorial board of the journal 'Bollettino telematico di filosofia politica' and of the Scientific Committee of the journal 'Zeitschrift für Rechtsphilosophie'. She is a member of the Italian Society for the Ethics of Artificial Intelligence. She has worked on Kantian moral philosophy (Virtù e felicità in Kant, Firenze, Leo S. Olschki, 2006), the philosophy of law in German idealism (Beiträge zur Rechtsphilosophie des deutschen Idealismus, Berlin, Lit Verlag, 2011), the relationship between ethics and law in the 20th century (Distinguere uno Stato da una banda di ladri. Etica e diritto nel XX secolo, Bologna, Il Mulino, 2014), and contemporary libertarian paternalisms. Her current research interests include Kant's moral philosophy and the ethics and politics of artificial intelligence.