Beyond Compliance 2024 - Speakers

Last Updated: 14 October 2024


 

Julian Nida-Rümelin - LMU Munich and Humanistische Hochschule Berlin

Beyond Compliance: Digital Humanism

Compliance is necessary, but not sufficient. Digital transformation is accompanied by an AI ideology that endangers both the humanistic essence of democracy and technological progress. The counterpart is Digital Humanism, which defends the human condition against transhumanist transformations and animistic regressions. Humanism in ethics and politics strives to extend human authorship through education and social policy. Digitization changes the technological conditions of human practice, but it does not transform humans into cyborgs or establish machines as persons. Digital Humanism rejects transhumanist and animistic perspectives alike; it rejects the idea of homo deus, the human god who creates e-persons, intended as friends or, unintended, as enemies.
In my talk I will outline the basic ideas of Digital Humanism and draw some ethical and political conclusions.

https://julian-nida-ruemelin.com/en/


 

Milad Doueihi

Beyond Intelligence: Imaginative Computing. A Minority Report.

From the Dartmouth Summer Proposal to its most recent incarnation under the guise of generative models, computation has been caught in a trap that has shaped both its history and its reception (from the various schools of AI to the evolution of computational ethics, to say nothing of the proliferation of regulatory efforts…), a history grounded in a comparative model that supposedly informs our understanding and representations of intelligence. But what if that is precisely the source of the problem? What if the roads not taken (full formal learning models and their potential impact on cultural transmission in general; "imaginative thinking", to quote the Dartmouth Proposal [Paragraph 7], instead of intelligence; the avoidance of ethics as a potential answer or solution; the quasi-religious forms of belief attached to the current model; etc.) point to more productive and less destructive paths? A minority view for sure, but one that, despite appearing a futile effort, calls for abandoning Intelligence and opting for more realistic and manageable alternatives.

Milad Doueihi (retired). Forthcoming: Les maîtres voraces de l’intelligence (Seuil, 2025), La rage secrète de l’étranger (Seuil) and Un vocabulaire des institutions computationnelles. Hommage à Émile Benveniste (MK Éditions, 2025).


 

Ferran Argelaguet - Inria, France

Ethical Considerations of Social Interactions in the Metaverse

META-TOO is a Horizon Europe project that aims to address gender-based inappropriate social interactions in the Metaverse by integrating neuroscience, psychology, computer science, and ethics. The project investigates how users perceive and manage virtual harassment in social VR environments, focusing on avatar characteristics, social contexts, and environmental factors. It also explores the role of perspective-taking and bystander behavior in mitigating harassment. META-TOO raises significant ethical challenges, including concerns about participant exposure, cultural differences, data privacy, and the potential for unintended consequences. This talk will discuss these ethical issues and how the project will tackle them.

Ferran Argelaguet is a research scientist (CRCN) in the Hybrid team at IRISA/Inria Rennes. He received his PhD in Computer Science from the Universitat Politècnica de Catalunya in 2011. His research is devoted to the field of 3D User Interfaces (3DUI), a multidisciplinary field involving Virtual Reality, Human-Computer Interaction, Computer Graphics, Human Factors, Ergonomics and Human Perception. His work is structured along three major research axes: understanding human perception in virtual reality systems, improving VR interaction methods by leveraging human perceptual and motor constraints, and enriching VR interaction by exploiting users' mental and cognitive states.

https://team.inria.fr/hybrid/author/fargelag


 

Marianna Capasso - Utrecht University

Algorithmic Discrimination in Hiring: A Cross-Cultural Perspective

There are over 250 Artificial Intelligence (AI) tools for HR on the market. Algorithmic hiring technologies include algorithms that extract information from CVs; video interviews for screening candidates; search, ranking, and recommendation algorithms; and many others. While algorithmic hiring might increase recruitment efficiency, reducing the costs and time of sourcing and screening job applicants, it might also perpetuate discrimination and systematic disadvantages for marginalised and vulnerable groups in society. The recent case of the Amazon CV-screening system is exemplary: the system was found to be trained on biased historical data, which led to a preference for men because, in the past, the company had hired more men than women as software engineers. But what exactly makes (the use of) an algorithm discriminatory? The nature of discrimination is controversial: there are many forms of discrimination, and it is not clear whether they are all morally wrong, nor why they are morally problematic and unfair. When it comes to algorithmic discrimination, and to the question of what counts as 'high-quality' data for improving the diversity and variability of training data, things are even more complicated. This talk aims to clarify the current state of research on these points and to provide a cross-cultural digital ethics perspective on the question of algorithmic discrimination in hiring.
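
As an aside, the mechanism behind the Amazon example can be made concrete with a small synthetic sketch. The Python toy below is purely illustrative (it is not FINDHR code; the data, groups and thresholds are all fabricated): a screening rule fitted to biased historical decisions ends up treating two equally skilled candidates differently.

from collections import defaultdict
import random

random.seed(0)

# Synthetic "historical" hiring data: skill is what should matter,
# but past decisions systematically favoured group A (a made-up label).
def past_decision(skill, group):
    bias = 0.3 if group == "A" else -0.3   # encoded prejudice
    return skill + bias > 0.5

candidates = [(random.random(), random.choice("AB")) for _ in range(10000)]
outcomes = [past_decision(s, g) for s, g in candidates]

# "Train" a naive screening rule: recommend hiring whenever the past
# hire rate in the candidate's (group, skill) bucket exceeded 50%.
counts = defaultdict(lambda: [0, 0])
for (s, g), hired in zip(candidates, outcomes):
    counts[(g, round(s, 1))][0] += hired
    counts[(g, round(s, 1))][1] += 1

def screen(skill, group):
    yes, total = counts[(group, round(skill, 1))]
    return total > 0 and yes / total > 0.5

# Two candidates with identical skill, different outcomes:
print(screen(0.4, "A"), screen(0.4, "B"))   # prints: True False

The rule never treats group membership as a protected attribute, yet it discriminates, because the training labels encode past prejudice; this is the kind of indirect, statistical discrimination the talk examines.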

Marianna Capasso (she/her) is a postdoctoral researcher in AI Ethics at Utrecht University. At UU she works in the intercultural digital ethics team of the EU-funded FINDHR project, which deals with intersectional discrimination in algorithmic hiring. Prior to this, Marianna was a postdoctoral researcher at the Erasmus School of Philosophy of Erasmus University Rotterdam, and a postdoctoral researcher at the Sant'Anna School of Advanced Studies in Pisa, where she obtained her PhD in Human Rights and Global Politics in 2022. Her main research interests lie at the intersection of philosophy of technology and political philosophy, with a special focus on topics such as Responsibility with AI, Meaningful Human Control, and AI and the Future of Work.

https://www.uu.nl/staff/MCapasso


 

Rockwell F. Clancy - Virginia Tech

Towards a culturally responsive, psychologically realist approach to global AI (artificial intelligence) ethics

Although global organizations and researchers have worked on the development and implementation of AI, market concentration has occurred in only a few regulatory jurisdictions. As such, it is unclear whether the ethical perspectives of global populations are adequately addressed in AI technologies, research, and policies to date. Addressing these gaps, this article claims AI ethics initiatives have tended to be (1) “culturally biased,” based on narrow ethical values, principles, and frameworks, poorly representative of global populations and (2) “psychologically irrealist,” based on mistaken assumptions regarding how mechanisms of normative thought and behaviors work. Effective AI depends on responding to different ethical perspectives, but frameworks for ensuring ethical AI remain largely disconnected from empirical insights about and methods for exploring ethics empirically and culturally. A truly global approach to AI ethics depends on understanding how people actually think about issues of right and wrong and behave (psychologically realist), and how culture affects these judgments and behaviors (culturally responsive). Neither can approaches to AI ethics be culturally responsive without being psychologically realist, we claim, nor can they be psychologically realist without being culturally responsive. This paper will sketch the motivations for and nature of a psychologically realist, culturally responsive approach to global AI ethics.

Rockwell Clancy conducts research at the intersection of technology ethics, moral psychology, and China studies. He explores how culture and education affect moral judgments, the causes of unethical behaviors, and what can be done to ensure more ethical behaviors regarding technology. Central to his work are insights from and methodologies associated with the psychological sciences and digital humanities. Rockwell is a Research Scientist in the Department of Engineering Education at Virginia Tech and Chair of the Ethics Division of the American Society for Engineering Education. Before moving to Virginia, he was a Research Assistant Professor in the Department of Humanities, Arts, and Social Sciences at the Colorado School of Mines, a Lecturer in the Department of Values, Technology, and Innovation at Delft University of Technology, and an Associate Teaching Professor at the University of Michigan-Shanghai Jiao Tong University Joint Institute. Rockwell holds a PhD from Purdue University, an MA from Katholieke Universiteit Leuven, and a BA from Fordham University.

http://www.rockwellfclancy.com/index.html


 

Michael Fisher - University of Manchester, UK

Responsible Autonomy

I am going to briefly talk about several dimensions of “responsibility” relating to autonomous systems.
We are increasingly developing "autonomous" systems that make their own decisions, and take their own actions, without direct human oversight. These systems often involve AI and/or robotics. However, we must ensure that the independent decision-making in these autonomous systems is guaranteed to be safe, ethical, and reliable. Too much development, and even deployment, fails to guarantee these aspects. In addition, if we (users) are to trust autonomous systems, we need them to be constructed so that their behaviour and decisions are transparent and, crucially, their reasons for making those decisions are transparent and verifiable.

It is our role to design, develop, and deploy systems responsibly. This includes not only ensuring that the task of the system is clear, but also that the system carries out this task both reliably and safely. Furthermore, we must be very clear about what assumptions we make about the environment in which these systems are to be deployed. Often AI/autonomous/robotic systems are designed under significant assumptions, which are violated once the "real world" is encountered.

The final dimension I will highlight concerns sustainability, especially environmental sustainability. Clearly, developing and deploying technology is not without environmental cost. We must be clear about the environmental issues and must ensure that the deployment of the technology provides a "net positive". These issues are obvious in the context of robot construction, but the environmental costs of AI, especially data-driven machine learning, have often been overlooked. The vast environmental impact of these tools should be taken into account before design and deployment.

Michael Fisher is a Professor of Computer Science, and Royal Academy of Engineering Chair in Emerging Technologies, at the University of Manchester. His research concerns autonomous systems, particularly verification, software engineering, self-awareness, and trustworthiness, with applications across robotics and autonomous vehicles. Increasingly, his work encompasses not just safety but broader ethical issues such as sustainability and responsibility across these (AI, Autonomous Systems, IoT, Robotics, etc) technologies.

Fisher chairs the British Standards Institution Committee on Sustainable Robotics, co-chairs the IEEE Technical Committee on the Verification of Autonomous Systems, and is a member of both the IEEE P7009 Standards committee on “Fail-Safe Design of Autonomous Systems” and the Strategy Group of the UK’s Responsible AI programme.

He is currently on secondment (for 2 days per week) to the UK Government’s Department for Science, Innovation and Technology [https://www.gov.uk/dsit] advising on issues around AI and Robotics.

https://web.cs.manchester.ac.uk/~michael


 

Nikolaus Forgo - Universität Wien

Giving an historical and critical overview of European attempts to regulate digitalisation

This presentation will give a historical and critical overview of European attempts to regulate digitalisation consistently and convincingly. We will focus, in particular, on the GDPR, the Data Act, copyright law and the AI Act. From this perspective, we will assess in more detail the interplay between AI, ethics and law, and ask whether Fundamental Rights Impact Assessments are a useful tool for the ethical governance of research.

Nikolaus Forgó studied law in Vienna and Paris from 1986 to 1990 and then worked as a university assistant at the Faculty of Law at the University of Vienna. In 1997, he received his doctorate in law with a dissertation on legal theory. Since October 1998, he has headed the university course for information and media law at the University of Vienna, which still exists today. From 2000 to 2017, he was Professor of Legal Informatics and IT Law at the Faculty of Law at Leibniz Universität Hannover, where he headed the Institute for Legal Informatics for 10 years and was also Data Protection Officer and CIO.
Since October 2017, he has been Professor of Technology and Intellectual Property Law at the University of Vienna and Director of the Department of Innovation and Digitalisation in Law at the same university. He is also an honorary expert member of the Austrian Data Protection Council and the Austrian AI Advisory Board.

https://id.univie.ac.at/en/team/univ-prof-dr-nikolaus-forgo/


 

Alexei Grinbaum - CEA-Saclay, France, and Horizon Europe iRECS project

Tutorial - Training in AI ethics: concepts, methods, exercises, problems

Alexei Grinbaum is a senior research scientist at CEA-Saclay with a background in quantum information theory. He writes on ethical questions of emerging technologies, including robotics and AI. Grinbaum is the chair of the CEA Operational Ethics Committee for Digital Technologies and a member of the French National Digital Ethics Committee (CNPEN). He coordinates and contributes to several EU projects and serves as Ethics Chair to the European Commission. His books include "Mécanique des étreintes" (2014), "Les robots et le mal" (2019), and "Parole de machines" (2023).


 

Attila Gyulai - HUN-REN

Misled by autonomy: AI and contemporary democratic challenges

This presentation discusses the hopes and fears regarding the impact of AI on democracy by focusing on the misunderstood role of autonomy within the democratic process. In standard democratic theory, autonomy refers to the capacity and normative requirement of self-government. It will be argued that both democratic scholarship and policy documents seem unprepared to consider the inclusion and intrusion of AI into democracy. Democratic autonomy means that the people possess the power of self-legislation; they are the authors of public norms. Autonomy therefore presupposes that the formation of preferences is free from any undue interference. It is often claimed that AI is a threat to democracy because its various applications bring about precisely this undue interference by taking over the selection and dissemination of information necessary for people’s autonomous decision-making, through algorithmic governance that limits the scope of self-governance, and by treating citizens as sources for data-driven campaigns that undermine the role of deliberation and preference formation. There is an expectation that even if AI fulfils a variety of tasks in the democratic process, the ultimate control over everything it is allowed to do must remain with and be exercised by the people themselves, based on the autonomous will of the individual. The presentation offers a critical review of democratic theory by focusing on the points at which AI enters the democratic process (AI-driven platforms, algorithmic governance, democratic oversight of decision-making, democratic preference formation, the desired consensual outcome of the democratic process) to show that AI does not threaten the autonomous self-government of the people because the latter is merely an ideal that cannot realistically be expected to ground democracy. If the untenability of this expectation is ignored, neither the real impact of AI nor the necessary measures (guidelines, principles, policy proposals) can be assessed. Based on a critical reading of the discourse, it will be argued that any attempt to reconcile AI with democracy must address the constraints of autonomy and self-governance in any democracy in order to provide meaningful responses to the challenges facing all present and future democracies.

Attila Gyulai is a senior research fellow at the HUN-REN Centre for Social Sciences, Budapest and associate professor at Corvinus University of Budapest. His research interests include realist political theory, democratic theory, the political theory of Carl Schmitt and the political role of constitutional courts. His work has been published in journals such as Journal of Political Ideologies, East European Politics, Griffith Law Review, German Law Journal, and Theoria. He is co-author of the monograph The Orban Regime – Plebiscitary Leader Democracy in the Making.

https://tk.hun-ren.hu/en/researcher/gyulai-attila


 

Natali Helberger - University of Amsterdam

AI everywhere and anytime in the media. Will the AI Act save democracy?

In my presentation I will discuss the challenges and opportunities of the use of generative AI in the media for democracy, and the role of the AI Act in creating reliable safeguards for fundamental rights and freedom of expression.

Natali Helberger is University Professor of Information Law and Digital Technology at the University of Amsterdam and a member of the Executive Board of the Institute for Information Law (IViR). Helberger is an elected member of the Royal Holland Society of Sciences (KHMW), the Royal Netherlands Academy of Arts and Sciences (KNAW) and the Social Science Council of the KNAW. Her research on AI and automated decision systems focuses on their impact on society and governance. Helberger co-founded the Research Priority Area 'Human(e) AI' at the UvA. She is also founder and co-director of the AI, Media & Democracy Lab, and since 2022 she has been scientific director of the AlgoSoc (Public Values in the Algorithmic Society) Gravitation Consortium. A key focus of the AlgoSoc programme is to mentor and train the next generation of interdisciplinary researchers. She is a member of several national and international research groups and committees, including the Council of Europe's Expert Group on AI and Freedom of Expression and the AI Office's Working Group on creating a Code of Conduct for Generative AI.

https://www.uva.nl/en/profile/h/e/n.helberger/n.helberger.html


 

Bjorn Kleizen - University of Antwerp

Do citizens trust trustworthy artificial intelligence? Examining the limitations of ethical AI measures in government

The increasing role of AI in our societies poses important questions for public services. On the one hand, AI provides a tool to improve public services. On the other, various AI technologies remain controversial, raising the question of the extent to which citizens trust public sector uses of AI. Although trust in AI and ethical AI have both become prominent research fields, most research undertaken so far focuses solely on the users of AI systems. We argue that, in the public sector, non-user citizens are a second vital stakeholder whose trust should be maintained. Large groups of citizens will never interact with public sector AI models that operate behind the scenes, forcing them to make trust evaluations based on limited information, hearsay and heuristics. At the same time, their attitudes will have an important impact on the legitimacy that public sector organizations have to develop and implement AI systems. Thus, unlike previous work on direct users of AI, our studies mainly focus on the general public. We present results from two Belgian survey experiments and 17 semi-structured interviews conducted in Belgium and the Netherlands. Together, these studies suggest that trust among non-users is substantially less malleable than among direct users, as new information on AI projects' trustworthiness is largely interpreted in line with pre-existing attitudes towards government, privacy and AI.

Bjorn Kleizen is a postdoctoral researcher at the University of Antwerp, Department of Political Science, GOVTRUST Centre of Excellence. His work mainly focuses on the psychology of citizen-state interactions. Kleizen has previously completed projects on citizen trust in public sector AI systems, and is currently examining citizen attitudes on scandals exacerbated by public sector automation, e-government and/or AI.

https://www.uantwerpen.be/en/staff/bjorn-kleizen/research/


 

Anatole Lécuyer - Inria Rennes/IRISA

Paradoxical effects of virtual reality

Virtual reality technologies are often presented as the ultimate innovative medium for interacting with digital content online. When we put on a virtual reality headset for the first time, we are gripped by the power of sensory immersion. Many positive applications then come to mind, such as health, education, training, access to cultural heritage, or teleconferencing and teleworking. But these technologies also raise fears and dangers of various kinds, whether for the physical or psychological integrity of users, or for their privacy. In this presentation, we will first review the main concepts and psychological effects associated with immersive technologies. Then, we will focus on the notion of the avatar, or virtual embodiment in virtual worlds, to show how these powerful effects can be used for good or ill, leading to sometimes paradoxical effects that we need to be more aware of in order to control them better in the future.

Anatole Lécuyer is Director of Research and head of the Hybrid research team at Inria, the French National Institute for Research in Computer Science and Control, in Rennes, France. His research interests include virtual reality, haptic interaction, 3D user interfaces, and brain-computer interfaces (BCI). He served as Associate Editor of the "IEEE Transactions on Visualization and Computer Graphics", "Frontiers in Virtual Reality" and "Presence" journals. He was Program Chair of the IEEE Virtual Reality Conference (2015-2016) and General Chair of the IEEE Symposium on Mixed and Augmented Reality (2017) and the IEEE Symposium on 3D User Interfaces (2012-2013). He is author or co-author of more than 200 scientific publications. Anatole Lécuyer received the Inria-French Academy of Sciences "Young Researcher Prize" in 2013 and the IEEE VGTC "Technical Achievement Award in Virtual/Augmented Reality" in 2019, and was inducted into the inaugural class of the IEEE Virtual Reality Academy in 2022.

https://people.rennes.inria.fr/Anatole.Lecuyer/


 

Anna Ujlaki - HUN-REN/Eötvös Loránd University

Regulating Artificial Intelligence: A Political Theory Perspective

In the face of unprecedented advancements in artificial intelligence (AI), this presentation explores how AI is reshaping society, politics, and the foundational values of democracy. The aim of the presentation is to provide a critical review of the discourse about the political theory of AI, highlighting the strengths and weaknesses of contemporary normative discussions. It critically investigates the discourse across four key aspects. Firstly, it addresses the conceptual questions that must be resolved before making any normative claims or judgments. Given that normative political theoretical concepts are often contested, the presentation argues that there is a path dependence in the literature, influenced by the definitions adopted for fundamental concepts. This is particularly relevant to discussions on the relationship between (liberal) democracy and AI. Secondly, from a normative perspective, the focus shifts to the norms, values, and standards we expect from the implementation of AI to certain social and political contexts, and those perceived as being threatened by its emergence. This perspective emphasizes not only the importance of values such as autonomy, transparency, human oversight, safety, privacy, and fairness in AI regulation but also those often overlooked in social scientific literature on AI, such as non-domination, vulnerability, dependency, and care, which are significant in both human–human and human–machine relationships. Thirdly, the presentation examines the potential of various political theoretical approaches, including liberal, republican, realist, and feminist perspectives, to address the challenges posed by AI. Fourthly, it considers the level of abstraction of the debate, questioning whether the normative arguments and explanations in the literature are directed at issues related to narrow AI, artificial general intelligence (AGI), or both. In conclusion, while some normative arguments, such as those concerning AI regulation, are relatively well-developed, the presentation aims to highlight the gaps in the literature, suggesting the need for further exploration of the normative framework in discussions about AI.

Anna Ujlaki is a junior research fellow at the HUN-REN Centre for Social Sciences, Budapest, and an assistant professor at the Institute of Political and International Studies at Eötvös Loránd University. Her research focuses on the political theory of migration, political obligation, and artificial intelligence, incorporating perspectives from liberal, feminist, realist, and republican political theories.

https://annaujlaki.com/


 

Siddharth Peter de Souza - Tilburg University/Warwick University

Norm making around data governance: proposals for red lines

In my presentation, I will explore different types of proposals that can establish norms banning illegitimate data-related practices at scale, given the global policy consensus that data must flow and are a necessary basis for innovation. Through a study of work conducted by civil society organisations and social movements, the presentation will discuss what kind of global red lines we need for data in order to prevent their extractive and exploitative use.

Siddharth Peter de Souza is the founder of Justice Adda, a law and design social venture in India, and an incoming Assistant Professor at the University of Warwick from January 2025. He was a post-doctoral researcher at the Global Data Justice project at Tilburg University and is now an affiliated researcher.

https://www.tilburguniversity.edu/staff/s-p-desouza


 

Jean-Bernard Stefani - Inria

Taking Conviviality Seriously

In the early 1970s, Ivan Illich proposed a critical analysis of technology that appears remarkably cogent for understanding the moral and political woes that plague our current digital societies. This talk will aim to substantiate this claim and suggest potential avenues for research on convivial computing.

Jean-Bernard Stefani is a senior scientist at INRIA, the French National Research Institute in Computer Science and Control, where he has led the Sardes team on distributed systems engineering and the Spades team on formal methods for embedded systems, and where he was formerly director of research of the INRIA Grenoble-Alpes research center. Prior to INRIA, he worked for 15 years at the French National Center for Telecommunications Research (CNET), where he led research on distributed computer systems. He is currently involved in the creation of a new research team at INRIA on convivial computing.

https://team.inria.fr/spades/jean-bernard-stefani/


 

Melodena Stephens - Mohammed Bin Rashid School of Government

Approaching the Regulatory Event Horizon: Opportunities and Challenges

The pace of AI adoption is so rapid that the regulatory apparatus is unable to keep up. Part of the challenge is the complexity of the regulatory process, which puts pressure on individuals in society and on private actors to self-regulate. Further, even if robust regulations exist, the regulatory manoeuvrability and agility of governments in managing new technologies remain a challenge. For example, one debate in AI circles is whether we should regulate the technology or the industry. Another challenge is the impact of these policies: it has become fashionable for academics to suggest policy reforms in a few concluding paragraphs of their journal articles, but this is not enough, as the process of advocacy is long and often negotiated.

Prof. Dr. Melodena Stephens has over three decades of senior management experience across Asia, Europe, the Americas, and Africa. She consults and trains in strategy, focusing on technology governance; Science, Technology, and Innovation strategy; brand-building; agile government; and crisis management. As Professor of Innovation & Technology Governance, she works with entities like the IEEE SA, the Council of Europe, Agile Nations, the World Government Summit, the World Economic Forum and senior government leaders from across the world. Her two most recent books are Anticipatory Governance: Shaping a Responsible Future and AI Enabled Business: A Smart Decision Kit. Melodena loves to write and blogs at www.melodena.com.


 

Rebecca Stower - KTH Royal Institute of Technology in Stockholm

Good Robots Don’t Do That: Making and Breaking Social Norms in Human-Robot Interaction

Robots are becoming increasingly present in both public and private spaces. This means robots have the potential both to shape and to be shaped by human social norms and behaviours. These interactions range from inherently goal- or task-based to socially oriented. As such, people have different expectations and beliefs about how robots should behave during their interactions with humans. The field of human-robot interaction therefore focuses on understanding how features such as a robot's appearance and behaviour influence people's attitudes and behaviours towards these (social) robots.

Nonetheless, despite recent technological advances, robot failures remain inevitable, all the more so in real-life, uncontrolled interactions. With the rapid rise of large language models (LLMs) and other AI-based technologies, we are also beginning to see AI systems embedded into physical robots. Many of the potential pitfalls that have been highlighted for AI or virtual assistants apply equally to robots. When designing social robots, it is imperative that we ensure they do not reinforce or perpetuate harmful stereotypes or behaviours. In this talk, I will cover how and why different kinds of robot failures occur, and how we can use our understanding of these failures to work towards the design of more responsible and ethical social robots.

Rebecca Stower is a postdoctoral researcher at the Division of Robotics, Perception, and Learning at KTH. Her background is in experimental and social psychology. She uses psychological theories and measurement to inform the design, development, and testing of various robots, including humanoid social robots, drones, and robot arms. Her research focuses on human-robot interaction (HRI), especially what happens when robots fail and how this influences factors such as trust and risk-taking. More generally, she is passionate about open science and psychological measurement within HRI.

https://becbot.github.io/


 

Elias Fernández Domingos - VUB Brussels, Belgium

Delegation to AI Agents

Elinor Ostrom's important contributions identified several mechanisms that enable the successful management of local commons (e.g., community monitoring, punishment, institutions, voting). These mechanisms provide a social barrier that supports sustainable decisions and prevents those that would have a negative future effect on society. Nevertheless, the spread of intelligent systems and artificial intelligence (AI) has significantly affected not only the way humans acquire and share information, but also the way we make informed decisions on critical social questions such as climate action, sustainability, or compliance with health measures. In this talk, I will introduce the key factors that differentiate delegation to AI from delegation to other human beings, and highlight both the challenges and the potential opportunities that a hybrid human-AI society offers for solving important societal issues. I will close the talk with the results of an experiment that shows the potential of delegation to AI as a commitment device that can enable pro-social behaviours.
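
To make the idea of delegation as a commitment device concrete, here is a minimal sketch of a collective-risk game (my own simplified illustration with invented parameters, not the behavioural experiment reported in the talk), in which a delegate committed to a fair-share policy reaches the collective target more reliably than noisy individual play:

import random

random.seed(1)
N, ROUNDS, ENDOWMENT = 6, 10, 40
TARGET = 2 * N * ROUNDS      # reached only if everyone gives a fair share
RISK = 0.9                   # chance of losing everything if target missed

def play(delegate_to_ai):
    pot, kept = 0, [ENDOWMENT] * N
    for _ in range(ROUNDS):
        for i in range(N):
            # AI delegate: a steady fair share; humans: noisy choices
            give = 2 if delegate_to_ai else random.choice([0, 2, 4])
            give = min(give, kept[i])
            kept[i] -= give
            pot += give
    if pot < TARGET and random.random() < RISK:
        kept = [0] * N           # collective disaster
    return sum(kept) / N         # average payoff per player

human = sum(play(False) for _ in range(1000)) / 1000
ai = sum(play(True) for _ in range(1000)) / 1000
print(f"average payoff: human play {human:.1f}, delegated {ai:.1f}")

Because the delegate's contribution policy is fixed in advance, delegation removes the round-by-round temptation to free-ride; this commitment effect is what the experiment mentioned in the abstract probes.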

Elias Fernández Domingos is currently a Senior Researcher (FWO fellow) at the VUB in Brussels, Belgium. He is interested in the origins of cooperation in social interactions and in how we can maintain it in an increasingly complex and hybrid human-AI world. In his research, he applies concepts and methods from (Evolutionary) Game Theory, Behavioural Economics, and Machine Learning to model collective (strategic) behaviour, and validates the models through behavioural economic experiments. He is the creator of EGTtools, a Python/C++ toolbox for Evolutionary Game Theory.

https://ai.vub.ac.be/team/elias-fernandez/


Beyond Compliance 2024

Last Updated: 06 June 2025

Digital Ethics in Research

Previous editions: 2023 | 2022

Next edition: 2025

Research Ethics in the Digital Age

14-15 October 2024

HUN-REN SZTAKI - Institute for Computer Science and Control, Budapest, Hungary and online

Researchers in digital sciences face tough ethical questions in their daily activity, for which there are not yet consensual answers within the research community. The ERCIM forum "Beyond Compliance" aims to advance the discussion of these issues. The target audience comprises researchers and Research Ethics Boards. The event consists of keynotes, presentations, tutorials and interactive sessions, and provides ample time for open discussion. Several outcomes are envisioned, including some directed towards policy makers.

Program

Please note that all times are CEST (Central European Summer Time), i.e. UTC+2.

Monday, October 14th

09:00-09:15 - Welcome and introduction

After Paris in 2022 and Porto in 2023, the third edition of the ERCIM Forum 'Beyond Compliance' was held in Budapest on October 14-15, 2024, at the HUN-REN Institute for Computer Science and Control. This year's event, which took place both in person and online, continued the discussion of the tough ethical issues faced by researchers in digital sciences. The scientific richness of these two days lay not only in the distinguished status of the speakers, but also in the wide range of cutting-edge topics covered. The diversity of contributions and the high caliber of Forum participants made it possible to explore digital issues from cultural, legal, (geo)political, historical, philosophical, and ethical perspectives.

09:15-10:15 – Opening keynote (Chair: Claude Kirchner)

The program of the first day was marked by two particularly brilliant keynotes, masterfully delivered by Julian Nida-Rümelin ("Beyond Compliance: Digital Humanism") and Milad Doueihi ("Beyond Intelligence: Imaginative Computing"). While the first speaker focused on tracing the philosophical origins of Digital Humanism and describing its challenges through animism and mechanistic reductionism, the second offered a historical and literary analysis of what we now refer to as thinking machines. These presentations revisited classic AI debates, drawing on the ideas of Turing, Gödel, Wittgenstein, and earlier thinkers such as Leibniz and Butler. Both speakers explored the intersection of humanity and digital technology, advocating for a human-centered approach to AI. The German philosopher emphasized the centrality of human authorship, while the American historian discussed the transformative effects of digital memory on culture and knowledge. Ethically, both thinkers stressed the importance of responsibility in the use of technology, emphasizing that education should guide digital transformation. They both called for critical reflection to safeguard cultural values and advocated for the preservation of human relationships, while reflecting on how digital culture reshapes knowledge transmission.

Julian Nida-Rümelin, LMU Munich and Humanistische Hochschule Berlin

Beyond Compliance: Digital Humanism [YouTube]

10:45-12:30 The making of regulations (Chairs: Guenter Koch and Anna Ujlaki)

The first session, dedicated to the making of regulations, featured three researchers. First, Melodena Stephens discussed the complexities of AI regulation, emphasizing the difficulty of implementing effective, intergenerational policies in a rapidly evolving technological landscape, and the need for a global, flexible, and ethically sound approach to address issues like human autonomy, security, and the future of jobs. Next, Anna Ujlaki critically reviewed the political theory discourse on AI, focusing on its conceptual limitations, normative questions, and potential for addressing AI's integration into society, while highlighting the political risks and ethical dilemmas involved in AI regulation. Finally, Nikolaus Forgo discussed how, since the introduction of computers into public administration, lawmakers have repeatedly overestimated the short-term effects of new technologies while underestimating their long-term impacts, as exemplified by the development of data protection laws and the recent AI Act.

Melodena Stephens, Mohammed Bin Rashid School of Government, online
Approaching the Regulatory Event Horizon: Opportunities and Challenges [YouTube]

Anna Ujlaki, HUN-REN/Eötvös Loránd University
Regulating Artificial Intelligence: A Political Theory Perspective [slides] [YouTube]

Nikolaus Forgo, University of Vienna, online
Giving an historical and critical overview of European attempts to regulate digitalisation

Panel discussion

 

14:00-15:30 Emerging topics (Chair: Claude Kirchner)

The rest of the day featured two additional sessions, dedicated to emerging topics and cultural influences. Anatole Lécuyer opened the emerging-topics session by discussing the paradoxical effects of virtual reality and metaverse technologies, highlighting their history, their growing impact on the population, particularly children and young adults, and the emerging ethical questions surrounding them. He explored psychological effects such as the sense of embodiment, agency, and the Proteus effect, which leads users to behave according to the stereotypes of their avatars, while also examining the potential harms and benefits of VR, from therapeutic uses to the risk of altering identity. This fascinating discussion was extended by the next speakers, Michele Barbier and Ferran Argelaguet, who were present in person. They presented a project exploring the ethical challenges of social interactions in the metaverse, focusing on issues such as harassment, privacy, and the legal status of avatars, with the goal of fostering empathy, improving safety tools, and addressing social and cultural concerns around digital identities and regulation. Finally, and in a slightly unconventional style, Jean-Bernard Stefani drew on Illich's concept of "conviviality" to highlight the moral dilemmas of the digital world, including its ecological impact, surveillance capitalism, algorithmic discrimination, and digital divides, arguing that these issues require a critical approach and a shift towards more human-centered and de-automated technologies.

Anatole Lécuyer, Inria Rennes/IRISA, online
Paradoxical effects of virtual reality [slides] [YouTube]

Justyna Swidrak, Michele Barbier, Mel Slater, Maria Sanchez-Vives, Maria Roussou, Eleni Toli, Ferran Argelaguet, in person
Ethical Considerations of Social Interactions in the Metaverse [slides] [YouTube]

Jean-Bernard Stefani, Inria, online
Taking Conviviality Seriously

Panel discussion

 

16:00-18:00 Cultural influences (Chair: Emma Beauxis-Aussalet)

Keynote: Milad Doueihi
Beyond Intelligence: Imaginative Computing. A Minority Report, online [YouTube]

Finally, the last two remote speakers addressed the issue of cultural influences. Rockwell Clancy discussed the relationship between cultural responsiveness, psychological realism, and global AI ethics, highlighting the importance of understanding both the normative and empirical components of AI ethics, the challenges posed by cross-cultural contexts, and the need for culturally informed policy frameworks in AI development. Marianna Capasso presented a project on algorithmic discrimination, approaching it from a cross-cultural perspective. She highlighted how algorithmic discrimination should be understood in a nuanced way, using examples such as Amazon's CV screening system, which discriminated against women due to biased historical training data. She examined various forms of algorithmic discrimination, including indirect and statistical discrimination, and explored how culturally specific norms influence discriminatory behaviors.

Rockwell F. Clancy, Virginia Tech, online
Towards a culturally responsive, psychologically realist approach to global AI ethics [YouTube]

Marianna Capasso, Utrecht University, online
Algorithmic Discrimination in Hiring: A Cross-Cultural Perspective [YouTube]

Panel discussion

 

Tuesday, October 15th

09:00-10:30 Cooperative agents (Chair: Gabriel David)

The second day began with a session on cooperative agents. Elias Fernández Domingos discussed the importance of studying delegation to AI, explaining its issues and presenting a behavioral experiment where AI delegation improved coordination in a collective risk scenario, emphasizing the need for well-designed systems that maintain human agency while delegating tasks. Rebecca Stower explored ethical and psychological implications of human-robot interactions, focusing on errors in robot behavior, the impact on trust and risk-taking, and the challenges of balancing data privacy and user preferences in robot design. Finally, Michael Fisher discussed the importance of ensuring trustworthiness in autonomous systems, emphasizing the need for reliability, transparency, and ethical decision-making, while also addressing sustainability concerns related to both the environmental impact of AI and robotics, as well as the unnecessary deployment of technology.

Elias Fernández Domingos, VUB Brussels, online
Delegation to AI Agents [YouTube]

Rebecca Stower, KTH Royal Institute of Technology
Good Robots Don’t Do That: Making and Breaking Social Norms in Human-Robot Interaction

Michael Fisher, University of Manchester, online
Responsible Autonomy [YouTube]

Panel discussion

 

11:00-12:30 – Tutorial (Chair: Christos Alexakos)

At midday, Forum participants had the opportunity to attend the tutorial expertly delivered by Alexei Grinbaum. He emphasized the importance of operationalizing AI ethics and explained that ethics in AI should be viewed as a valuable framework rather than a constraint. He addressed a range of ethical challenges, including security risks in robotics, and introduced tools to facilitate discussions between ethicists and engineers. He presented training courses featuring exercises on dilemmas and the evaluation of AI projects in sectors like healthcare. He also explored the issue of responsibility in personalized education, focusing on topics such as bias, fairness, and the role of teachers.

Alexei Grinbaum, CEA
Training in AI ethics: concepts, methods, exercises, problems [YouTube]

 

14:00-15:00 Unconference session (Chair: Emma Beauxis-Aussalet)

For the first time, the Forum set aside space for an unconference session, which allowed participants to discuss, in a more informal way, Open Science and the Nobel Prize in Computer Science.

 

15:15-17:00 Democracy (Chair: Sylvain Petitjean)

Finally, the Forum concluded with a session dedicated to democracy that gave the floor to four speakers. Natali Helberger argued that AI is a powerful political tool that can either strengthen or undermine democracy, highlighting concerns about misinformation and the influence of big tech, while also recognizing AI's potential to enhance communication. Siddharth Peter de Souza discussed the creation of data governance norms, emphasizing the role of civil society and advocating for a pluralistic approach to regulation that includes marginalized voices. Attila Gyulai explored the impact of AI on democracy, questioning the assumption that democracy is solely about autonomy, and suggesting that a more realistic understanding of democracy, which accounts for representation, manipulation, and the constructed nature of preferences, is necessary to address the challenges AI poses. Finally, Bjorn Kleizen examined the level of trust citizens have in AI systems used by governments, exploring how transparency and public perceptions influence trust, and emphasizing the need for long-term strategies to maintain trust in AI applications.

Natali Helberger, University of Amsterdam, online
AI everywhere and anytime in the media. Will the AI Act save democracy?

Siddharth Peter de Souza, Tilburg University/Warwick University, online
Norm making around data governance: proposals for red lines [YouTube]

Attila Gyulai, HUN-REN
Misled by autonomy: AI and contemporary democratic challenges

Bjorn Kleizen, University of Antwerp, online
Do citizens trust trustworthy artificial intelligence? Examining the limitations of ethical AI measures in government

Panel discussion

17:00-17:30 Closing discussion (Chair: Claude Kirchner)

Venue

HUN-REN SZTAKI
1111 Budapest, Kende str. 13-17.
Coordinates: 47.493047, 19.058565

Organizing committee

The organizing committee of the Beyond Compliance Forum is the Digital Ethics Task Group of ERCIM:

  • Christos Alexakos (ISI/ATHENA RC, Greece)
  • Emma Beauxis-Aussalet (VU, Netherlands)
  • András Benczúr (HUN-REN SZTAKI, Hungary)
  • Gabriel David (INESC TEC, Portugal)
  • Claude Kirchner (CCNE and Inria, France)
  • Rebeka Kiss (HUN-REN Centre for Social Sciences, Hungary)
  • Guenter Koch (AARIT, Austria, and Humboldt Cosmos Multiversity, Spain)
  • Sylvain Petitjean (Inria, France)
  • Andreas Rauber (TU Wien, Austria)
  • Miklós Sebők (HUN-REN Centre for Social Sciences, Hungary)
  • Vera Sarkol (CWI, Netherlands)
