Previous editions: 2024 | 2023 | 2022
ERCIM Forum Beyond Compliance 2025
29–31 October 2025 – Rennes, France
Deadline for registrations: 21 October 2025
> Subscribe to the ERCIM digital ethics mailing list (~3 emails/year)
The 4th edition of the ERCIM Forum Beyond Compliance will take place from 29 to 31 October 2025 at the Inria Centre at Rennes University (IRISA) in Rennes, France. This international event focuses on research ethics in the digital age, offering a space for thought-provoking dialogue and collaborative exploration.
This year’s edition will address five pivotal themes:
- Security in the digital society
- Geopolitics of digital ethics, including infrastructure and digital sovereignty
- Data altruism and the promotion of open academic resources
- Generative AI in research, teaching, and publishing
- AI’s impact on behavior & cognition
The forum is partially co-located with the European Informatics Leaders Summit (ECSS’25) of Informatics Europe, and will feature a joint session co-organised by the digital ethics working groups of ERCIM and Informatics Europe. Participants are also welcome to attend ECSS'25 and its ethics workshop.
Information on travel and accommodation can be found here. Attendance is in person (no online broadcast, at least for this year). You can also revisit last year's talks and videos here.
Wednesday 29-10-2025
Addressing ethical challenges with security in the digital society
14:30–16:30 — Joint Round Table with the Ethics WG of Informatics Europe — Amphitheatre (Building G)
This session will address the responsible development and application of information systems, encompassing areas such as accountability, misinformation, ethical awareness, and best practices for digital security.
Co-chaired by Covadonga Rodrigo.
Speakers:
- Marco Gercke — From Lab to Law: Compliance Journeys of High-Risk AI Development and AP4AI Self-Assessments
- Tatjana Welzer — Ethics and Accountability
- Kristina Lapin — Raising Ethical Awareness to Combat Dark Patterns
- Rafael Pastor — GenAI: deepfakes, misinformation and risks in the digital society
- Mirela Riveni — Ethical issues in decision-making with AI
Keynote
16:30–17:30 — Catherine Tessier — “Artificial intelligence”, research and education: some (new) ethical issues — Amphitheatre (Building G)
Thursday 30-10-2025
Strategy meeting
09:00–09:30 — Open discussion of future plans for ERCIM Digital Ethics WG, and for supporting digital ethics in academia — Amphitheatre (Building G)
Tutorial
09:30–11:00 — Alexei Grinbaum — AI impact on human behavior and cognition — Amphitheatre (Building G)
11:00–11:30 — Coffee Break
Data altruism and open academic resources
11:30–12:30 — Talks & Discussion — Amphitheatre (Building G)
Academics have a duty to deliver digital resources to the public, to inform citizens and policy, and to foster fair innovation. This session will discuss practical issues in establishing data commons that enable open science and innovation, e.g., issues with privacy, trust, and regulatory frameworks.
Speakers:
- Roberto Dicosmo — No Science Without Source: Collecting, Preserving and Sharing Software in a Risky World
- Bertil Egger Beck — Perspectives from the EU Commission
12:30–13:30 — Lunch Buffet
Keynote
13:30–14:30 — Afonso Seixas-Nunes — A Lawyer among Engineers! Autonomous Systems and ethical and legal questions which remain to be answered — Salle Markov (Building G)
14:30–15:00 — Coffee Break
Geopolitics of digital ethics in academia
15:00–17:00 — Talks & Discussion — Salle Markov (Building G)
Recent geopolitical changes have impacted academia’s funding policies and the conduct of research on digital ethics or with digital ethics implications. In this changing geopolitical landscape, digital sovereignty is ever more important. Dependencies on infrastructure and funding may constrain how digital ethics is implemented in academic research and practice (e.g., to ensure privacy, inclusivity, transparency, sustainability). This session will discuss the (dis)alignment of political, economic, and academic interests that underlie:
- Digital infrastructure in academia, and its dependency on the big tech industry
- University policy on digital ethics
- Funding strategies for research on digital ethics, or with digital ethics implications
Speakers:
- Eric Germain — What is behind ‘apolitical ethics’ and how academia can remain sovereign?
- Kavé Salamatian — Collaborative multidisciplinary research in cybersecurity in a changing geopolitical context: news from the front
- Petru Dimitriu — The emergence of international norms on digital ethics
- Domagoj Juricic
Friday 31-10-2025
GenAI in research and teaching
09:30–11:30 — Talks — Salle Markov (Building G)
Practices in academia must adapt rapidly to generative AI. As educators, academics must ask how GenAI impacts human cognitive skills. In the long term, the question is which non-essential skills we can delegate to GenAI, and which essential skills we must not. This session will explore strategies for developing students’ essential skills in writing, reading, coding, ideating, and critical thinking. Since this requires more than establishing fraud policies, we will revisit the design of learning goals, assessment methods, and learning activities.
Speakers:
- Michał Wieczorek — What is (un)ethical about educational AI?
- Laurynas Adomaitis — An Oracle or an Intern? Using GenAI in Research
11:30–11:45 — Break
Keynote
11:45–12:30 — Mihalis Kritikos — Digital ethics in EU-funded projects — Salle Markov (Building G)
Organising Committee
Christos Alexakos (ISI/ATHENA RC, Greece)
Emma Beauxis-Aussalet (VU, Netherlands)
Gabriel David (INESC TEC, Portugal)
Alexei Grinbaum (CCNEN and CEA, France)
Claude Kirchner (CCNEN and Inria, France)
Guenter Koch (AARIT, Austria & Humboldt Cosmos Multiversity, Spain)
Anaelle Martin (CCNEN, France)
Sylvain Petitjean (Inria, France)
Vera Sarkol (CWI, Netherlands)
Speakers & Abstracts
Prof. Dr. Marco Gercke - http://drgercke.de/
From Lab to Law: Compliance Journeys of High-Risk AI Development and AP4AI Self-Assessments
Abstract:
The EU AI Act has introduced a new compliance regime that is now firmly institutionalized, reshaping how artificial intelligence is conceived, developed, and deployed across Europe. For high-risk AI systems in particular, the regulation sets out rigorous requirements that directly influence the design and innovation processes within EU-funded projects and beyond. These obligations create both opportunities for more trustworthy AI and practical challenges for developers, regulators, and end-users alike. Drawing on concrete experiences from European high-risk AI initiatives, this keynote highlights the realities of operationalizing compliance. It explores how AP4AI self-assessments can serve as a practical tool to navigate complexity, align with the EU AI Act, and foster accountability in AI development.
Note: The Accountability Principles for AI (AP4AI) Project develops solutions to assess, review and safeguard the accountability of AI usage by internal security practitioners. https://www.ap4ai.eu/about
Bio:
Prof. Dr. Marco Gercke is an entrepreneur, scientist and consultant. His primary focus area is cybersecurity. With more than 1,000 speeches in over 100 countries and over 100 scientific publications, Prof. Gercke is one of the world’s leading experts in the field of cybersecurity and cybercrime. He is the founder and director of the Cybercrime Research Institute, an independent research institute and think tank based in Cologne. He advises governments, organizations and large enterprises around the world on strategic, political and legal issues in the field of cybersecurity. The main focus of his work is developing innovative approaches to tackling cybercrime, a problem that has become central for governments and businesses in recent years. Over the past 15 years, he has worked in over 100 countries across Europe, Asia, Africa, the Pacific and Latin America. As a respected and experienced speaker, Prof. Gercke offers valuable insider knowledge on cybersecurity, drawing on his many years of activity and his inside view. His lectures are clearly structured, highly informative and include practical examples.
Michał Wieczorek
What is (un)ethical about educational AI?
Abstract:
This paper builds on the results of our recent systematic literature review to discuss the ethical implications of using AI in primary and secondary education. Although recent advances in AI have led to increased interest in its use in education, discussions about the ethical implications of this new development are occurring in different disciplinary circles. As such, they reflect varied understandings of ethics that make it challenging to consolidate the debate.
I highlight the seventeen categories of ethical implications of educational AI identified in the review which were grouped into four kinds of opportunities and thirteen types of concerns. The former include, among others, the potential reduction of educational inequalities or the facilitation of teachers’ work, while the latter range from fairness and privacy issues to concerns about the influence of private companies or low accountability of the systems.
I then build on this discussion and our interactions with readers, audiences and reviewers to highlight the conflicting understandings of ethics that can be observed in the current debate. Although all of the themes highlighted in the review have normative implications – i.e., they influence our values and the practices through which such values are enacted – they are not equally recognised as such by different communities seeking to engage with the ethics of educational AI. For example, we observed that more computationally-minded readers tend to focus overly or exclusively on issues such as fairness, accuracy, transparency, explainability or privacy, while others are hesitant to consider, e.g., the impact on teaching practices or the role of teachers as ethical – preferring instead to discuss them under the label of social or pedagogical concerns.
Consequently, I argue that such narrow views of ethics are limiting and do not enable us to capture the wide variety of ethical impacts introduced by educational AI. I call for increased research on the less obvious normative implications of the technology and sketch an agenda for such work.
Bio:
Dr. Michał Wieczorek is an Ad Astra Fellow – Assistant Professor in AI-Driven Educational Innovation in the School of Education, University College Dublin. As a philosopher, he studies how new technologies impact the values, goals and practices of education. He has expertise in applied ethics, philosophy of education, philosophy of technology and anticipatory research, and he specialises in the thought of John Dewey. Before joining UCD he was a Government of Ireland Postdoctoral Fellow at Dublin City University where he researched the ethical issues introduced by the use of AI in compulsory schooling. He did his PhD at DCU as part of the EU-funded PROTECT project. His research dealt with the influence of self-tracking technologies (e.g., Fitbits, Apple Watches) on users’ habits and self-knowledge.
Tatjana Welzer Družovec
Ethics and Accountability
Abstract:
Accountability in the digital age is not something we deal with only when something goes wrong; rather, it is a requirement that we must think about before, during, and after the selection of solutions and their implementation. Accountability does not mean blaming others but taking responsibility for making decisions and ensuring a safe and transparent online environment. Accountability is awareness, the only constant in a dynamic digital age, in which we must understand each other and ensure the development of contextual instruments, guidelines, and other policies. In doing so, we create awareness of responsibility in the global community with various stakeholders, impart knowledge about responsibility, and research and develop instruments for responsibility.
We will focus on accountability in connection with artificial intelligence, emphasizing ethics, including cultural awareness and professional codes of ethics. These principles govern the behavior of a person or group in a business environment.
Like values, professional ethics determine the rules of how a person should behave towards others and institutions in the professional environment. These rules are presented as Professional Codes of Ethics for individual fields and maintain the highest standards of professional conduct. Their common characteristics are avoiding conflicts of interest and violations of confidentiality, privacy, and the law; providing knowledge for advancing technology; using information prudently and maintaining the integrity of systems; and transferring fundamental ethical principles to computer professional activity. Of course, the rules are not an algorithm for solving ethical problems. They are only a basis for ethical decision-making and a demonstration of responsibility for supporting the public good.
Bio:
Tatjana Welzer Družovec is a researcher and a full professor at the University of Maribor, Faculty of Electrical Engineering and Computer Science. She is the head of the Data Technology Laboratory. Her research interests include cybersecurity including ethics, cultural and human factors of IT and cybersecurity, and intercultural communication. She is the national delegate for IFIP TC 11 and a member of the executive board of Slovenian Society Informatika. She has participated in numerous national and international research projects. Most international projects have been funded by the EC through various Horizon 2020 and Erasmus+ programs. She was a coordinator of the European University Alliance ATHENA at the University of Maribor and is still involved in its activities.
Her bibliography contains over 800 bibliographic items published in various scientific journals, including top JCR IF publications. She has published chapters in several books and has participated in numerous international conferences. She has been and is a member of the committees of many international conferences and steering committees. With her team, she has organized and co-organized over 20 international conferences in Slovenia, and many invited events at various conferences worldwide. For her work she received the title of Congress Ambassador of Slovenia in 2019.
Kristina Lapin
Raising Ethical Awareness to Combat Dark Patterns
Abstract:
Ethical design ensures users’ well-being, privacy, and autonomy in making informed decisions. Usability and accessibility design principles support ethics because they require essential aspects to be visible, understandable, controllable, recognizable, etc. Dark patterns intentionally violate these principles, making it possible to manipulate consumers into taking actions that do not correspond to their preferences. Dark patterns aim to modify the underlying choice architecture: they alter the decision space or manipulate the information flow to benefit service providers rather than users. Because these designs work in the short term, companies extract profits, harvest data, and limit customer choice before users face the consequences.
While maintaining professional ethics is the norm in other disciplines, UX design still requires more effort to raise awareness among users, designers, and stakeholders. The presentation will focus on categorizations of dark patterns that distinguish them according to their implementation methods and their consequences for users' well-being. Further, the factors raising the awareness of designers, stakeholders, and end-users will be reviewed. We will provide an overview of the legal regulations that are obligatory for stakeholders. A way to raise prospective designers’ awareness will be presented, using the example of how ethics topics are taught to Vilnius University Software Engineering students. Finally, examples of tools for raising users’ awareness of ethics breaches will be discussed.
Bio:
Kristina Lapin is an associate professor at Vilnius University, Faculty of Mathematics and Informatics, Department of Computer Science. She is the chair of the Board of the Faculty of Mathematics and Informatics. She is also the chair of the Software Engineering Bachelor's Study Program Committee. She teaches human-computer interaction to bachelor's students and User Experience Engineering to master's students in Software Engineering and Computer Science. She is the author of a Human-Computer Interaction textbook for Lithuanian students. Her research interests include human-computer interaction, the balancing of usability and security, and design ethics. She has participated in national and international research projects in the educational, aeronautics, virtual worlds, and cybersecurity thematic areas.
Rafael Pastor
Misinformation and risks in the digital society: Ethical use of AI and solutions
Abstract:
The use of generative artificial intelligence tools is becoming ever more widespread in our society, with particular relevance in content generation and in its use by teenagers on social media. The latent dangers of misinformation have grown exponentially with the massive use of social media, and it is in these spaces that the use of generative AI as a fundamental tool for misinformation has increased. This talk aims to present specific cases that demonstrate the application of this technology and its impact on the radicalization of opinions and extremism. In addition, these same tools are used illegally on these networks, leading to crimes of hate speech, bullying, harassment of young women, and even sexual blackmail. Some results from the project “Analysis of mobile applications from a data protection perspective: Cyber protection and cyber risks to citizens' information” will be presented, along with how AI can be used to detect these situations and take appropriate action.
Bio:
Rafael Pastor is a professor at UNED. He served as Director of Technological Innovation at UNED (responsible for developing the aLF learning platform and technological innovation processes) for five years (2004-2009), and as Director of the UNED Center for Innovation and Technological Development from 2009 to 2011, where he was responsible for managing the UNED virtual campus and developing the aLF learning platform. He is currently Director of the ETSI School of Computer Science. He has directed and participated in several teaching innovation projects, summer courses, and continuing education programs. Throughout his scientific career as a researcher, he has participated in more than 20 R&D projects funded by public calls for proposals (regional, national, and international), some of which are particularly relevant to companies and/or administrations at the international level. He has also participated as a speaker and active member in nearly 60 international/national conferences, indexed in impact lists such as CORE (ERA), DBLP, and IEEE Xplore. His research experience is also reflected in more than 70 publications in international journals, 60 of which have a JCR/SJR impact factor, with 45 of them indexed in the Journal Citation Reports (JCR). Additionally, he has been a member of several international scientific societies, including the IEEE (Education Society), where he holds the status of Senior Member. He is a collaborator/advisor to the AEPD (Spanish Data Protection Agency), through his participation in the advisory council “Espacio de Estudio sobre Inteligencia Artificial” (Study Space on Artificial Intelligence), and a member of the P2834 working group “Standard for Secure and Trusted Learning Systems”. He is one of eight Spanish researchers to hold an International Chair in Cybersecurity, funded by EU PTR funds and awarded through a competitive and public call by the National Cybersecurity Institute (INCIBE).
Afonso Seixas-Nunes
A Lawyer among Engineers! Autonomous Systems and ethical and legal questions which remain to be answered
Abstract:
The talk will focus on the moral dilemma of human control in autonomous systems and the moral responsibility of those who design them, as well as the legal implications.
Bio:
Afonso Seixas-Nunes, SJ, was born in Porto, Portugal, in 1973. He joined the Portuguese Province of the Society of Jesus (Jesuits) in 1998, after graduating in Law from the Portuguese Catholic University (Porto), and was ordained a priest in 2010. As a Jesuit, Afonso took his degree in Philosophy (Licence) at the Portuguese Catholic University (Braga), for which he was awarded the Prize Pe Vitorio de Sousa Alves, and holds a degree in Theology from the Pontificia Universita Gregoriana (Italy). After his theological studies, Afonso went to London and obtained a Master’s in International Law and Human Rights from the London School of Economics and Political Science (LSE, UK). In early 2019, Afonso completed his doctoral thesis in International Humanitarian Law at the School of Law of the University of Essex (UK), entitled The Legitimacy and Accountability for the Deployment of Autonomous Weapon System under International Humanitarian Law, subsequently published by CUP in 2022. In September 2018, Afonso became a post-doc research fellow of the Oxford Institute for Ethics, Law and Armed Conflict (ELAC, University of Oxford), directed by Professor Dapo Akande at the Blavatnik School of Government. In August 2021, Afonso joined Saint Louis University Law School as an Associate Professor of Public International Law and Laws of Armed Conflict. His research focuses on AI technologies and the laws of armed conflict, and on the intersection of Outer Space Law and private corporations.