Abstract 2022

The ERCIM Ethics Working Group is organizing a workshop/seminar on October 17-18, 2022, targeted at researchers and Research Ethics Boards (REBs). The event will consist of keynotes, presentations, tutorials and interactive sessions, and will provide ample time for open discussions. Several outcomes are envisioned, including some that may be directed towards policy makers.

The responsible and ethical conduct of research in science and engineering is critical for excellence, socially relevant impact and public trust. The practice of ethical and regulatory review of research grew out of revelations, in the early 1970s, of highly harmful so-called medical research involving human subjects; despite the Nuremberg Code (1947), the adoption of ethical principles in the conduct of human studies had until then been slow to develop. Institutional Review Boards and the Belmont Report (1979) subsequently emerged in the US, and the latter remains today, along with the Declaration of Helsinki (1964), an essential reference for Research Ethics Boards (REBs), which ensure that appropriate steps are taken to protect the rights and welfare of human subjects involved in research studies around the world. Regulations in the domain of biomedical research have been fairly effective in balancing the progress of research with the protection of research subjects, and have largely contributed to creating a culture of ethical awareness around research involving human subjects.

Reflecting upon the ethical issues they faced, including some that existing oversight authorities may have been unaware of or that went beyond their usual purview, a group of computer science researchers described how the guidelines of the Belmont Report can be usefully applied to research involving information and communication technology. The resulting Menlo Report (2012) adapted the three basic ethical principles established in the Belmont Report - Respect for Persons/Autonomy, Beneficence, and Justice - to the context of computer security research, and added a fourth principle, Respect for Law and Public Interest. A key contribution of the Menlo Report is the call for a comprehensive stakeholder analysis before applying any of these principles in the complex setting of computer science and ICT research: while not the direct subjects of the research, secondary stakeholders may also be harmed, and they too have a right to autonomy and justice.

Digital and connected technologies have become increasingly intertwined with our individual and collective daily lives, mediating our communications, profiling our behaviours, changing how we think, live and act, and creating new tensions that challenge the application of these guiding ethical principles. Research in digital sciences -- including computer science, automatic control, robotics and applied mathematics -- raises many new ethical challenges arising from interactions between humans and deep tech. Yet in spite of previous efforts, researchers in these fields are often ill-equipped (and sometimes reluctant) to deal with the potential ethical implications of their work, and the current research framework is ill-suited to overseeing the unique ethical risks emerging in fields such as data science and AI research. Typical questions in this context include:

About ethics in the era of big data and autonomous systems:

  • What are the biggest ethical challenges posed by new, data-intensive research areas? How to proceed when both benefit and risk are typically intangible?
  • How can one circumscribe such issues as bias and the prospect of societal harm that increasingly plague AI research? Is every line of research in digital sciences worth pursuing and every piece of digital tech worth building in view of potentially very harmful foreseen applications and use cases?
  • How can the ethical risks to participants and other stakeholders be predicted and controlled as digital technologies change at such a fast pace?
  • How can the observance of ethical considerations be guaranteed beyond ex-ante subject consent or REB approval in computer science research projects, where researcher-subject relationships tend to be disconnected and dispersed, data sources proliferate, and there is an inherent overlap between research and operations?
  • How can crowdsourcing/micro tasking services, often an integral part of many data-intensive research programs, be used in a responsible and ethical way?
  • Who is responsible if the machine malfunctions: the designer (who may be a researcher), the owner of the data, the owner of the system, its user, or perhaps the system itself?
  • What is the researcher's responsibility for the behaviors and actions of a robot or autonomous system that they have helped to design?

About best practices, training and awareness:

  • What are the best practices researchers in digital sciences can follow to avoid unintended consequences of their work?
  • How to raise awareness at an early stage of researchers' careers and incentivise them to be more ethically prepared? Do they need to be made accountable somehow?
  • Is open science really the ideal context for ethical and responsible research to flourish?
  • How can ethical considerations be incorporated at the earliest stages of a research project?
  • How to foster regular and open discussions on questions of ethics and responsibility in research communities, in particular data-driven research communities?
  • Is there such a thing as a reproducibility crisis in CS research?

About ethical review and regulation:

  • How can researchers and REBs work to promote public trust in research in digital sciences, as IRBs have in human subject research?
  • Should we reconsider what it means to be a REB in the digital age? How to adapt REB requirements to the specific context of research in digital sciences?
  • Do we have a clear standard of what is ethically permissible in research in digital sciences? Can deceptive techniques be used and to what extent?
  • Is the requirement introduced by major computer science research conferences that submissions include an Impact Statement (describing ethical aspects of the work and future societal consequences) and undergo review by an internal Ethics Committee an adequate response?
  • How does such conference-level ethics review articulate with the work of REBs?
  • Will or should part of computer science research ethics eventually move from soft law oversight to binding regulations? What parallel can be drawn or lessons be learned from historical developments in the field of medical research?
  • Should REBs merely review the ethics of research in digital sciences itself or also consider the broader research enterprise and context within which that research is situated?
  • The General Data Protection Regulation (GDPR), which gives a high level of protection to individuals’ rights and personal data, provides for a framework enabling derogations (to be introduced by EU or Member State law) from these rights where scientific research is concerned. Are such derogations adequate and flexible enough to encompass the whole nature, process and demands of scientific research in digital sciences? Conversely, can a full implementation of these derogations render the research unethical and at odds with individuals’ interests?

About innovation and working with other fields:

  • In terms of research ethics, what are the specifics of working in an interdisciplinary setting, in particular in digital health research? Are the best practices, processes and tools of the fields under consideration easy to reconcile?
  • Is doing research in digital sciences with industrial partners a long and bumpy road?
  • In a world where many researchers set up their own private companies, how to regulate situations where such companies become the conduit for (data-intensive) research conducted outside the ethical review of the researchers' home institutions?
  • Should formal ethical review extend to research-led innovation? How will research -- that has to comply with increasingly complex ethical questions and procedures -- be affected if a clear distinction is made with innovation?
  • How can the standard practice of digital platform innovation, in which users are experimented on in exchange for improved or enhanced free services, be reconciled with the Menlo Report's observation that such practices fail to satisfy the voluntary participation element of informed consent?

If you are interested in participating or would like to receive more information, please contact the organisers.