What happens when AI meets research ethics?

Xavier Roeseler
February 28, 2025

Most conferences today feature at least some discussion of AI, and our Customer Day 2025 was no different, with the relationship between AI and research ethics being one of the most requested topics among attendees. The influence of AI continues to grow across our working and daily lives, and it is already showing the potential to fundamentally shift the way we do research.

Against this backdrop, we invited Stephanie Armstrong (University of Lincoln), Filipa Vance (University of Bath), Eleni Flack-Davison (University of the Witwatersrand), and Rachel Gibson (University of Salford) to join a panel discussion on AI in research ethics, chaired by our Head of Product, Vydehi Chinta. Below, we’ve set out some of the key highlights from this discussion:

  • Different institutions are taking a variety of approaches to address this novel challenge
  • The fast pace of AI development presents a challenge for universities trying to keep up
  • The use of AI when personal data is involved raises considerable data protection concerns
  • There is a clear need to balance oversight of AI with the considerable potential it has to enhance innovation
  • While AI brings many opportunities, we should never overlook the vital role of humans in research ethics  

The role of ethics committees

Traditionally, ethics committees have focused on safeguarding humans and animals in research, but the growing presence of AI in research creates both new opportunities and risks, with different universities taking different approaches.  

During the discussion, our panellists shared a range of approaches universities can take to manage AI considerations within the context of ethics committees:

  • Establishing a specific ethics committee for reviewing the ethical implications of research conducted using AI tools
  • Creating AI policy guidelines for research office staff and researchers
  • Including AI experts on each committee to ensure oversight, even when AI forms only part of a larger project
  • Treating AI as part of the broader ethics committee role, guiding applicants to consider the ethical dilemmas posed by their research, methods and tools

As our panellists’ accounts showed, there is not yet a one-size-fits-all approach, and different institutions are charting their own courses through this novel and changing landscape. However, it’s clear that AI is forming a greater part of universities’ considerations as its presence grows across the research space.

Are institutions moving quickly enough?

In light of these differing approaches, and the rapid, seemingly daily, developments in AI technology, the question inevitably arises as to whether institutions are moving quickly enough to plug the gaps in their existing processes. Our panellists believed that progress could be improved significantly by harnessing universities’ own internal subject matter expertise, with AI experts from across faculties brought on board to help draft the necessary frameworks and policies.

AI is already being used to improve many aspects of research at universities. However, institutions have not yet settled on the best way to ensure this support does not hinder our ability to learn from our own mistakes. As one panellist put it: “we need to remain confident in our own individual abilities”.

Data protection risks

AI systems rely on vast amounts of data to function and improve, often collecting personal information from users to enhance accuracy and inform decision-making. Our discussion highlighted the growing concern across institutions globally about inputting personal data from research into LLMs such as ChatGPT, given the inability to know what happens to that data once it has been entered.

To mitigate this risk, our panellists agreed that it is incumbent on ethics committees to conduct continuous risk assessments and advise their institutions accordingly, with a particular eye on novel and future technologies.

Balancing innovation and oversight

To avoid a reactionary rejection of AI, institutions need to consider how to balance its considerable potential with the oversight required for responsible, transparent, and equitable use of the technology. For example, should all research involving any use of AI go through extensive committee review, or only high-risk projects?

For our panellists, the watchword was “proportionality”, with the level of oversight corresponding to the nature and risk of the work itself.

As our panel discussed, the role of universities in reviewing projects is to be “comprehensive, not preventative”: we all still want research to progress and lead to a brighter future for everyone; we simply want to check that we are being safe and responsible while doing so.

The continuing importance of humanity in ethics committees

While AI continues to advance, it remains crucial to acknowledge the importance of human judgement and, well, humanity in research ethics. Human involvement remains essential to ensuring that ethical principles are given appropriate consideration.

Regularly revisiting ethical frameworks and engaging in ongoing dialogue between researchers, regulators, and the public will be essential to keeping pace with emerging AI technologies. This will also ensure development continues to align with our own ethical values. This is how we can develop AI in a beneficial way for humanity while continuing to protect human dignity, value, and genius.
