If you’re a qualitative researcher, chances are high that you’ve either used generative Artificial Intelligence (GenAI) tools, considered using them, or at the very least, discussed them with your colleagues. Whether you’re still sitting on the fence or already deep into experimenting with these tools, one thing is clear: GenAI is making its presence felt in the world of qualitative research.
We recently had the opportunity to watch a thought-provoking presentation from the 2024 annual SRA conference by Christina Silver from the University of Surrey and Steve Wright from the University of Central Lancashire, titled “The Good, the Bad and the Ugly of AI in Qualitative Analysis”.
In this blog, we summarise some of the key insights from the presentation and share some practical tips on how to critically evaluate if and how GenAI should be part of your qualitative research process.
Recent developments in the field of GenAI and qualitative research
Conversations with clients and online debates suggest that qualitative researchers fall largely into three camps on this topic:
- the advocates who are using AI in qualitative research and analysis
- the critics who oppose its usage
- those who are unaware that these tools exist.
In recent years, the landscape has changed dramatically. The launch of tools like ChatGPT by OpenAI and the rapid development of GenAI applications have begun reshaping how we approach qualitative analysis. We’re seeing this happen in three main ways:
- AI-driven apps designed for specific tasks in research workflows
- Chatbots (e.g., ChatGPT, Claude, Gemini and Microsoft Copilot)
- Integrations with qualitative data software like ATLAS.ti and MAXQDA
The presentation noted that the use of computational linguistics and AI in language analysis is not entirely new. Researchers have been using such tools for statistical modelling, data mining, sentiment analysis, and content analysis, for example, analysing how different newspapers use language to describe various ethnic groups or analysing staff feedback from surveys.
Qualitative analysis software like NVivo, Quirkos, ATLAS.ti, and MAXQDA has been assisting researchers in the interpretive analysis of qualitative data since the mid-1980s. Researchers use these tools to identify themes, construct narratives, and support meaning-making, a process they previously carried out using spreadsheets.
While the tools are evolving, the right response isn’t to embrace or reject them blindly. It is to critically engage with the discourse around their use.
Using GenAI in qualitative research
Currently, developers market and apply GenAI tools across five key activities in the qualitative research workflow:
- Generating – Creating outputs such as themes, slides or even interview questions.
- Speech-to-Text – Automating transcription with increasing accuracy and speed (e.g., Transana, Quirkos).
- Conversing – Engaging with interview data via chat interfaces to compare responses and generate queries.
- Summarising – Producing high-level summaries of findings or data to support write-ups.
- Labelling – Assisting with tagging and coding of qualitative data.
Developers promote these capabilities with ambitious promises:
- CoLoop: “Discover hidden themes” and “write slides faster”
- ATLAS.ti: Offers “light-speed insights”
- MAXQDA: Takes a more measured approach, promising to “streamline research journeys” and “support deeper researcher-led analysis”
However, each activity and tool presents its own challenges and implications. Therefore, it is crucial to analyse them through the lens of quality, time and cost.
What to consider before using GenAI in qualitative research
1. The lack of accurate contextual knowledge and representation of diverse languages and perspectives
Without a doubt, GenAI-powered transcription platforms have become increasingly accurate, allowing researchers to save considerable time. However, the quality of transcriptions can vary with the speaker’s language, accent and dialect. Researchers frequently work with people from diverse backgrounds who speak less widely spoken languages, so transcription quality may fall short. The output may also lack contextual information that human speakers of those languages could offer. It therefore remains extremely important for researchers to double-check AI-generated transcriptions: to verify accuracy, gather contextual details, and make sense of the transcribed material.
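One practical way to spot-check an AI transcript is to correct a short sample by hand and compute the word error rate (WER) against the machine output, a standard measure of transcription accuracy. A minimal sketch in Python; the transcript sentences here are illustrative, not from any real interview:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed via Levenshtein distance over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# Human-corrected reference vs. raw AI output (illustrative)
human = "the clinic was closed on weekends so we travelled to town"
ai = "the clinic was closed on weekends so we traveled to tow"
print(f"WER: {word_error_rate(human, ai):.2f}")
```

A high WER on a sample from one speaker but not another is a concrete signal that the tool handles some accents or dialects worse, and that those transcripts need closer human review.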
2. Generating negative environmental impact
The high efficiency promised by these tools comes at an environmental cost. GenAI models require huge amounts of computational power for training and fine-tuning, resulting in high consumption of natural resources such as water and electricity. To put this into perspective, a simple ChatGPT query consumes about five times more electricity than a web search. Before using GenAI for your research, it is therefore important to weigh its environmental impact and make an informed decision.
3. Limited transparency and privacy offered by conversational AI tools
One of the emerging uses of AI in qualitative research is through chat-based interfaces that allow researchers to “converse” with their data: the tools can compare data across datasets and generate insights. While these tools have an intuitive interface and can quickly answer your questions, they also create a virtual “black box”, offering limited transparency about how they process the data and draw conclusions.
Before using them, ask yourself some key ethical questions: what kinds of data were used to train them, whether they are replicating other researchers’ work without giving credit, and whether the information they generate carries an embedded bias towards Eurocentric worldviews. Additionally, if you are working with confidential data, check the privacy policy of these tools to ensure you are not leaking sensitive information about your interviewees.
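A simple precaution before pasting interview data into any chat tool is to redact obvious identifiers first. A minimal sketch using Python regular expressions; the patterns, placeholder labels, and participant names are illustrative assumptions, and real redaction of qualitative data usually needs more than regex (names embedded in context, places, job titles):

```python
import re

# Hypothetical project-specific list of participant names to strip
PARTICIPANT_NAMES = ["Amara", "Tomasz", "Priya"]

def redact(text):
    """Replace common identifiers with neutral placeholders."""
    # Email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # UK-style phone numbers (illustrative pattern only)
    text = re.sub(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b", "[PHONE]", text)
    # Known participant names, matched as whole words
    for name in PARTICIPANT_NAMES:
        text = re.sub(rf"\b{re.escape(name)}\b", "[NAME]", text)
    return text

sample = "Amara said to email her at amara.k@example.org or call 07700 900123."
print(redact(sample))
# → [NAME] said to email her at [EMAIL] or call [PHONE].
```

Even with redaction in place, the tool’s privacy policy and data-retention terms still need checking: placeholders reduce, but do not eliminate, the risk of re-identification from context.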
4. Inconsistency in grasping nuance in qualitative research
GenAI summaries can be useful in assisting the researcher’s work by condensing large volumes of data into digestible information. In theory, these tools act as algorithmic extensions of the researcher’s intent. However, their reliability remains inconsistent.
While they can capture surface-level content well, they may falter in grasping nuance, tone or significance. There have been instances of tools making basic factual errors, such as miscounting letters in a word, raising questions about their readiness to handle more complex qualitative analysis. Ultimately, the ability to determine what a finding means and why it matters remains a task best handled by researchers themselves.
5. Inability to gain trust and build rapport with interviewees
Some platforms now offer AI-driven interviewing tools, such as Ailyse, which can conduct interviews on your behalf. These allow researchers to collect data from large samples within a short period of time, but they raise critical concerns about the quality and integrity of data collection. Building rapport, understanding cues, and creating a safe space for interviewees are core aspects of qualitative interviewing; they are difficult, if not impossible, to replicate with AI. Ethical considerations around consent, empathy, and context come into play, as does the issue of interpretation. A machine can ask questions, but it cannot truly listen or respond with human understanding. Thus, while AI may assist with parts of the interview process, it cannot, and should not, replace the researcher in roles that require sensitivity and human connection.
Conclusion
Just as you would with any research tool or method, the recommendation is to thoroughly evaluate any AI tool before integrating it into your qualitative research process. While these tools can offer significant time savings, their use must be guided by ethical reflection and a commitment to maintaining the quality and integrity of your research. Ask yourself critical questions about why you’re using the tool, how it might influence your findings, and whether it truly adds value to your work.
It’s not about fearing AI. It’s about engaging with it thoughtfully. Take the time to understand how a tool is developed, who developed it, and how it functions. Critically exploring these tools allows you to assess their potential benefits without compromising the essence of your research or the depth of your analysis.
This article draws on:
- Christina Silver and Steve Wright, The Good, the Bad and the Ugly of AI in Qualitative Analysis (2024), SRA Annual Conference
- Dr. Assad Abbas, Western Bias in AI: Why Global Perspectives Are Missing (2025), Unite AI
- Christina Silver, AI myths and the use of AI-Assisted tools for Qualitative Analysis, qualitative data analysis services
- Alokya Kanungo, The Green Dilemma: Can AI Fulfil Its Potential Without Harming the Environment? (2023), Earth.org
- Adam Zewe, Explained: Generative AI’s environmental impact (2025), MIT News
- Sanksshep Mahendra, Dangers of AI – Lack of Transparency (2023), Artificial Intelligence +
- Christine Ortiz, The Silent Hand of Eurocentrism in AI (2023), LinkedIn
- Julia Ligteringen, AI Can’t Handle Ambiguity – And That’s a Big Problem for Your Research, Leximancer
- David Bleines, Camille Stengel, and Natalie Lai, Can AI Drive Innovation in Qualitative Research? Here’s What We’ve Learnt (2024), Nesta