AI is everywhere; is it in qualitative research, too? Potentials, Pitfalls, and Open-Source Solutions (Part 2)

In this second part of our exploration into AI’s role in qualitative research (you can find part one here), we’ll focus on the concerns and issues that come with its growing presence, along with some tips for using AI tools like Taguette, Weft QDA, and AcademiaOS while avoiding the pitfalls.

Concerns and Pitfalls:

One concern is the potential loss of human judgment. Sure, AI can code and analyze data, but it can’t truly understand the subtleties and emotional undertones of human experience. Qualitative research relies on context and interpretation, and there’s a real risk that AI might oversimplify these nuances (Denecke et al., 2023; Arbelaez Ossa et al., 2024). Also, some researchers fear that by automating tasks like coding, we might be reducing our own engagement with the data (Schmitt, 2024; Zhang et al., 2024).

Then, there’s the issue of bias. AI models are trained on existing datasets, which means they can reflect the biases in those datasets. When applied to qualitative research, these biases can skew the analysis, particularly in fields like sociology or anthropology, where cultural sensitivity is critical (Denecke et al., 2023; Kooli, 2023). The transparency of AI systems is also a challenge. Many AI tools are “black boxes,” meaning researchers don’t fully understand how the algorithms arrive at certain conclusions (Arbelaez Ossa et al., 2024), which can undermine the trust in and integrity of qualitative research.

There’s also a concern about de-skilling. As AI takes over tasks like data coding and pattern recognition, researchers may lose the opportunity to develop essential analytical skills (Christou, 2023; Arbelaez Ossa et al., 2024), and early-career researchers might miss out on learning experiences that deepen their understanding of the field. A larger looming risk is that AI could standardize research outcomes: since AI tends to generate results based on patterns, it might reinforce existing trends rather than offer new or challenging perspectives (Arbelaez Ossa et al., 2024), which would be detrimental to the intellectual landscape of qualitative research.

Finally, there’s the problem of data privacy. AI systems often process data without clarity about how it is used or stored, which makes privacy and consent critical concerns (Denecke et al., 2023). These, and likely many more, are concerns worth weighing when utilizing AI for qualitative research.

Some Tips for Balancing the Use of AI

  AI as a Complementary Tool 

AI works best when it’s used as a complement to human expertise, not a replacement. Tasks like coding and pattern recognition can be automated, but researchers should retain control over interpretation and analysis. AI should assist, not dictate (Schmitt, 2024; Christou, 2023).

  Mitigating the Risks of Bias and Automation

To counter the risks of bias and over-reliance on automation, AI-generated outputs should be combined with manual review: we harness AI’s efficiency while ensuring that the insights remain contextually accurate and ethically sound (Arbelaez Ossa et al., 2024).

  AI Literacy and Critical Engagement  

AI’s potential in qualitative research will only grow as the technology evolves. To understand the future potentials and pitfalls of its integration, researchers need to learn about AI and critically engage with its development. The rise of open-source tools could make AI increasingly accessible, but it’s up to the research community to ensure these tools are used responsibly.

References:

12 Data analysis tools for qualitative research. (2024, January 4). PhD Guidance. https://www.phdguidance.org/data-analysis-tools-for-qualitative-research/

Arbelaez Ossa, L., Lorenzini, G., Milford, S. R., Shaw, D., Elger, B. S., & Rost, M. (2024). Integrating ethics in AI development: A qualitative study. BMC Medical Ethics, 25(1), 10. https://doi.org/10.1186/s12910-023-01000-0

Bishop, L. (2023). A Computer Wrote this Paper: What ChatGPT Means for Education, Research, and Writing. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4338981

Christou, P. (2023). A Critical Perspective Over Whether and How to Acknowledge the Use of Artificial Intelligence (AI) in Qualitative Studies. The Qualitative Report, 28(7), 1981–1991. https://doi.org/10.46743/2160-3715/2023.6407

Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.

Denecke, K., Glauser, R., & Reichenpfader, D. (2023). Assessing the Potential and Risks of AI-Based Tools in Higher Education: Results from an eSurvey and SWOT Analysis. Trends in Higher Education, 2(4), Article 4. https://doi.org/10.3390/higheredu2040039

Hassani, H., & Silva, E. S. (2023). The Role of ChatGPT in Data Science: How AI-Assisted Conversational Interfaces Are Revolutionizing the Field. Big Data and Cognitive Computing, 7(2), 62. https://doi.org/10.3390/bdcc7020062

Schmitt, B. (2024). Transforming qualitative research in phygital settings: The role of generative AI. Qualitative Market Research: An International Journal, 27(3), 523–526. https://doi.org/10.1108/QMR-08-2023-0107

Übellacker, T. (n.d.). AcademiaOS: Automating Grounded Theory Development in Qualitative Research with Large Language Models. Ar5iv. Retrieved October 5, 2024, from https://ar5iv.labs.arxiv.org/html/2403.08844

Zhang, H., Wu, C., Xie, J., Lyu, Y., & Cai, J. (n.d.). Redefining Qualitative Analysis in the AI Era: Utilizing ChatGPT for Efficient Thematic Analysis. Ar5iv. Retrieved October 5, 2024, from https://ar5iv.labs.arxiv.org/html/2309.10771