Across every field I know, including urban studies, environmental science, design, and social research, the question of AI, its possibilities and its challenges, has become unavoidable. We tend to fall into one of two camps: rushing to embrace it or fiercely refusing it, both of which reproduce the public discourse of AI as either salvation or doom. Yet AI is already sitting quietly in our spreadsheets and sensors, our mailboxes and search results, and more visibly in our writing, mapping, and predictive tools. What feels under-examined is not what AI can do, but what it does to the way we think, reason, and collaborate when we integrate it into our work.

This piece is part of a two-part series in which I lay out strategies for using AI in research, combining theoretical and technical concepts with recent research and projects. The stakes are high because, for research that draws on both quantitative and qualitative methodologies, AI creates a productive tension: it expands our analytical reach but risks shrinking our interpretive abilities. Machine learning systems can process mountains of data, but they don’t know what that data means. They can predict outcomes but not weigh their moral or political implications.
Rethinking “Human in the Loop”
The idea of Human in the Loop (HITL) is more than a technical term; it is a broader epistemic stance: that humans must remain active interpreters, mediators, and co-producers of knowledge within any AI-assisted inquiry. Traditionally, HITL meant that a human supervised an automated system: approving an action, validating an output, correcting an error. In engineering, it was about efficiency, a safeguard against automation going rogue. But this loop can be re-imagined as a space of dialogue between human judgment and machine patterning, one that keeps data tethered to context and grounded in meaning. Rather than treating AI systems as autonomous decision-makers, HITL reasserts the value of human judgment, contextual reasoning, and ethical accountability throughout the research process, from data selection to interpretation and dissemination. It is a way of saying that AI should expand our interpretive capacity, not replace it.
In practice, the loop isn’t a chain of command but a conversation. The machine identifies patterns; the human interprets, questions, and adds meaning. Quantitative insights rely on qualitative understanding to become knowledge; data points become stories when someone asks why this pattern matters, to whom, and under what conditions. Most of the examples here come from urban and environmental research that has integrated AI tools over the past three years, focusing on climate resilience and community-engaged research.
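To make this concrete, here is a minimal sketch, in Python, of what that conversation can look like in code: the machine proposes candidate patterns, and nothing becomes a finding until a person has said what it means, to whom it matters, and whether it should be kept. Everything in it, from the clustering step to the field names, is a hypothetical illustration rather than a description of any project discussed here.

```python
# A minimal, illustrative human-in-the-loop step: the machine proposes
# patterns; a person attaches context and meaning before any pattern
# is treated as a finding. All names here are hypothetical placeholders.

from dataclasses import dataclass

import numpy as np
from sklearn.cluster import KMeans


@dataclass
class CandidatePattern:
    cluster_id: int
    size: int
    centroid: list
    interpretation: str = ""    # written by a human, not generated by the model
    matters_to_whom: str = ""   # the "to whom, under what conditions" question
    accepted: bool = False      # nothing counts as a finding until someone says so


def machine_proposes(readings: np.ndarray, k: int = 3) -> list[CandidatePattern]:
    """The machine's half of the conversation: surface candidate patterns."""
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(readings)
    return [
        CandidatePattern(
            cluster_id=cid,
            size=int((model.labels_ == cid).sum()),
            centroid=model.cluster_centers_[cid].round(2).tolist(),
        )
        for cid in range(k)
    ]


def human_interprets(patterns: list[CandidatePattern]) -> list[CandidatePattern]:
    """The human's half: ask why each pattern matters, and to whom."""
    for p in patterns:
        print(f"Cluster {p.cluster_id}: {p.size} readings, centroid {p.centroid}")
        p.interpretation = input("What might this pattern mean in context? ")
        p.matters_to_whom = input("Who does it matter to, and under what conditions? ")
        p.accepted = input("Keep as a finding? (y/n) ").strip().lower() == "y"
    return [p for p in patterns if p.accepted]
```

The point of the sketch is the order of operations: the model’s output is an input to human judgment, not a substitute for it.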
You can see this principle in projects like the Pulo Aceh “AI for Blue Economy” initiative in Indonesia, where local fishers co-designed sensor networks with data scientists. The community made decisions about where sensors should go, how to interpret the readings, and how forecasts might actually support daily life (Diani, 2025). Or, consider the Appalachia Flood Resilience Dashboard, where researchers used generative AI to integrate real-time flood data and supplemented it with a Community Engagement Studio where residents and emergency responders shaped how the system looked and worked (Mills, 2024). The AI handled the automation; the humans defined its purpose. In both cases, human input and insight are not checkmarks in a workflow; they’re the interpretive spine of the research process.
Three Dimensions of Human in the Loop Research
When we keep humans actively engaged in AI-assisted research, three critical dimensions emerge:
- The Interpretive Dimension — Human actors translate computational outputs into contextually meaningful insights. They ask why a model predicts what it does and what it implies for lived realities. Interpretation ensures that AI supports reasoning rather than replacing it.
- The Ethical Dimension — Humans in the loop assess the moral and distributive consequences of automated decisions. They keep sight of who benefits, who bears risk, and whose voices are missing. Ethical oversight becomes a form of continual auditing, preventing biases and inequities from being naturalized within the model.
- The Participatory Dimension — Communities become co-designers and co-owners of AI systems, not merely data sources. This transforms research from extraction to collaboration, ensuring that knowledge production aligns with community goals and capacities.
When we think this way, HITL stops being a defensive mechanism (“make sure a human signs off”) and becomes a generative ethic. It is about making AI relational, situating it inside webs of human understanding rather than apart from them.
The Loop as a Space of Care
To keep humans in the loop is, at its core, an act of care. It’s a reminder that research, especially research augmented by AI, is not only about producing insights but about maintaining relationships: between data and meaning, between researchers and communities, between technology and justice. Feminist scholars have long argued that knowledge is not created through detachment or neutrality but through attentiveness and situated responsibility, what Carol Gilligan first described as an ethic of care grounded in relational understanding rather than universal rules (Gilligan, 1982). Seen through that lens, the “loop” is a space of ethical interdependence. It connects the coder to the community, the dataset to the lives it represents, and the researcher to the social worlds their work might transform.
In recent years, feminist AI theorists have expanded this understanding into what has been called a data ethics of care, urging designers and researchers to see data work as a form of relational maintenance. It’s the recognition that our tools are never neutral; they reflect the care, or lack thereof, invested in their creation. Feminist scholars like Eleanor Drage and Catherine D’Ignazio have shown how re-centering care and accountability in AI research transforms it from a managerial pursuit into a collaborative practice of response-ability, the ability to respond (Drage & Frabetti, 2023; D’Ignazio & Klein, 2020).
This framework of care helps us see HITL not as technical oversight but as a commitment to plural perspectives and community-led inquiry that treats participation not as supervision but as co-ownership of knowledge.
When the Human Leaves the Loop
But what happens when that interpretive link disappears? If HITL represents an ethics of co-production, then the absence of that loop reveals what happens when judgment, context, and lived experience are stripped out of research and decision-making. Across climate and urban applications, the consequences of human withdrawal are visible in algorithmic bias, opaque decision-making, and what scholars increasingly call epistemic injustice: the silencing of local and situated knowledge in favor of machine-generated abstraction (Chamuah, 2023).
Ironically, we didn’t even need generative AI to leave humans out of the loop. One glaring example is how, after Hurricane Katrina, recovery models built on “neutral” datasets failed to capture the realities of Black and low-income neighborhoods in New Orleans. The algorithms directed resources toward wealthier, whiter areas, effectively re-coding structural racism as mathematical fairness (Sustainability Directory, 2025). A similar dynamic emerged in AI-driven “climate-smart agriculture” in India, where platforms promised efficiency through crop recommendations designed to optimize planting decisions from satellite and meteorological data (Malik, 2025). These systems ended up deskilling farmers and eroding traditional ecological knowledge. Local expertise, attuned to soil, seed, and seasonal rhythms, was displaced by algorithmic prescriptions that often prioritized commercial seed and insurance interests (Tsosie, 2012). In this case, removing humans from the loop didn’t improve efficiency or productivity; it simply redefined expertise, relegating farmers and their generational wisdom to passive users of a system they could neither question nor correct.
These examples raise a deeper question within the phrase “Human in the Loop”: which humans are we referring to? The term itself comes from military control systems, where the “human” was simply an operator ensuring that automated weapons obeyed commands. When imported into research, that legacy risks reducing oversight to a managerial act, one expert supervising the machine rather than diverse publics shaping what the system is for. To be meaningful today, Human in the Loop has to move beyond that narrow, technocratic origin toward a more democratic and grounded interpretation. The “human” cannot just be the researcher or data engineer; it must include the communities and participants whose lives the research touches. The loop should be a space of shared authorship where those once treated as research subjects become collaborators who help define the purpose, ethics, and benefits of AI.
This shift mirrors a broader transformation already underway in the social and environmental sciences: a move away from extractive, top-down approaches toward community-engaged and participatory research. In these frameworks, knowledge is co-produced through transparency, reciprocity, and shared decision-making. Translated into AI, this means designing systems that align with community values, training models on local knowledge, giving residents ownership of their data, and ensuring that the technologies deliver tangible benefits to those most affected by climate or infrastructural risk.
The big risk is that, in the rush to embrace AI tools, we fuel a faster, less meaningful, and less reliable version of the old extractive logic. When the question of which humans goes unasked, AI simply repeats older patterns of extraction: data is taken without consent, models predict community behavior without input, and algorithmic “findings” justify decisions that communities never agreed to. We see this, too, in generative models that produce confident but fabricated environmental statistics, misinformation presented with the authority of truth.
Taken together, these cases illustrate that human absence is not a neutral design choice; it is a moral and epistemological failure. Without humans, and more precisely, without the right humans, in the loop, algorithms interpret social and ecological realities as if they were static, quantifiable, and apolitical.
Reclaiming Oversight as Collaboration
When humans re-enter the loop, something shifts. Oversight becomes collaboration.
The Pulo Aceh project is again instructive: the sensors and machine-learning models worked because they were co-designed by those whose lives depend on the ocean. The system’s intelligence was a blend of data and memory, technology and tradition (Diani, 2025). Likewise, the Appalachia dashboard wasn’t just a piece of software; it was a civic process. Residents helped define what “useful data” meant. The loop became a shared space for interpreting risk and action, not just for generating predictions (Mills, 2024).
These examples demonstrate HITL as a generative ethic, inviting us to treat research as a shared act of sense-making, one that requires care, patience, and plurality. The challenge now is to scale this sensibility into how we structure research itself. As AI seeps into every discipline, we need to move from ad-hoc ethics to institutional reflexivity: rules, habits, and cultures that keep the loop open.
That could mean building formal interpretive checkpoints into AI-driven projects, where results are reviewed by interdisciplinary teams not just for accuracy but for meaning. It could involve embedding community review boards into environmental modeling or pairing data scientists with qualitative researchers who can translate findings back into lived experiences. The loop, in other words, should expand horizontally to encompass communities and society as a whole.
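As a thought experiment, a checkpoint like that could even be made literal in the research pipeline itself. The sketch below, with entirely hypothetical roles and field names, holds a model result until a data scientist, a qualitative researcher, and a community reviewer have each recorded what the result means, not just whether it is accurate.

```python
# A rough sketch of a formal interpretive checkpoint: a result cannot move
# downstream until reviewers in every required role have signed off on its
# meaning, not just its accuracy. Roles and fields are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone

REQUIRED_ROLES = {"data_scientist", "qualitative_researcher", "community_reviewer"}


@dataclass
class Review:
    reviewer: str
    role: str
    note: str  # what the result means, for whom, and with what caveats
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


@dataclass
class InterpretiveCheckpoint:
    result_id: str
    summary: str
    reviews: list[Review] = field(default_factory=list)

    def add_review(self, reviewer: str, role: str, note: str) -> None:
        self.reviews.append(Review(reviewer, role, note))

    def missing_roles(self) -> set[str]:
        return REQUIRED_ROLES - {r.role for r in self.reviews}

    def release(self) -> None:
        """Refuse to release the result until every required perspective is on record."""
        if self.missing_roles():
            raise RuntimeError(
                f"Result {self.result_id} held: missing review from {sorted(self.missing_roles())}"
            )
        print(f"Result {self.result_id} released with {len(self.reviews)} interpretive reviews.")
```

The design choice that matters is the refusal built into release(): an accuracy check alone does not clear the gate; the qualitative and community perspectives are structurally required before a result counts.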
Institutions can also adopt clearer protocols around transparency and consent. Some of the most effective AI projects in climate and infrastructure have relied on open communication between universities, city agencies, and communities, turning oversight into partnership rather than surveillance.
Finally, keeping humans in the loop also means keeping benefits aligned. AI research shouldn’t simply study communities; it should serve them. Community-based participatory research already does this in the social sciences. HITL can bring that ethos into the algorithmic realm, ensuring that machine learning projects generate tangible value for those whose data they rely on.
Human in the Loop, reimagined ethically, becomes an antidote to algorithmic abstraction, a commitment to maintaining the relationships that make knowledge meaningful, accountable, and just.
References
Chamuah, A. (2023). AI, Climate Adaptation, and Epistemic Injustice. Platypus. https://blog.castac.org/2023/07/ai-climate-adaptation-and-epistemic-injustice/
Diani, H. (2025, April 24). AI for climate resilience meets local know-how. GIZ. https://www.giz.de/en/newsroom/stories/ai-climate-resilience-meets-local-know-how
D’Ignazio, C., & Klein, L. F. (2020). Data Feminism. MIT Press. https://direct.mit.edu/books/book/4660/Data-Feminism
Drage, E., & Frabetti, F. (2023). AI that Matters: A Feminist Approach to the Study of Intelligent Machines. In J. Browne, S. Cave, E. Drage, & K. McInerney (Eds.), Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines. Oxford University Press. https://doi.org/10.1093/oso/9780192889898.003.0016
Gilligan, C. (1982). In a different voice: Psychological theory and women’s development. Harvard University Press.
Kathrepta, C. (2025, September 11). 5 AI Climate Resilience Projects Making Real Impact in 2025. https://ecoskills.academy/5-ai-climate-resilience-projects-making-impact/
Malik, A. (2025). Linking climate-smart agriculture to farming as a service: Mapping an emergent paradigm of datafied dispossession in India. ResearchGate. https://www.researchgate.net/publication/365388707_Linking_climate-smart_agriculture_to_farming_as_a_service_mapping_an_emergent_paradigm_of_datafied_dispossession_in_India
Mills, A. (2024, October 11). Climate Change and Community: New ESIP Lab Awards. ESIP. https://www.esipfed.org/climate-change-and-community-new-esip-lab-awards/
Sustainability Directory. (2025). How Does Data Bias Affect Urban Planning? Sustainability Directory. https://climate.sustainability-directory.com/question/how-does-data-bias-affect-urban-planning/
Tsosie, R. (2012). Indigenous Peoples and Epistemic Injustice: Science, Ethics, and Human Rights. Washington Law Review, 87(4), 1133.