The Walled Garden: Care and Containment in AI Research

Introduction: On the Ethics of Boundaries

AI, especially in research, is often imagined as an infinite horizon, with models expanding endlessly, datasets merging across domains, and knowledge networks collapsing boundaries at breakneck speed. But as the past few years have shown, when everything becomes open, risks accumulate and very little remains meaningful or accountable. This is the second part of a two-piece series on strategies for using AI in research, combining theoretical and technical concepts with recent research and projects. You can read the previous piece, on Human in the Loop, here.

Image: examples of English and Persian walled gardens

The metaphor of the Walled Garden helps make sense of this paradox. In the digital world, the phrase initially described proprietary ecosystems like Apple’s App Store or Facebook’s platform economies, where control was synonymous with exclusion. But here, I want to reclaim the term in a more generative, ethical, and ecological sense. Consider two traditions of walled gardens: the English walled garden, a space of private exclusivity where walls create enclosed order and controlled nature, versus the Persian walled garden, a larger, less manicured, and more naturalistic public space where walls serve not to exclude but to protect, and where boundaries enable the cultivation of greenery and life in harsh climates through careful water management and shared stewardship. The walls in this tradition weren’t about restricting access but about creating the conditions for flourishing, a commons that needed protection to remain generative. This is the model I want to invoke for AI research: a Walled Garden without exclusive control and data extraction. Instead, it works as a framework of bounded commons, where walls enable cultivation, boundaries create the conditions for community-engaged knowledge to grow, and protection serves collective flourishing rather than private ownership.

Every dataset, model, and community operates within its own ecology of meaning. Without boundaries that reflect those contexts, AI research risks becoming extractive: mining data, scraping narratives, or reproducing inequities in the name of innovation. Across the climate and urban case studies I’ve encountered, the difference between projects that built trust and those that collapsed often came down to how ethical boundaries were designed. When data flows were transparent, locally governed, and contextually grounded, as in Lisbon’s flood digital twin or Pulo Aceh’s community-owned ocean sensors, the technology deepened collaboration and resilience (Diani, 2025). Conversely, when those boundaries dissolved, as in Toronto’s Sidewalk Labs project, AI became surveillance, not stewardship (Bozikovic, 2019).

The Walled Garden, then, is a metaphor for careful containment: an approach that links accountability to context, safety to transparency, and openness to trust. It invites us to think of governance not as control but as cultivation — an ongoing practice that keeps research generative while protecting it from extractive logics. Scholars like María Puig de la Bellacasa remind us that care is not just emotion but maintenance; it includes the work of sustaining fragile relations between humans, technologies, and the more-than-human world (De la Bellacasa, 2017). Boundaries, in this sense, are acts of care: they make space for coexistence, consent, and context. In AI research frameworks, these walls are not barriers to knowledge but interfaces of responsibility. They define who gets to handle data, under what conditions, and how decisions ripple beyond the lab.

Productive Containment: Examples of Ethical Boundaries

These boundaries come to life in projects where containment becomes productive rather than restrictive, where limits protect context, consent, and trust. Lisbon’s Flood Digital Twin uses AI to simulate storm impacts, but its success lies in its boundaries. The system is built on municipal data, managed by city engineers, and governed within a transparent, local framework (Prime, 2022). The data stays rooted in the city’s institutional ecosystem, where residents can trace how the model shapes decisions about their neighborhoods.

The Amsterdam LIFE project takes a similar approach. The digital twin integrates data from the city’s grid, solar arrays, and battery systems, but remains locally governed, a bounded environment that balances transparency with privacy (Fabrique, 2025). By containing data flows within municipally managed infrastructure, Amsterdam avoids turning sustainability data into surveillance.

The Pulo Aceh “AI for Blue Economy” initiative in Indonesia extends containment into community governance. Local fishers helped decide where ocean sensors would be placed and how data would be shared (Diani, 2025). The information stays in community hands, circulated through village meetings rather than uploaded into anonymous cloud repositories. The boundary is cultural and relational: it protects the community’s knowledge system from being abstracted or commodified.

These examples show that containment, when guided by care, becomes protection through design. Each wall, whether technical, institutional, or cultural, anchors AI within the ecology it serves. In a world obsessed with frictionless data, these projects embrace friction as ethics, a necessary slowness that keeps technology accountable.

The Cost of Having No Walls

When data circulates without consent, when models operate without oversight, and when technology scales faster than reflection, the walls collapse. What follows is not openness but exposure: systems that promise innovation instead reproduce exploitation.

Toronto’s Sidewalk Labs project promised a neighborhood built “from the internet up,” powered by ubiquitous sensors and AI-driven decision-making. Yet what began as a sustainability project became surveillance and data extraction. Residents realized that the data infrastructure had been designed without community consent and would place corporate actors in control of public life (Euklidiadas, 2024). When AI systems are unbounded, accountability becomes no one’s responsibility.

A similar dynamic plays out across proprietary risk-scoring tools, urban analytics dashboards, and generative models trained on scraped public data. These systems operate without clear limits. Their developers celebrate scalability, but what they really scale is moral distance, the separation between those who design and those who are affected. The lack of containment also enables misinformation and energy exploitation. Generative AI fabricates environmental data with the polish of authority, while large-scale model training consumes staggering amounts of electricity and water, contributing to the very climate crisis these systems claim to mitigate.

Building Gardens, Not Fortresses

If unbounded AI exposes us to harm, the answer isn’t to wall technology off but to build gardens, spaces that balance openness with care. This means designing boundaries that are transparent, participatory, and protective, not restrictive. A research garden begins with contextual boundaries: clarity about what data is being gathered, where it comes from, and whose lives it represents. Building a boundary means pausing to understand those origins and secure consent, embedding documentation, provenance tracking, and participatory review into every project phase.
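What a contextual boundary looks like in practice will vary, but one lightweight habit is keeping a machine-readable provenance and consent record next to every dataset. The sketch below is a minimal, hypothetical Python example; the class name, fields, and the coastal-sensor dataset are illustrative assumptions, not any project’s actual schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetProvenance:
    """Illustrative provenance record kept alongside a research dataset."""
    name: str
    source: str                    # where the data comes from (e.g., municipal sensors)
    steward: str                   # who governs access on the community's behalf
    collected_on: date
    consent_obtained: bool         # was consent secured before collection?
    consent_scope: str             # what uses the consent actually covers
    review_log: list[str] = field(default_factory=list)  # participatory review notes

    def needs_review(self) -> bool:
        # A boundary check: no consent, or no documented review, means pause.
        return not self.consent_obtained or not self.review_log


# Hypothetical record for a community-placed sensor dataset
record = DatasetProvenance(
    name="coastal-sensors-v1",
    source="community-placed ocean sensors",
    steward="village data council",
    collected_on=date(2025, 3, 1),
    consent_obtained=True,
    consent_scope="local planning and research only; no commercial reuse",
    review_log=["2025-04 village meeting: placement and sharing terms approved"],
)
print(record.needs_review())  # False: consent and review are documented
```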

Next come access boundaries, the ethical fences that determine who can interact with a system and under what conditions. Ethical containment reframes access as responsibility, inviting collaboration while protecting against extraction. The Appalachia Flood Dashboard models this balance by keeping its data pipeline open to community partners but closed to commercial use (Mills, 2024).
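Access boundaries can also be made explicit in code. The sketch below assumes that access requests arrive with a declared purpose and shows one way a pipeline might admit community and research uses while refusing commercial ones; the policy categories and function are hypothetical, not the Appalachia Flood Dashboard’s actual implementation.

```python
# Purposes the garden admits or excludes; illustrative categories only.
ALLOWED_PURPOSES = {"community_partner", "academic_research", "municipal_planning"}
DENIED_PURPOSES = {"commercial_use", "advertising", "resale"}

def grant_access(requester: str, purpose: str) -> bool:
    """Return True only when the declared purpose stays inside the garden's walls."""
    if purpose in DENIED_PURPOSES:
        print(f"Denied: {requester} requested '{purpose}', which the boundary excludes.")
        return False
    if purpose in ALLOWED_PURPOSES:
        print(f"Granted: {requester} may use the pipeline for '{purpose}'.")
        return True
    # Unknown purposes fall back to human review rather than silent approval.
    print(f"Held for review: '{purpose}' is not covered by the current policy.")
    return False

grant_access("local watershed group", "community_partner")   # granted
grant_access("analytics vendor", "commercial_use")           # denied
```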

Finally, every research garden needs environmental boundaries, recognizing that AI has a footprint. Training large models consumes resources and produces emissions. Cities and universities can address this by adopting greener infrastructure, disclosing energy metrics, and prioritizing efficiency. A responsible garden manages its own ecology.
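Environmental boundaries begin with measurement. As a rough illustration, a project could write an energy and emissions disclosure alongside each training run, as in the sketch below; the run name, power figure, and per-kWh emission factor are placeholder assumptions, and a real disclosure would rely on measured values or a dedicated tracking tool.

```python
import json
import time

EMISSION_FACTOR_KG_PER_KWH = 0.4  # assumed grid average; substitute a local figure

def log_training_footprint(run_name: str, avg_power_watts: float,
                           start: float, end: float) -> dict:
    """Estimate and record the energy footprint of a training run."""
    hours = (end - start) / 3600
    energy_kwh = avg_power_watts * hours / 1000
    report = {
        "run": run_name,
        "duration_hours": round(hours, 2),
        "energy_kwh": round(energy_kwh, 2),
        "estimated_co2_kg": round(energy_kwh * EMISSION_FACTOR_KG_PER_KWH, 2),
    }
    # Write the disclosure next to the model artifacts so it travels with them.
    with open(f"{run_name}_footprint.json", "w") as f:
        json.dump(report, f, indent=2)
    return report

start = time.time()
# ... training would happen here ...
print(log_training_footprint("flood-model-v2", avg_power_watts=300,
                             start=start, end=start + 7200))
```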

Institutions can weave these principles into research governance: community advisory boards, open yet secure repositories, transparent model cards, and ethical audits. A fortress isolates; a garden connects. A fortress guards knowledge as property; a garden shares it as sustenance. Building Walled Gardens for AI research is about reclaiming stewardship from control, drawing boundaries that make collaboration safer, slower, and more reciprocal.
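A transparent model card can be as simple as a structured document that ships with the model. The sketch below shows the kind of fields such a card might carry; the model name, values, and contact address are illustrative only, not a prescribed template.

```python
# A minimal, hypothetical model card expressed as a plain dictionary.
model_card = {
    "model": "neighborhood-flood-risk (hypothetical)",
    "intended_use": "municipal flood preparedness; not for insurance pricing",
    "training_data": "city-governed sensor and rainfall records with documented provenance",
    "governance": {
        "advisory_board": "community advisory board reviews each release",
        "audit_cadence": "annual ethical audit",
    },
    "limitations": ["does not cover areas without sensor coverage"],
    "contact": "research-stewards@example.org",
}
```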

Boundaries as Ethics: The Walled Garden as a Framework of Care

If the Human in the Loop essay asked how interpretation keeps research human, the Walled Garden asks how boundaries keep it ethical. The two are inseparable. Where the loop values collaboration, the garden values stewardship, an ethic of protection grounded in care.

Feminist scholars have long reframed care as a political and epistemological practice. As María Puig de la Bellacasa writes, “care is everything that is done to maintain, continue, and repair our world so that we can live in it as well as possible” (De la Bellacasa, 2017). To build a Walled Garden for research is to engage in this maintenance: to construct limits that sustain relationships between knowledge, power, and accountability.

Boundaries, when built with care, are not barriers; they are acts of responsibility. They delineate the edges of consent, the shape of transparency, and the rhythm of reciprocity. In AI research, where data moves faster than reflection, these boundaries slow us down just enough to ask: Who benefits? Who decides? Who bears the risk?

The Walled Garden is not an argument for retreat but for rootedness. It envisions research as an ecosystem, not a marketplace, where data is cultivated with care, tools are governed with integrity, and communities remain part of the conversation. In a time when technological ambition outpaces ethical reflection, the garden offers a different rhythm: patient, deliberate, and protective of what truly matters.

Boundaries, like care, are what make flourishing possible. Without them, knowledge, justice, and even trust evaporate into abstraction. With them, we build spaces where research can grow safely, meaningfully, and in relation to the worlds it hopes to serve.

References

Bozikovic, A. (2019, January 15). Google in Toronto: A Question of Privacy. Architect Magazine. https://www.architectmagazine.com/design/google-in-toronto-a-question-of-privacy_o

Chamuah, A. (2023). AI, climate adaptation, and epistemic injustice. Platypus. https://blog.castac.org/2023/07/ai-climate-adaptation-and-epistemic-injustice/

De La Bellacasa, M. P. (2017). Matters of Care: Speculative Ethics in More than Human Worlds. University of Minnesota Press. https://www.jstor.org/stable/10.5749/j.ctt1mmfspt

Diani, H. (2025, April 24). AI for climate resilience meets local know-how. GIZ. https://www.giz.de/en/newsroom/stories/ai-climate-resilience-meets-local-know-how

D’Ignazio, C., & Klein, L. F. (2020). Data Feminism. MIT Press. https://direct.mit.edu/books/book/4660/Data-Feminism

Drage, E., & Frabetti, F. (2023). AI that Matters: A Feminist Approach to the Study of Intelligent Machines. In J. Browne, S. Cave, E. Drage, & K. McInerney (Eds.), Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines. Oxford University Press. https://doi.org/10.1093/oso/9780192889898.003.0016

Euklidiadas, M. (2024, January 19). Sidewalk Toronto, the vision behind Google’s failed city. Tomorrow.City. https://www.tomorrow.city/sidewalk-toronto-the-vision-behind-googles-failed-city/

Fabrique. (2025). Local Inclusive Future Energy (LIFE) City platform. AMS. https://www.ams-institute.org/urban-challenges/urban-energy/local-inclusive-future-energy-life-city-platform/

Gilligan, C. (1982). In a different voice: Psychological theory and women’s development. Harvard University Press.

Kathrepta, C. (2025, September 11). 5 AI Climate Resilience Projects Making Real Impact in 2025. https://ecoskills.academy/5-ai-climate-resilience-projects-making-impact/

Malik, A. (2025). Linking climate-smart agriculture to farming as a service: Mapping an emergent paradigm of datafied dispossession in India. ResearchGate. https://www.researchgate.net/publication/365388707_Linking_climate-smart_agriculture_to_farming_as_a_service_mapping_an_emergent_paradigm_of_datafied_dispossession_in_India

Mills, A. (2024, October 11). Climate Change and Community: New ESIP Lab Awards. ESIP. https://www.esipfed.org/climate-change-and-community-new-esip-lab-awards/

Prime, G. W. (2022, March 4). Lisbon’s City Scale Digital Twins for Flood Resilience. Geospatial World. https://geospatialworld.net/prime/case-study/aec/lisbons-city-scale-digital-twins-for-flood-resilience-2/

Sustainability Directory. (2025). How does data bias affect urban planning? https://climate.sustainability-directory.com/question/how-does-data-bias-affect-urban-planning/

Tsosie, R. (2012). Indigenous Peoples and Epistemic Injustice: Science, Ethics, and Human Rights. Washington Law Review, 87(4), 1133.