Digital Humanities Tools for Beginners and Non-coders

Dear beginners to Digital Humanities and non-coder academics,

I have some good news for you: you can begin your digital humanities project and explore different tools before you learn to code, or without ever learning to code at all. As scholars, we continue to learn and decide which digital skills are best suited for our research and projects. We may learn a skill and later decide our efforts are best spent honing a different digital skill or methodology. The most common coding languages we, the GCDFs, use and teach are R and Python. But until you decide to learn either, both, or another coding language, there are tools you can use to execute your digital research and projects.

A tool that many of us have encountered, and that we might not necessarily think of as a DH tool, is spreadsheet software, in which data is arranged in rows and columns and can be used to make calculations or be reorganized to reveal patterns. Spreadsheets are a great tool to store, organize, clean, analyze, and even create simple visuals of data points. They are a helpful beginner tool that can assist you in deciding whether you need to use more dynamic systems like databases, or write code, to perform more complex analysis and synthesis of your data.
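If you do eventually peek under the hood, the rows-and-columns idea maps directly onto code. Here is a small illustrative sketch in JavaScript (the sample data is invented) showing that a spreadsheet formula like =AVERAGE(B2:B4) is just a calculation over one column:

```javascript
// Invented sample data: each object is one spreadsheet row
const rows = [
  { year: 1850, letters: 12 },
  { year: 1851, letters: 30 },
  { year: 1852, letters: 18 }
];

// The equivalent of the spreadsheet formula =AVERAGE(B2:B4),
// applied to the "letters" column
const average = rows.reduce((sum, row) => sum + row.letters, 0) / rows.length;

console.log(average); // 20
```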

A tool that builds on simple spreadsheet software is Knight Lab’s TimelineJS. TimelineJS is an open-source tool that allows you to visualize your data as an interactive timeline that can include text, maps, images, and audio! An example of a project that uses Knight Lab’s TimelineJS is Jenna Queenan’s 20 Years of History: New York Collective of Radical Educators.

Another way spreadsheets can be used is to analyze text. Both Google Sheets and Microsoft Excel feature “Analysis Content” add-ons that allow users to conduct sentiment analysis or topic detection. But what if you have a machine-readable text that you want to use as a corpus for text analysis? (A machine-readable text is an image of handwritten or printed text that has been encoded into a digital data format a machine can recognize. Think of a document where you can highlight individual characters, as opposed to one where you cannot select a single character because the machine treats the entire page as one large image. Or think of those scholarly articles that can be read to you by your text-to-speech application. Being able to highlight individual characters in a document, or having the document read to you, indicates that the text is machine readable.) If your text is formatted as plain text, HTML, XML, PDF, RTF, or MS Word, even if it is in a language other than English, you can use an open-source tool called Voyant Tools to upload your corpus or corpora and conduct the text analysis. Voyant Tools can also assist in widely reading (or distant reading) a text or formulating research questions.
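Under the hood, the most basic operation a text-analysis tool performs is counting word frequencies across a corpus. A minimal illustrative sketch in JavaScript (Voyant itself requires no code; this is just to show the idea):

```javascript
// Count how often each word appears in a plain-text corpus
function wordFrequencies(text) {
  const counts = {};
  // Lowercase the text and split on anything that is not a letter or digit
  for (const word of text.toLowerCase().split(/[^a-z0-9]+/)) {
    if (word) counts[word] = (counts[word] || 0) + 1;
  }
  return counts;
}

const freqs = wordFrequencies("To be, or not to be: that is the question.");
console.log(freqs.to, freqs.be); // 2 2
```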

The benefit of both TimelineJS and Voyant Tools is that you can either use them as is or expand the scope of your project further once you become a coder. These tools are examples of open-source, web-based, non-coder tools that give both beginner DHers and non-coders access to DH strategies and methodologies while avoiding the cost of proprietary non-coder tools. For more DH tool options, for both advanced users and beginners, check out the University of Toronto’s Find Digital Scholarship Tools website.

For more on DH methodologies and applications, go to our blog and check out our catalog of events.

Best wishes, 

Your fellow (learning to code) DHer.


Call for Proposals: Provost’s Digital Innovation Grants 2019-2020

Applications for Provost’s Digital Innovation Grants – also known as PDIGs – are open! If you’re working on a digital project, planning to, or hoping to attend a short course or workshop to learn a skill that may support a current or future digital project, then this grant opportunity is for you!

PDIGs are broken out into three tiers – Training, Start-up, and Implementation grants – all of which are due by 5:00 PM on Friday, October 18, 2019.

Learn more about this Call for Proposals: Provost’s Digital Innovation Grants 2019-2020 and review past projects on our digital grant website.

Digital scholarship drop-in hours

Are you working on learning a digital skill, do you have questions regarding a digital project, or are you seeking feedback about using a digital tool?

The Digital Fellows offer weekly virtual digital scholarship drop-in hours. In Spring 2025, the weekly hours are on Mondays from 11 a.m. to 1 p.m. This is a space to engage curiosity, troubleshoot challenges, and build community so we grow together!

Register here to receive the Zoom link for drop-in hours. See you soon! 



The Social Elsewhere: A Look at Alt-Social Media

Over the past few years, internet users have expressed growing distrust and distaste toward dominant corporate social media sites like Facebook, Twitter (now X), TikTok, and YouTube. For some insight into this sentiment, one might point to Elon Musk’s hostile takeover of Twitter, which was followed by a surge in hate speech on the platform (a trend that persisted until at least May 2023); Meta’s announcement that it would roll back content moderation just weeks before President Trump’s second inauguration; or the fact that TikTok is currently being held politically and legally captive by the country’s ruling elites, to name a few. Add to this list the growing amount of user data that is captured, repackaged, and sold by these social media companies daily, and the picture becomes even clearer.

Despite these trends, digital connection is an indispensable part of social life in the twenty-first century; social media is a critical venue for cultural preservation, knowledge production, organizing, and resisting oppressive regimes. So the question arises, what are we left to use? Where else can we turn? 

While the definition of alternative social media is somewhat fluid (and, as Robert Gehl reminds us, the term “alternative” is both highly contested and historically situated 1), it “can be seen as a critical response to CSM [corporate social media] that not only allows for users to share content and connect with one another but also denies the commercialization of speech, allows users more access to shape the underlying technical infrastructure, and radically experiments with surveillance regimes.” 2 Further, Couldry and Curran argue that a key function of alternative media is that it challenges actual concentrations of media power that directly shape how we live, learn, and know.3

Alternative social media can be identified as having three common features: (1) they reject the idea that social media platforms should predominantly serve as profit-making enterprises, (2) many rely on non-centralized infrastructures to ensure that data is not trapped in large corporate servers, and (3) many alternative social media sites utilize free and open-source software–meaning that they “create and share tools that can be studied and changed by anyone to create a variety of social media platforms with different affordances.” 4 Additionally, alternative social media can be defined by content that serves narratives traditionally outside of the mainstream ethos–some of which can be liberatory, anti-racist, feminist, progressive, and queer, and some that can be discriminatory, sexist, homophobic, and hateful (though it can be argued that the latter is increasingly being absorbed into the mainstream discourse). 

Although it is important to note that not all alternative social media platforms seek benevolent ends, ultimately we–as users–have the power to decide which are deserving of our content and engagement. Here are a few that you might consider if you are looking to break away from CSM: 

  1. Mastodon: Mastodon allows users to choose from a variety of independent servers centered on specific topics, themes, and interests. 5 It is open source, completely ad-free, and the platform does not allow for promoted content, offering a clutter and distraction-free environment. 
  2. Bluesky: Bluesky is another decentralized platform that allows users to control their data and move between servers. There are no targeted ads and posts are free from algorithmic interference (though you can explore algorithm-driven feeds if you choose). 
  3. Pixelfed: Pixelfed is an open-source image sharing platform, similar to Instagram, that is “decentralized, ad-free, [has] chronologically curated feeds by default, [is] respectful of user privacy, and [is] anti-surveillance.” 6
  4. Cara: As an art-specific platform, Cara does not allow AI-generated images nor does it permit AI models to train themselves using any of the artwork shared on the platform. While the scope and use of this site is limited, it is a great option for artists looking to share, and also protect, their work. 

Studies have shown that highly consolidated ownership of media reifies existing power structures, prioritizes consumerism over citizenship, and promotes political conservatism. 7 In our current moment, faced with precarity, global struggle, and economic distress, alternatives are not just welcome, but necessary. Social media is being monopolized, but users cannot forget that we are the ones that hold the power. Non-corporate social media platforms stand to teach us about “different economies, governance structures, and aesthetics that are driven by goals other than profit;” 8 our only task is to make use of them. 




Reflecting on Alt Text and its Ethics

Recently, I attended “Art for Alt Text and The Pedagogy of Description.” The workshop is part of Alt Text as Poetry, a project by Bojana Coklyat and Finnegan Shannon that explores poetics as a starting point for accessibility, and was based on their workbook.

Shannon highlighted that many people first think of accessibility for people with visual disabilities; however, they are not the only population that benefits from alt text. For example, individuals with ADHD can use screen readers as a tool to focus.

Alt text does more than that. It can also preserve access when internet connections are slow and people turn off images in their email or browser. Shannon commented that alt text has also been used for search engine optimization, a marketing process that relies on placing keywords.

The workshop revolved around creating alt text for artwork images. It was a challenge. We started with the prompt to describe the image to a person who cannot see it. Here, we focused on how objectivity is not the goal. In Shannon’s experience, we have to center positionality to allow readers to understand what kind of relationship we are creating with the image. They said, “With a description, you learn as much about the description as you learn about the object that is described.” The same applies to alt text; the difference is that the latter needs to be short (we don’t want to insert an entire composition for one image!), and it has to respond to the context where the image appears. 

I loved how the workshop treated writing alt text with the same care as any other writing practice. Shannon mentioned how we can think of different genres of alt text. One of the overarching questions was: what happens if you need to describe a cartoon, a joke, or a meme, versus a concept map that is part of an academic paper?

Ethics surrounding alt text

Sighted people are used to seeing images while reading. This process happens in the blink of an eye, so we usually don’t realize how much information is gathered from the relationship between text and images. Multimodal text analysts have been working on understanding the relationships between different meaning-making resources (such as text, images, audio, colors, gestures, etc.); a big focus has been on text-image relationships. They have seen how images can be used to extend meaning, give specific examples, and make abstract information more concrete, among many other uses. Thus, without alt text, readers who need to rely on screen readers, or who have to turn off their images for some reason, do not have access to the entire meaning of the text.

In other words, the decision-making power of what is (or is not) made available in alt text is held by sighted people with visual access to the images. This is a huge responsibility from an ethical perspective. When practicing the description exercise, I thought of how each word was a decision that implied what I was sharing of the world. 

Thus, writing alt text should not be taken as a mere requisite for complying with disability policies, such as Section 508 of the Rehabilitation Act of 1973 or following Web Content Accessibility Guidelines. From an ethical perspective, alt text is a commitment to the world we want to live in. This commitment challenges us to create quality alt text in everything we do, from academic projects to everyday interactions.

Many of us who don’t need alt text to access information are not familiar with it and its aesthetics, and may even think of “alt text” as boring. Alt Text as Poetry centers the power, beauty, and rich possibilities of alt text. Shannon compares it to translation: how can we make alt text beautiful and communicative? I think of transcreation, a concept from translation studies which focuses on the fact that translating is an implicitly creative task in which the translator is not just copying the text into a new language, but creating a new text. The same principle applies when we write alt text for an image.

“Who is in charge of alt text?” – someone asked at the workshop. In this piece, I chose the term ethics to refer to the practice of alt text as an ethical decision that we can make as text producers. From this perspective, we take the responsibility upon ourselves to create the alt text – it cannot be “someone else’s job.” 

A digital fellow experience

When collaborating on the Command Line Workshop, a particular challenge arose with alt text for this image:

With the pedagogical goal in mind, I decided to write the following:

“This screenshot of the Command Line trying to read a .docx file shows a very long string of symbols, letters from different alphabets, and even characters that our fonts cannot recognize (which are question marks). Here we reproduce only a tiny part of the long result to give you a bit of the taste of the nonsense it is for humans: exclamation mark control character question mark l Z square bracket Content_Types square bracket .xml question mark question mark question mark question mark n question mark 0E” 

Alt text should be part of the entire writing process rather than an afterthought. It should be part of our draft. We start our process by making clear what the function of the image is for the text and creating alt text that makes a meaningful contribution to the reader. Then, other participants in the text production process can provide feedback at different stages. For example, in the Command Line Workshop, I received feedback from fellows and faculty from GC Digital Initiatives. Alt text deserves as much attention as any other part of the text: it is part of the meaning we create for our readers!  
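On the web, alt text lives in the alt attribute of an image element. A minimal HTML sketch (the filenames and descriptions here are hypothetical):

```html
<!-- A screen reader announces the alt text in place of the image;
     browsers also show it if the image fails to load -->
<img src="command-line-docx.png"
     alt="A terminal window showing a long string of unreadable symbols
          after attempting to read a .docx file.">

<!-- A purely decorative image gets an empty alt so screen readers
     skip it instead of announcing the file name -->
<img src="divider-flourish.png" alt="">
```

Note the contrast between the two: the first alt conveys the image’s function in the text, while the empty alt on the second signals that there is nothing meaningful to describe.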

Data About Data: Best Practices for Metadata

Why care about metadata? In digital humanities, metadata can describe our objects of study, help us find relevant information, shape how we understand or present our topics, and more. This workshop will discuss ethical issues with metadata re-use and creation, provide an overview of common metadata standards, and introduce tools for cleaning and manipulating metadata.
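As a tiny illustration of the “cleaning” part, here is a hypothetical Dublin Core-style record sketched in JavaScript (the field values are invented), with the kind of whitespace-trimming pass that metadata tools automate at scale:

```javascript
// A hypothetical Dublin Core-style metadata record (invented values)
const record = {
  "dc:title": "  Tagging the Tower ",
  "dc:creator": "GC Digital Fellows",
  "dc:date": "2019-10-18"
};

// One small cleaning pass: trim stray whitespace from every field value
function cleanRecord(rec) {
  const cleaned = {};
  for (const [key, value] of Object.entries(rec)) {
    cleaned[key] = value.trim();
  }
  return cleaned;
}

console.log(cleanRecord(record)["dc:title"]); // "Tagging the Tower"
```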

Registration for workshops is limited to students, faculty, and staff who are affiliated with the CUNY Graduate Center. Please note that you might need to log in to CUNY Academic Commons to be able to RSVP.

Let’s Build a Browser Extension!

Browser Extension Basics

Browser extensions are simple tools that allow you to enhance your web browsing experience and workflows. For instance, one of my favorite extensions, Hypothes.is, allows you to annotate any information you find on the web and even share it publicly with others, making it incredibly useful for research. If you are using Chrome, you can visit the web store and find extensions for virtually any task. However, you might occasionally find yourself wanting an extension for a very specific use case that is not currently available. A neat way to address this is simply to build your own!

Building your own extensions is a (relatively) simple process once you understand what is needed. This tutorial will introduce you to the basics of building browser extensions and guide you through the creation of your very first one. We’ll use the Chrome browser for this tutorial, but the same process can be followed for other browsers as well.

To keep things simple as a first example, we’ll create a basic extension that injects a banner with a message to every web page we visit. While perhaps not particularly interesting or useful, it will serve to show you how to modify web pages directly with extensions. The fundamental skills you will learn will be essential when creating more complex extensions.

Prerequisites

– Basic knowledge of HTML, CSS, and JavaScript
– Google Chrome browser
– A text editor (such as VSCode)

Step 1: Create Your Project Folder

1. Create a new folder on your computer named banner-extension and open the folder in VSCode.
2. Inside this folder, we’ll create three files:
   – manifest.json
   – content.js
   – styles.css
Make sure to include the extensions (.json, .js, .css) when creating the files so your editor knows what they are. These three files will work in tandem to provide the functionality of your extension.

Step 2: Configure Your Manifest File

The manifest file is the “configuration file” for your extension, written in JSON format (a standard for data serialization). It tells Chrome what your extension does and what permissions it needs. Essentially, it acts as a blueprint for the browser, telling it how to load, install, and manage the extension. Each extension you make will need its own manifest. Let’s go ahead and fill it out with the following information:
{
  "manifest_version": 3,
  "name": "Simple Banner Extension",
  "version": "1.0",
  "description": "Adds a banner to the top of webpages",
  "permissions": ["activeTab"],
  "content_scripts": [{
    "matches": ["<all_urls>"],
    "js": ["content.js"],
    "css": ["styles.css"]
  }]
}
Here is a breakdown of our file:
manifest_version: Specifies which version of the manifest specification to use (version 3 is the current standard)
name, version, description: Basic information about the extension
permissions: What your extension is allowed to do (`activeTab` allows it to run on the current tab)
content_scripts: Scripts that run in the context of web pages
  – matches: Which pages your scripts run on (`<all_urls>` means all websites)
  – js: Here we reference the JavaScript file we created that will control what the extension does
  – css: Here we reference the CSS file we created, which will handle the visual style of our extension
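For instance, if you wanted your extension to run only on one site instead of everywhere, you could narrow the matches pattern in the manifest (the URL below is just an example):

```json
"content_scripts": [{
  "matches": ["https://en.wikipedia.org/*"],
  "js": ["content.js"],
  "css": ["styles.css"]
}]
```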

Step 3: Write the Content Script

The content script contains the JavaScript that will run on webpages. This is where we’ll write the code that creates our banner. Add the following to the content.js file:
// Very simple functionality for adding a banner to webpages

// Function to create and add the banner
function addBanner() {
  // Create a new div element for our banner
  const banner = document.createElement('div');

  // Add the 'chrome-extension-banner' class to apply the CSS styling we'll use
  banner.className = 'chrome-extension-banner';

  // Set the text content
  banner.textContent = 'This page was modified by a Chrome extension!';

  // Insert the banner at the top of the HTML body
  document.body.insertBefore(banner, document.body.firstChild);
}

// Run our function when the page loads
window.addEventListener('load', addBanner);

Here we define a function addBanner() that creates a new div element for our banner. We also give it a class name so we can style our banner with CSS and then set its text content to include a simple message. To easily see the banner we place it at the top of the web page. Lastly, we set up an event listener to run our function when the page loads.

Step 4: Style With CSS

The CSS file will style our banner to make it look nice. Let’s add some basic color, alignment, and font styling to our styles.css:
.chrome-extension-banner {
  background-color: #4285f4;
  color: white;
  text-align: center;
  padding: 10px;
  font-size: 16px;
  font-family: Arial, sans-serif;
  box-shadow: 0 2px 4px rgba(0, 0, 0, 0.2);
}
Here we style the banner with a blue background and white text. We then center the text and add some padding around it. Lastly, we add a subtle shadow to make the banner stand out.
Great! We have all the files and configuration needed to run our extension. Next, let’s load it into Chrome.

Step 5: Load Your Extension in Chrome

To load and enable your extension in Chrome, follow these steps:
1. Open Chrome and go to chrome://extensions/
2. Enable “Developer mode” by toggling the switch in the top-right corner
3. Click the “Load unpacked” button
4. Select your banner-extension folder
5. Your extension should now appear in the list of installed extensions
If you now navigate to any web page, you should see a blue banner at the top with the text “This page was modified by a Chrome extension!”
Congrats! You have just made your first browser extension. Let’s review how this extension works:
1. When you visit a webpage, Chrome checks if any extensions have content scripts that should run on that page
2. Since our extension’s manifest specifies "matches": ["<all_urls>"], our content script runs on all pages
3. The content script adds a banner to the top of the page
4. The CSS styles are applied to make the banner look nice
While this is a simple example, it demonstrates the core concepts of Chrome extension development:
– The manifest file configuration
– Content scripts that modify webpages
– CSS styling for your injected elements
Note: To remove the extension, you can go to chrome://extensions/ and click Remove.
Now that you have the basics of extension development in hand, I’d encourage you to think of what kinds of extensions could benefit your work and personal projects. Feel free to schedule a consultation with the Digital Fellows if you’d like to discuss ideas or want some additional help. Happy coding!

Beyond the Map: Visual Literacy & Storytelling That Sticks

Story maps have exploded in popularity lately, becoming the go-to tool for effectively communicating data-driven narratives across fields like education, journalism, research, and urban planning. Their strength lies in effortlessly weaving together data visualization and narrative, making complex spatial information approachable and engaging—even for those who might not know the first thing about traditional cartography. It’s no surprise that both teachers and researchers find them particularly appealing; story maps offer an accessible platform for students and teachers to explore spatial stories, especially by combining various modalities of information. 

But as story maps spread far and wide, an interesting tension emerges: their true effectiveness often relies less on sophisticated cartographic know-how and more on the craft of visual storytelling itself. I noticed this particularly during a recent story mapping workshop I led, where participants mostly came as beginners to mapping. At the outset, I expected to spend most of my time grappling with the many technical intricacies of mapping. That wasn’t really the case: those proved far easier to communicate and execute than the visual effectiveness of the maps. That gap can be problematic, since maps, despite their apparent simplicity, inherently embed significant messages through subtle choices around scale, framing, direction, and symbolism. This is where visual literacy becomes critical. Broadly defined, visual literacy is the ability to interpret, negotiate, and create meaning from visual information—a skillset essential for both interpreting and making effective story maps. Recent research also points out that good visual communication—especially interactive visualizations—depends heavily on understanding user attention, consciousness, and memory. Basically, a map isn’t effective if it can’t hold your attention or stick in your memory (He, 2022). Participatory mapping initiatives like OpenStreetMap marathons (mapathons) further highlight that visual literacy isn’t just passive; it’s deeply tied to active engagement with visual information (McGowan, 2020). Contemporary research even pushes for new interdisciplinary frameworks to understand visual methods, showing how central visual communication has become in working with spatial data. Intriguingly, recent studies show that the brain processes visual contrasts, colors, and symbols in maps in ways that significantly shape how—and if—we actually remember or care about the visual stories we see (Hu & Hwang, 2024).

Another lens that helps us unpack the subtle complexities of map-making is semiotics—the study of signs and symbols. I know it seems repetitive, but here it goes again: Maps aren’t neutral; every symbol, color, and line is loaded with cultural, social, or political significance. Understanding the semiotics of maps helps creators and audiences recognize these implicit narratives and makes us more thoughtful storytellers and readers alike. As cartographic semiotics expert Emanuela Casti points out, maps don’t merely represent reality—they actively shape our perceptions of places and the actions we take within them.

Given these insights, what makes a story map effective? It’s all about thoughtfully combining visual literacy, semiotics, and good old-fashioned narrative storytelling. Practical considerations like scale, color, direction, and context become crucial narrative tools. Scale determines what’s emphasized or hidden, color influences emotional reactions, direction guides viewers through your story, and context helps people understand why your map matters in the bigger picture. Being deliberate about these visual components can elevate your map from something merely informative to something genuinely compelling.

Yet even knowing this doesn’t guarantee success. Common pitfalls still trip up many aspiring visual storytellers. You might recognize a few of these from your own experiences (no judgment—maps are tricky!):

  • Vague or missing annotations: Good maps clearly label significant features, making sure the audience knows exactly what they’re looking at—and why.
  • Disconnected narratives: Visual data should explicitly tie back into the broader story. Otherwise, you’re just mapping things without context, leaving readers wondering why it matters.
  • Ignoring visual hierarchy: Without clear visual hierarchies, viewers get lost or overwhelmed. Think of it as telling a story without punctuation—it’s easy to lose the plot!
  • Information overload: If a map is cluttered with extraneous details, it’s less a narrative and more of an eye exam.
  • Randomized visual elements: Consistency is key. A hodgepodge of colors, symbols, and fonts makes your map look like a visual puzzle instead of a coherent story.

To sidestep these pitfalls, here are a few guiding principles to keep your story maps engaging, effective, and visually literate:

  • Prioritize Clarity: Annotate thoughtfully. Label critical information and explicitly connect visuals to your narrative.
  • Simplify Ruthlessly: Only keep what’s essential to your narrative. Your audience will thank you.
  • Leverage visual hierarchy intentionally: Direct viewers’ attention to key elements by strategically adjusting color contrast, scale, and emphasis.
  • Engage purposefully: Use interactivity and multimedia thoughtfully—just because you can add animations doesn’t mean you always should!

Practically speaking, visual literacy and semiotics offer invaluable tools for educators, researchers, journalists—and pretty much anyone telling stories with spatial data. Cultivating these skills helps both creators and audiences critically engage with maps, encouraging thoughtful interpretation and informed interactions.

Ultimately, the rise of story maps highlights a broader shift: visual literacy is becoming a necessary skill for anyone engaging with digital narratives. As story maps transform how we communicate spatial data, mastering visual storytelling becomes just as important as mastering geographic tools themselves. Whether you’re mapping flood risk, historical trends, or the best coffee shops in town, understanding the subtle power of visual communication is the secret to making your maps meaningful, memorable, and—most importantly—impactful.

 

References:

 

He, X. (2022). Interactive Mode of Visual Communication Based on Information Visualization Theory. Computational Intelligence and Neuroscience, 2022, 4482669. https://doi.org/10.1155/2022/4482669

McGowan, B. S. (2020). OpenStreetMap mapathons support critical data and visual literacy instruction. Journal of the Medical Library Association, 108(4), 649–650. https://doi.org/10.5195/jmla.2020.1070

Hu, & Hwang. (2024). Cultivating visual literacy and critical thinking tendency with technological knowledge organizing supports: A concept mapping-based online problem-posing approach. Educational Technology Research and Development. https://doi.org/10.1007/s11423-024-10394-6

 

Data visualization in R

This workshop is designed to introduce some of the basic techniques of data visualization in R with ggplot2 (an R package). The session will walk you through creating plots commonly used and seen in data analysis—including scatterplots, histograms, box plots, and line charts. You will gain practical experience in selecting the right graphs for your data, customizing them for better visualization, and producing publication-ready outputs. Please note that basic R knowledge is required for this workshop.

 

Registration for workshops is limited to students, faculty, and staff who are affiliated with the CUNY Graduate Center. Please note that you might need to log in to CUNY Academic Commons to be able to RSVP.

Hidden Costs of AI and the Case for Luddite Thinking

In conversations about technology, being called a Luddite is a big insult. Colloquially, it refers to a kind of technophobic kook who’s morally against technological development. But in my opinion, the original Luddites were actually very cool. In 1811, around twenty thousand textile workers were fired from their factory jobs in Nottingham after factory owners automated their jobs. Where weaving had once been an artisanal skill, it was first reduced to atomized work in the mills, and then with the integration of shearing frames in 1811, weaving became completely unnecessary labor. Because of the adoption of shearing frames into textile mills, the portion of factory revenue that had kept 20,000 factory workers and their families alive in the form of wages was freed to be reinvested into more automated technology, creating more profit for the factory owners. A mass of now unemployed textile workers–who came to be known as the Luddites–stormed their former workplaces and destroyed the machines that destroyed their livelihoods. The Luddites saw clearly that when faced with the choice between employing human labor and using shearing frames, factory owners would always choose the less costly machines, leaving skilled workers redundant and forcing them into lesser and lesser skilled work as technology developed–meaning ever-decreasing standards of living.

Just as the Luddites saw that the productivity brought by the shearing frame came with hidden costs, namely their livelihoods, it is important to think critically about the hidden costs of the productivity of AI. I can hardly walk five feet, or scroll for five minutes, without coming across the sentiment that AI is inevitable, and that we have to roll over and integrate it into our work lest it leave us behind. I'd like to suggest, though, that you do not roll over, and instead engage in Luddite thinking about AI. What are the costs of AI integration? Are those who benefit the same as those who bear the consequences? And, if push comes to shove, do we really have to smash these machines?

Work Displacement, not Replacement

Now that we're at the beginning of the AI boom, 41% of companies worldwide plan to replace part of their workforces with AI automation by 2030. And while AI is often said to automate 'menial tasks,' the 'work' done by AI has to be monitored and double-checked by human labor. Remember those 'workerless' Amazon Just Walk Out stores that use AI to track what shoppers leave with? Amazon actually used outsourced, hyper-exploited workers in India to review 70% of the AI-calculated transactions. The point of AI here was not to create truly cashierless stores, but to pay poverty wages to hidden cashiers in India instead of paying slightly higher poverty wages to cashiers in the Global North, saving Amazon about a hundred thousand dollars per store each year.

History repeats itself: first as tragedy, second as farce. We've seen all of this before. In recent memory, industrial manufacturing, which had been the good American job, was revolutionized by new technology and automated, and then the remaining necessary labor was outsourced to cut costs. The leftover workers in the Global North were split, some pushed into clerical or customer service work, many pushed into unemployment. Next, the internet rendered many of those jobs obsolete, and the remaining necessary labor relocated to the Global South: call centers, data entry, and so on. The leftover workers in the North were again split, some into 'fake email jobs' or knowledge production, and many, many others into gig economy work. And now it's the same. The development of generative AI and its integration into labor is poised to semi-automate managerial and knowledge production work. USAID is already in the mix of this new economic imperialism: in January it put out a call to fund research identifying a data enrichment labor market in the Global South. Labor exploitation will increase in the Global South while American profits soar. It's already happening in Kenya, where workers are paid $2 an hour to train LLMs for OpenAI and Meta in 'AI sweatshops.'

High Environmental Costs

It's no secret that the AI industry is putting a strain on energy consumption. Asking ChatGPT to create an image uses about as much energy as it takes to charge a smartphone. That doesn't seem like much, and on its own, it isn't. But just as our less-than-stellar recycling habits or daily commutes are a drop in the bucket compared to airline industry emissions, our personal use of AI models requires very little energy compared to the energy it takes to train and maintain these models. For example, on average, each query sent to ChatGPT emits 4.32 grams of CO2. The average user sends 8 queries a day; sending 8 queries a day for a year emits about the same amount of carbon as driving a car 3.5 miles. No biggie! But over 100 million people use ChatGPT every day, and about 1 billion queries are placed daily. That amounts to an extra 4,762 tons of emissions per day. That's like flying an airplane from New York to Tokyo 2,381 times in a single day.
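If you want to check the back-of-the-envelope arithmetic above, a few lines of Python will do it. The per-query and query-volume figures are the estimates cited in this post, not authoritative measurements:

```python
# Sanity-check the daily emissions estimate cited above.
# Inputs are the post's own figures, not authoritative measurements.

GRAMS_CO2_PER_QUERY = 4.32        # estimated grams of CO2 per ChatGPT query
QUERIES_PER_DAY = 1_000_000_000   # estimated global queries per day

daily_grams = GRAMS_CO2_PER_QUERY * QUERIES_PER_DAY
daily_metric_tons = daily_grams / 1_000_000   # 1 metric ton = 1,000,000 g
daily_us_tons = daily_metric_tons * 1.10231   # metric tons -> US short tons

print(round(daily_metric_tons))  # 4320 metric tons
print(round(daily_us_tons))      # 4762 US tons, matching the figure above
```

The "4,762 tons" figure, in other words, is simply the per-query estimate scaled to a billion queries and expressed in US short tons.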

Even setting aside the inputs from chip manufacturing and supply chains, training an LLM consumes thousands of megawatt-hours of electricity and emits hundreds of tons of carbon. The training process for GPT-3 alone consumed 1,287 megawatt-hours of electricity (enough to power 120 U.S. homes for a year) and generated about 552 tons of carbon dioxide. Moreover, water must be run through AI data centers to keep the machinery from overheating: each kilowatt-hour of energy consumed by a data center requires about 2 liters of water for cooling, which means training GPT-3 used roughly 2.6 million liters of water. The total energy cost of the AI industry is expected to exceed the energy usage of the entire country of Belgium by next year. In the US alone, AI data centers are expected to raise the nation's electricity usage by 6% by next year.
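A quick sketch shows how the training-energy and cooling figures above combine. Both inputs are the post's own estimates rather than measured values:

```python
# Cooling water implied by the training-energy and liters-per-kWh
# figures above (the post's own estimates, not measured values).

TRAINING_ENERGY_MWH = 1_287   # estimated energy to train GPT-3
LITERS_PER_KWH = 2            # estimated cooling water per kWh

training_kwh = TRAINING_ENERGY_MWH * 1_000   # MWh -> kWh
cooling_liters = training_kwh * LITERS_PER_KWH

print(f"{cooling_liters:,} liters")  # 2,574,000 liters
```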

It has become quite clear that like labor markets, the climate crisis is geographically uneven–constituting a kind of environmental imperialism. While the United States alone is responsible for 25% of historical carbon emissions, the consequences of these emissions occur elsewhere. In the left map below, the size of each country is warped to represent its portion of global historical carbon emissions. The United States, Europe, and China are the biggest emitters. The right map warps the countries according to the number of people injured, left homeless, displaced or requiring emergency assistance due to climate-related floods, droughts or extreme temperatures in a typical year. East and South Asia are undeniably most affected and most at risk. 

[Image: a world map with each country resized to reflect its share of historical carbon emissions.]

[Image: a world map with each country resized to reflect the number of people affected by climate-related environmental catastrophes.]

Toward Luddite Thinking

You might be thinking, "The future is inevitable, change happens, what can I even do about this?" To be truthful, the answer is: not much. Personal consumption practices have proven to have little impact on global emissions. All of the energy statistics I listed become more obsolete with every passing second. The 'AI Cold War' is only deepening the problem. However, if we think like Luddites, it's clear that there is a big difference between asking ChatGPT to generate a silly picture for personal use and accepting the integration of AI into our workplaces. While we may not be able to smash the data centers, we are in a special moment in which we can still refuse to co-work, collaborate, innovate, or think with AI. Refusing workplace AI is a refusal to collaborate with climate catastrophe, with hyper-exploitation, with wage depression. It is nothing less than a stand for workers everywhere and for the future of the planet.

Deadline to register for GCDI Conversations in Digital Scholarship

The GC Digital Fellows are hosting GCDI Conversations in Digital Scholarship, a seminar featuring roundtable discussions on topics related to digital scholarship and methods. The sessions will be held concurrently on May 13, 2025. All event information and the registration form are available at gcdi.commons.gc.cuny.edu/2025/03/07/gcdi-conversations

Roundtable topics: 

  • AI for Qualitative Research
  • Digital Archives
  • Digital Methods in LOTE (Languages Other Than English)
  • Educational Game Design
  • Open Pedagogy on Manifold
  • Working with Open Government Data

Students, faculty, and staff across the Graduate Center are invited to participate.