This week’s edition of Slow Digest is written by C21 Graduate Fellow Jamee N. Pritchard.
Last month, I attended a Slow AI workshop sponsored by C21’s Human Club that centered on the practice of prompting — the everyday act of giving language-based instructions to systems like ChatGPT, Copilot, Gemini, and Claude to produce a desired response or action. Instead of machines responding to us, however, we became the ones prompted by the facilitators. Through freewriting as “human prompting,” lexical-connecting exercises, and comparative machine prompting, we enacted the same linguistic and associative processes we typically ask AI systems to perform.
The experience was particularly refreshing after months of dissertation writing. One simple prompt, “railroads,” sparked vivid memory work and creative experimentation: an Amtrak trip with my brother from Chicago to Los Angeles, then a fictional narrative about a train ride to Mars. It reminded me how much meaning, memory, and emotion humans embed in language: aspects machines might simulate but can never truly feel or recall.
The workshop also brought into focus the growing stigma academia places on generative AI, often framing it as a threat to human intelligence and creativity. As an educator, I understand the frustration when students try to outsource their thinking to AI. And as a writer, I feel the anxiety too. I have even found myself avoiding my beloved em dash for fear that a polished sentence will be read as “too AI.” But is this panic fully justified, or just the latest iteration of humans fearing technological advancement?
N. Katherine Hayles (2012) argues that humans and technics have long been co-evolving in a process she calls technogenesis. She reminds us that technological change “offers no guarantees” of positive progress (p. 81) as it expands how we read, write, and think. In an interview for an upcoming episode of 6.5 Minutes With…C21, media artist Nathaniel Stern extends this argument into our current AI moment. His work captures how humans and technology have co-evolved over time: the cultural, aesthetic, political, and economic impacts we have on technology and it has on us. Stern suggests that the real issue isn’t AI at all, but the social and economic structures in which AI is embedded:
“AI isn’t the problem, late capitalism is. If we had meaningful social support, retraining programs, and universal healthcare, this wouldn’t feel like a crisis. We’ve panicked about new technologies before: photography was supposed to kill painting, desktop publishing was supposed to destroy creative work. Instead, they sparked new fields and new forms of art. We should approach AI the same way and leverage what it does well, let go of what it doesn’t, and rethink what’s possible. Technology expands our imagination, but it demands we imagine first.”
Stern’s point reminds us that the real question isn’t whether AI will replace human creativity, but whether we will continue to cultivate the imaginative capacities that make human creativity possible in the first place. That is precisely where my own work lives: Black girls’ radical imagination, envisioning futures that refuse erasure, and teaching us — adults, institutions, and now machines — how to imagine more expansive lives. And if we are the ones teaching AI how to think and create, we must also ask: whose imaginations are shaping these systems, and whose visions of the future risk being automated out?
Many people worry that AI will take our jobs and our purpose, but I’m more interested in what this moment reveals about us. What do we, as humans, believe counts as intelligence, creativity, and humanity? What truly makes us human, and what can we do that machines cannot? And, simultaneously, how might these AI systems relieve some of the cognitive and administrative burdens demanded by a society obsessed with productivity?
Safiya Noble (2018) cautions that our technologies do not emerge neutral; they are built within existing systems of oppression and inherit the biases of the societies that produce them. Likewise, Ruha Benjamin (2019) warns that if we aren’t intentional, we will automate inequality rather than dismantle it. So the question is not simply what AI can do, but whom AI is designed to serve and whose ways of knowing might be excluded or overwritten in the process. The practice of imagining otherwise, then, is methodology. It is a tool for liberation. It is also training data, and it matters whose data gets to shape our machines.
Generative AI is not going anywhere. So why not teach students — and one another — how to use these tools well, as collaborators rather than shortcuts? Let’s cultivate practices that protect human curiosity, play, memory, and wonder. AI cannot replace human thought and creativity, but it can support and stretch them if we choose to design and deploy it with care.
This is a turning point for humans, a moment in history when we must decide what we refuse to automate: the parts of being human that no machine can ever take, including empathy, embodied experience, lived memory, and imagination as freedom. If the future is already arriving in the form of our machines, then we owe it to ourselves to slow down and ensure we’re building a future worthy of our humanity.
References
Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Polity.
Hayles, N. K. (2012). How we think: Digital media and contemporary technogenesis. The University of Chicago Press.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. https://nyupress.org/books/9781479837243/
Upcoming Events
Don’t miss the new exhibition by Nathaniel Stern in collaboration with Sasha Stiles, titled Generation to Generation: Conversing with Kindred Technologies, coming to the Kenilworth Square East Gallery at UW-Milwaukee from February 12–20, 2026. The show and its related programming (opening reception, panel discussion, and workshops) promise an immersive investigation of how humans and machines evolve and co-create together. This exhibition is a part of C21’s Aesthetics, Art, & AI programming in collaboration with UW-Madison’s Center for the Humanities.
