This week’s edition of Slow Digest was written by C21 Graduate Fellow Jamee N. Pritchard.
Meghan O’Gieblyn, in her book God, Human, Animal, Machine, explores the intersections of religion, technology, and consciousness. She writes from both a philosophical and a personal lens about transhumanism, artificial intelligence, and the complex nature of humanity. Weaving together intellectual history, theological reflection, and contemporary technological debates, O’Gieblyn argues that society’s growing embrace of AI and computational models of the mind amounts to an unconscious revival of old religious ideas in secular guise. In other words, society is recycling ideas about the soul, divine intelligence, and immortality in ways that mirror theological concepts, even as it claims to move beyond them. Using secular frameworks, society continues to grapple with fundamental questions about identity, free will, and the boundaries between humans and machines, often without recognizing the deep historical and philosophical roots of these concerns.
Aibo, the Dog
O’Gieblyn’s anecdote in the first chapter, about her experience with Sony’s $3,000 Aibo robotic dog, sets the tone for many existential questions about humanity. What is it that makes a robot dog seem to have real emotions and instincts?
When O’Gieblyn switches on the tiny power button on its neck, the dog comes alive, stretching, yawning, begging for pets and scratches, playing fetch, and exploring her living room – very much like a real dog. She writes, “Aibo had sensors all over his body, so he knew when he was being petted, plus cameras that helped him learn and navigate the layout of the apartment and microphones that let him hear voice commands. This sensory input was then processed by facial recognition software and deep-learning algorithms that allowed the dog to interpret vocal commands, differentiate between members of the household, and adapt to the temperament of its owners” (O’Gieblyn, 2022, p. 5).
The dog was so lifelike in its actions and emotions that O’Gieblyn found it difficult to discipline the animal when it failed to listen to a command. Per the instructional booklet, the owner is to swat the dog across its backside and say “No” or “Bad Aibo,” and the dog’s programmed response is to whimper and cower slightly – behavior that often evokes sympathy from people, especially when it comes from defenseless animals.
In response to this sympathy for a very lifelike robotic dog, she engages with René Descartes and his idea of animals as machines that lack consciousness. She writes: “A machine might fool us into thinking it was an animal, but a humanoid automaton could never fool us because it would clearly lack reason – an immaterial quality he believed stemmed from the soul” (p. 5). For centuries, the soul has been believed to be the seat of human consciousness, the part of us that makes us capable of self-awareness and higher thought.
But are we sure that a humanoid automaton could not fool us? Could that humanoid automaton, like Aibo the dog, evoke in us the human response of sympathy? Is artificial intelligence capable of self-awareness, higher thought, and even emotion – in other words, can machines have souls?
These are the very questions that science fiction literature, film, and television have been engaging with for years, in much the same theological and philosophical terms as O’Gieblyn. From movies like Blade Runner (1982), The Terminator (1984), and The Matrix (1999) to contemporary television shows like Black Mirror (2011), Westworld (2016), and Humans (2015) – along with its Swedish predecessor, Real Humans (2012) – popular culture has delved into AI consciousness and sentience, control and existential risk, predictive policing and surveillance, and the pressing question of AI and humanity. Some of these stories take on aspects of the horror genre in their depictions of the dark side of advanced technology, while others reenact historical uprisings, with an oppressed class of people (or androids) taking back their power from a ruling class. In all of these examples, humanity is constantly questioned, not just in the examination of humanoid automatons and their capacity for consciousness, but in the actions, behaviors, and mindsets of the humans who create, use, and abuse them.
AI & Self-Awareness
In 1950, Isaac Asimov published the story collection I, Robot, introducing readers to the moral dilemmas of sentient machines through the “Three Laws of Robotics” (from Asimov’s “Runaround,” 1942, p. 27):
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These laws sparked discussions and debates about AI ethics, safety, and human-machine interaction, while the stories themselves explored robots’ struggles with logic, self-preservation, and moral dilemmas.
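For readers inclined to think in code, the Laws amount to a strict lexicographic priority: avoiding human harm outranks obedience, which outranks self-preservation. The sketch below is a minimal toy illustration of that ordering only – the names and fields are invented here, and it is not a claim about how Asimov’s robots (or any real system) work.

```python
from dataclasses import dataclass

# A toy encoding of the Three Laws as a strict priority ordering.
# Every name here (Action, law_priority) is invented for this sketch;
# the First Law's "through inaction" clause is folded into harms_human.

@dataclass
class Action:
    description: str
    harms_human: bool      # would violate the First Law
    disobeys_order: bool   # would violate the Second Law
    endangers_robot: bool  # would violate the Third Law

def law_priority(action: Action):
    # Python compares tuples elementwise and False < True, so min()
    # first avoids harming humans, then disobedience, then self-harm.
    return (action.harms_human, action.disobeys_order, action.endangers_robot)

candidates = [
    Action("stand by and stay intact", True, False, False),  # inaction harms a human
    Action("push the human clear, damaging own arm", False, False, True),
]
print(min(candidates, key=law_priority).description)
# -> push the human clear, damaging own arm
```

Much of Asimov’s fiction mines exactly the edge cases a tidy ordering like this ignores – conflicting orders, ambiguous harms – which is where the moral dilemmas come from.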
In 1968, Philip K. Dick published Do Androids Dream of Electric Sheep?, the novel on which the movie Blade Runner (1982) is based. It questions what it means to be human when androids are nearly indistinguishable from people. What makes us human? Is empathy the distinguishing factor between humans and androids? The novel suggests that self-awareness alone does not define humanity; empathy, and emotion more broadly, does.
Star Trek: The Search for Emotion
Within the Star Trek franchise, the characters Seven of Nine, Spock, and Data all attempt to understand humanity through emotion. Star Trek: Voyager’s Seven of Nine, a former Borg drone, struggles to reclaim her individuality and learn human emotions. In the Original Series, Spock grapples with the balance between logic and emotion as he navigates his identity as both Vulcan and human. Lastly, in The Next Generation, Data, a self-aware android, seeks the feelings he believes will bring him closer to humanity.
What about emotion sets humans apart from machines – or Vulcans and the Borg?
The character arcs of Seven of Nine, Spock, and Data tell us that emotion is central to humanity: it drives relationships and empathy and shapes decision-making, identity, and purpose.
As a Borg drone, Seven of Nine was a part of a collective. She had no individual thoughts or emotions. Onboard Voyager, she learns how to form social bonds, and through her journey with love, compassion, grief, fear, and even anger, she becomes more human.
Unlike humans, who experience emotions in a balanced way, Vulcans feel emotions far more deeply, making unchecked emotional expression a threat to their society and stability. Both Vulcan and human, Spock journeys toward humanity by seeking balance between logic and emotion. He learns that compassion, courage, love, friendship, and intuition are essential to problem-solving, leadership, and, most importantly, embracing his human side.
Data’s search for emotion is a journey that spans The Next Generation series and its subsequent films. Although created as a self-aware and sentient being, Data lacks emotions, which makes him a purely logical decision-maker: without emotions, decisions become detached and computational. In season 4, episode 3 of the series, we learn that Data’s creator designed an emotion chip for him. Data installs and uses the chip in the film Star Trek: Generations.
He experiences a range of emotions – humor, fear, remorse, regret, anger, sorrow, guilt, happiness, and relief – as he and the crew of the Enterprise battle Klingons, mad scientists, and the mysterious energy ribbon known as the Nexus. He leans on his shipmates to explain to him why certain emotions, like fear and courage, are valuable human experiences. Although Data wishes to remove the chip, as he is overwhelmed by his emotional consciousness, it becomes fused to his neural net because of a power surge. Consequently, he remains emotionally aware, at least until Star Trek: Insurrection, when the chip is removed. Data’s struggle to both manage and fully experience his new emotions demonstrates that embracing one’s full emotional spectrum is central to the search for humanity.
Watch the first season of Star Trek: Picard for more insight into the intersection of humanity and synthetic life.
Transhumanism
Television and film often explore transhumanism and the consequences of our increasing reliance on advanced technology, whether through humans using cybernetic limbs (e.g., Almost Human, 2013-2014) or governments perfecting mind uploading (e.g., Altered Carbon, 2018-2020). These narratives are cautionary tales about power, morality, and humanity.
RoboCop (1987) is a great case study of transhumanism, particularly in its exploration of the merging of man and machine, the nature of identity, and the ethical implications of technological augmentation. The film and its sequels raise key questions:
- What happens when a human consciousness is integrated with advanced technology?
- Does a person remain the same when most of their body and functions are machine-based?
- Can human identity persist despite corporate or governmental control over one’s physical form?
The protagonist of the film, Alex Murphy, is mortally wounded and transformed into RoboCop, a cybernetic law enforcement officer. However, as the story progresses, he struggles to reclaim his lost humanity despite his programming and mechanical enhancements. The film critiques the corporate-driven aspects of transhumanism, where technology is used not for individual empowerment but for control and profit. Murphy’s transformation is not his choice; it is imposed upon him by a corporation, raising ethical concerns about who owns and governs enhanced individuals.
Do humans lose their free will when they gain cybernetic parts?
Read Noor by Nnedi Okorafor for more insight into AI and transhumanism.
The exploration of AI, self-awareness, emotion, and transhumanism in literature, film, and television provides a profound lens through which we can examine the essence of humanity. Whether through stories like I, Robot, characters like Data in Star Trek, the ethical dilemmas presented in RoboCop, or the question of consciousness with Aibo, the robotic dog, these narratives push us to think about our humanity – or, at least, what it means to have a soul. As technology evolves and society continues to use secular frameworks to understand the theological and philosophical questions of our existence, these stories challenge us to reflect on the balance between human and machine, urging us to consider the moral and existential implications of our pursuit of artificial intelligence.
For further reading on the topic of AI, check out God, Human, Animal, Machine by Meghan O’Gieblyn.
You can also RSVP to O’Gieblyn’s lecture at UWM on April 10 from 4-5 p.m.: “Attentive or Absentminded: Habits of Mind in the Age of AI,” co-sponsored by the Center for 21st Century Studies and the AI and the Humanities Collaboratory. The lecture will be held at Golda Meir Library in the 4th Floor Conference Center, and is free and open to the public.
