Reading Meghan O’Gieblyn’s deeply insightful book, God, Human, Animal, Machine, has me ruminating on some questions regarding consciousness, humanity, and AI.
Here are a few thoughts, organized around the following claim: The world would be much better off if the energy, money, intellectual effort, and prestige of Silicon Valley were redirected to more useful ends.
For instance, let’s say you find questions of consciousness, self-awareness, identity, and meaning to be fascinating. The world is already filled with billions of non-human sentient creatures—animals—who, through habitat destruction, pollution, overfishing, deforestation, sprawling horizontal development, ecosystem destruction, and the general effects of climate change, not to mention animal testing, roadside zoos, and factory farming, are being catastrophically wronged by humanity.
We don’t like to talk about this. To do so is to offer another rant that can easily be categorized and put into a box—environmentalist, animal rights, vegan, hippie, and so on.
By contrast, in the worlds of popular culture, private industry, and academia, AI is exciting, sexy, and supposedly future-oriented. Note the headline-dominating discussions, TED Talks, research grants, Silicon Valley symposia, obsequious media profiles, proliferation of robots on college campuses, and so on that herald our (apparently) exciting future.
The focus on AI, sentience, and consciousness in big tech, from titans like Elon Musk and Peter Thiel to futurists like Ray Kurzweil, is justified on the grounds that it will supposedly help us live longer and make us more intelligent through cyborg enhancements. Eventually, technological advances will perhaps even allow us to achieve immortality, as we are transferred into digital form and able to live forever in cyberspace. Many people in Silicon Valley, both visible and behind the scenes, seem to genuinely subscribe to such beliefs.
These questions of immortality and identity raise their own problems, since a digital copy of myself, even if fully conscious, would not be me but a copy of me, and I would still die. Fully conscious digital copies of ourselves, if ever possible (I am skeptical), would offer only metaphorical immortality, as a being like me would exist after I die. But this is immortality in the same way that my progeny offer me immortality—they share my genes, they may share and pass down many of my experiences and ideas, and thus I live on in a way, but it is not literal immortality. Philosophers of mind have wrestled with these fascinating questions for decades. But I digress…
Meanwhile, in the real world, humans live longer through antibiotics, cholesterol and blood pressure medication, vaccines, public sanitation, preventive medicine and universal healthcare, functioning social systems that reduce violence and address despair, and so on. Note that these actual life-saving advances, from statins to respirators, are examples of technology, broadly defined, but they are not AI. As for AI, O'Gieblyn argues that "most commercial AI systems today are designed...to perform more narrow and specific tasks: delivering food on college campuses, driving cars, auditing loan applications, cleaning up spills at grocery stores." Hardly life-saving. And yet this is where massive amounts of private capital are invested.
On the other hand, the actual life-saving medicines and practices mentioned above, underfunded in the private sphere and under constant strain in the neoliberal public sphere, are not exciting or sexy or futuristic. The same goes for reckoning with the billions of conscious beings we already share the earth with, widely known as animals.
Our treatment of animals today, both in the US and globally, is paradoxical, at once rife with care and cruelty, use of them as mere instruments and respect for them as valuable entities in their own right. Recent efforts in the US Congress, like the passage of the Big Cat Public Safety Act and bipartisan efforts to discourage the use of animal testing, embody this positive aspect, as do similar or stronger bills in the EU and elsewhere. Witness as well efforts to confer rights and other legal protections on rivers, lakes, and even ecosystems. (See, for instance, this book by Joshua Gellers). Moreover, millions of people around the world treat their pet dogs and cats as equal (or near-equal) family members and companions.
At the same time, the animals killed in factory farms, tested and discarded in labs, and devastated in the wild as their ecosystems are destroyed all stand testament to a glaring failure to address these genuinely important questions regarding what we owe non-human conscious beings.
It would be far more productive if we focused on improving the ledger here, by recognizing that consciousness exists on a spectrum and that mammals and birds, at the least, possess it in significant amounts. Even reptiles and fish have some consciousness. Perhaps plants do as well, though this seems less certain. Computers and robots, meanwhile, do not. (Whether they ever will is unclear). We currently inhabit a world filled with conscious humans and animals and unconscious machines. What does recognition of this entail, both in terms of how we live our individual lives and how we organize our institutions?
These questions and similar ones that O’Gieblyn’s book so forcefully addresses are as important now as ever. If we fetishized technology less, we would see that questions of consciousness, of how humans relate to one another and to the conscious animals that populate the ecosystems of the world, should be a guiding concern of life in the twenty-first century.
To the roboticists, computer scientists, and engineers preoccupied with AI, spare a moment to recognize that the futuristic world you envision, one rich with billions of conscious non-human life forms, already exists. And we are failing it miserably.