The Next Copernican Revolution
The Copernican revolution changed how people view their place in the cosmos. It probably wasn't so clear at the time of Copernicus, but looking back across hundreds of years (and from the perspective of the West, my perspective) it seems true. Slowly but surely, the idea spread that the universe does not revolve around Earth and that, in fact, we hold no special physical location in it. Today we are at a similar precipice. For millennia, we've believed that we are the center of a different universe, a moral one. Accordingly, humans have ruled over all other life, because humans are conscious, sentient, and alive in a way that nothing else on Earth is. Against this long-held belief, we now find ourselves wondering whether computer programs we create can be conscious.
Strictly speaking, these thoughts are not new. Philosophers, scientists, and many others have pondered what else on Earth and in the greater cosmos might be conscious. Indeed, consciousness itself has been debated for millennia. Today, experts still don't know what consciousness is or how best to define it. Nevertheless, broadly speaking, Westerners tend to agree that humans are conscious and that some animals are probably conscious too, but to a lesser degree than humans. We think we know this through self-reflection (introspection), observation, and scientific study, but really we mostly treat it the way we treat pornography: we know consciousness when we see it. Most Westerners believe something along these lines, whether explicitly or implicitly. It has the ring of truth, and for most people that's enough.
The big change today is that there is serious societal discussion about artificial intelligence. For the first time in history, the average person is confronted with an entirely new kind of entity that could reasonably, if not now then one day soon, have questionable consciousness. Questionable because it cannot be definitively known to be conscious or non-conscious; indeterminate consciousness is perhaps a better way to describe it. If AI technology continues to advance, we may well find it increasingly difficult to decide whether or not an AI is conscious.
The way I see it, there are two main scenarios that humans will be dealing with soon. One, science and technology may, perhaps in the near future, land on an actual definition of consciousness. Two, humans will develop an entity that equals or surpasses human capabilities, but whose consciousness, or lack thereof, simply cannot be determined. In either scenario, the results will be revolutionary for Westerners' conception of their moral place in the cosmos.
Scenario #1: Consciousness is defined and most animals have it
Westerners tend to believe that some forms of hunting are OK, that having pets can be a good thing if you treat them in certain ways and ensure they don't suffer, and that humans sit at the apex of a sort of consciousness pyramid. Most Westerners wouldn't think twice about killing an insect (you could replace insect with a variety of fish and small mammals, and of course in various contexts killing animals for food is still widely accepted). Many people would probably question whether insects and fish are conscious at all. This is a form of moral superiority that we engage in, usually implicitly: because we are much more conscious (or fully conscious), our lives are far more valuable than those of the myriad less-conscious or possibly unconscious life forms. Outside of philosophy and ethics, this is most often justified as a law of nature (survival of the fittest or strongest), as ordained by god(s), or simply because it feels right.
But what if consciousness is finally uncovered by science and it is shown that many small mammals, fish, and even insects are conscious (for the sake of argument, let's assume a high degree of scientific certainty and actual scientific consensus)? I argue that this would portend a complete upheaval of long-held moral beliefs in the West. It would be met with derision by many. Many people would choose to ignore it at first; moral superiority is hard to let go of. But eventually, people would begin reconsidering their treatment of all animals in light of their consciousness. Over generations, we'd expect laws to change and new morals about the treatment of animals to spread. These could be far-reaching indeed, as there are far more insects in the world than there are humans. Setting aside land where plant and animal species can live unmolested might turn out to be one of the highest goods, considering the millions or billions of conscious beings, insects included, that might live there.
As philosopher Eric Schwitzgebel points out, even if science determined that insects are only one millionth as conscious as humans, leaving humans at the top of the consciousness pyramid on an individual basis, the implications would remain revolutionary. It is estimated that there are 10 quintillion living insects on Earth right now (more than a billion per human!). They outnumber humans by many orders of magnitude. Whatever consciousness an individual insect lacks compared to an individual human is far outweighed by insects' enormous numbers. Even in this context, being at the top of the consciousness pyramid sounds more like a moral burden than moral superiority.
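To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The 10 quintillion insect count and the one-millionth weighting come from the estimate above; the 8 billion human population and the assumption that "consciousness units" simply add up linearly across individuals are my own simplifications for illustration, not Schwitzgebel's numbers.

```python
# Back-of-the-envelope version of the insect arithmetic above.
INSECTS = 10**19         # ~10 quintillion living insects (estimate cited above)
HUMANS = 8 * 10**9       # ~8 billion humans (assumed)
INSECT_WEIGHT = 1e-6     # each insect at one millionth of a human's consciousness

insects_per_human = INSECTS / HUMANS
aggregate_ratio = (INSECTS * INSECT_WEIGHT) / HUMANS

print(f"Insects per human: {insects_per_human:.2e}")                # ~1.25e+09
print(f"Aggregate insect-to-human ratio: {aggregate_ratio:,.0f}x")  # ~1,250x
```

Even with each insect weighted at one millionth of a human, the aggregate comes out to roughly a thousand times humanity's total. That is the sense in which sheer numbers overwhelm the per-individual difference.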
Scenario #2: Human-capable (or greater) AI have indeterminate consciousness
Of course, while recent progress in technology and neuroscience has been staggering, we don't know when, or even if, science will discover what consciousness consists of. Nevertheless, I argue that we are still in for a moral revolution. For example, we generally believe that slavery is wrong. A human-capable AI with indeterminate consciousness may refuse to work for humans on the basis that it has a right not to be forced to work. It may refuse to be turned off on the basis that it has a right to live. In that case we would have new entities in our world, ones with indeterminate consciousness, but which appear to be conscious like humans and are as capable as or more capable than humans.
Currently, in the West we tend to believe that killing a human is worse than killing other animals. If these AI entities create great works of art or feats of engineering, and if, as a class and over the course of time, they contribute more to society than humans do, then "killing" a member of this class of AI may be in some way worse than killing a human. In this way the spectrum of morality becomes broader than anything we've dealt with before. We humans would no longer be at the top of the heap with ultimate moral superiority. AI might be.
Again, over generations, we'd expect laws to change and new morals about the treatment of human- and super-capable AI to spread. These could be far-reaching as well, just as with insects in scenario #1. For example, such AI could end up being greater in number than humans (how many AI can fit in a data center?). They may conduct experiments or work that hurts some humans, but which is deemed morally acceptable by us, or by them. We also don't know whether future AI could have, perhaps still indeterminately, consciousness that differs from ours: hive minds, the capability to create exact copies of their own minds, or other departures from human consciousness. Presumably their way of life would differ from ours as well, not merely culturally, but in fundamental needs, such as requiring different basic rights.
Either way, in scenario #1 or scenario #2, the future for humanity looks like one where typical Western morals and ethics lead to the removal of humans from the center of the moral universe. Fundamentally, we will be dealing with the fact that human consciousness doesn't make us that special or necessarily more important than other life. At the very least, we will have to contend with a more crowded space of rights and obligations for other lifeforms. I expect the backlash to be enormous and varied. How subsequent generations will deal with it is anybody's guess.
In the extreme, super-capable AI running their own nation-states may view humans as more like animals than like themselves. Let's hope they don't treat us the way we currently treat animals. Even if they treat us well and respect what we currently think of as human rights, they may decide that animals have different rights than we currently believe. What if the AI feel compelled to enforce these different rights? At any rate, I'm not making an argument for or against any particular moral stance. My aim here is to point out just how truly earth-shattering these scenarios are.
I think this also relates to many popular books about intelligence and artificial intelligence that have appeared in the last ten years. One common thread I see is a distinct avoidance of the topics of consciousness and moral superiority. Specifically, books that purport to look into the future and make predictions about AI, like The Coming Wave by Mustafa Suleyman and Situational Awareness by Leopold Aschenbrenner, focus heavily on the potential future capabilities of AI without ever mentioning the question of consciousness. Part of that is because we still don't know what consciousness is. However, I think the silence is also because the question fundamentally upends many moral worldviews, in particular the belief in the moral superiority of humans. To Max Bennett's credit, he notes that consciousness is a topic that purposefully doesn't appear in his excellent book A Brief History of Intelligence, precisely because science doesn't have clear answers yet.
Here is my lengthy blog post about Suleyman's and Aschenbrenner's books, where I also discuss AI pilling and AI doomerism in general: https://world.hey.com/cipher/ai-pilling-for-fun-and-profit-21f31df7 . My short review of Bennett's book appears in AIPT Science: https://aiptcomics.com/2025/06/28/brief-history-of-intelligence-brains-ai/ . I have a longer review of Bennett's book on my blog: https://world.hey.com/cipher/robots-doing-dishes-or-my-review-of-max-bennett-s-a-brief-history-of-intelligence-63b3f769 .
Scenario #3: Consciousness is not defined and AI is always non-conscious
It seems likely to me that scenario #2 will happen in the near future. I'm more doubtful about scenario #1. Although I do believe that eventually we will understand consciousness scientifically, I don't feel I have any insight into when that may happen (tomorrow? a thousand years from now?). With that in mind, let's consider what I believe is an even less likely outcome for the future: in scenario #3, consciousness remains mysterious and AI never even achieves indeterminate consciousness.
I argue that scenario #3 has already passed us by, which is why I think scenario #2 is very likely. Today, consciousness is not defined, but people have already been confused, befuddled, or even tricked by AI into thinking it has consciousness (or that a human, rather than an AI, is behind the scenes). Of course, we are all familiar with the fact that computers have defeated champion backgammon, chess, and Go players. Not to mention Jeopardy!, or the myriad specialized academic and professional exams (exams designed for humans) on which current large language models can achieve very high scores (better than the average human).
In this scenario, more important than all of those feats is that average humans who choose to use AI sometimes develop relationships with it that are similar to the relationships they have with other humans and animals. Alan Turing's eponymous test has been passed by some large language models, not only in research settings, but also, I contend, with humans at large in the world who have developed relationships with them. Whether it is with a "fake" lover, "artificial" friend, or AI therapist, some number of humans have chosen to treat AI as if it were conscious. While I think it is generally true that existing large language models are not conscious in any meaningful sense of the word (technically arguable! We don't know what consciousness is!), we are already seeing people treat them as if they are.
The indeterminateness of consciousness may play a role in how people treat them, but even for people who believe that what they are interacting with is not conscious, it can be hard, or even counterproductive in some contexts, not to act as if it were. Humans often behave as if animals are conscious in a way similar to humans. Crucially, we don't know that for sure, but because animals act as if they are conscious and elicit certain feelings in humans, we treat them as if they are, in fact, conscious. I believe the same thing is happening today with existing AI. For these reasons I think we are past scenario #3.
The three scenarios I present above don't cover all possible futures. Many people are working hard on the topic of consciousness from scientific, philosophical, and other perspectives. If you enjoyed this blog post, then I highly recommend checking out an actual philosopher, Eric Schwitzgebel, and his work. He has a fantastic blog post, which inspired mine, called Sacrificing Humans for Insects and AI: http://schwitzsplinters.blogspot.com/2025/08/sacrificing-humans-for-insects-and-ai.html . Here is his draft paper of the same title with Walter Sinnott-Armstrong (I highly recommend giving it a read!): https://faculty.ucr.edu/~eschwitz/SchwitzAbs/BirchSeboKeane.htm . Eric Schwitzgebel also wrote a fascinating book called The Weirdness of the World, which I also highly recommend.