The newest AI no longer behaves merely as a tool. Systems like Claude plan, adapt, and act. With that, an old question has resurfaced: "Is the machine on the verge of becoming conscious?"
There is just one problem with this recurring line of speculation: science has yet to determine where consciousness begins, or what it actually is. From a mollusc to a chicken to an ape, there is only a gradual transition, a continuum of increasingly complex sensation and experience. The English language offers a whole spectrum of terms that attempt to capture different levels of consciousness in biological beings: sentience, agency, awareness, and consciousness, to name a few. But these concepts overlap, and there is little consensus on where one ends and the next begins.
So when we cannot agree on (or even understand) the terms and concepts, how can we possibly hope to place AI on that scale? Are some AI systems already "higher beings" than a chicken because they can reason, or will technology always, under all circumstances, remain "lower" because it lacks biological senses and neurochemicals?
In this conversation, at the intersection of biology and technology, we attempt to clarify the concepts and to sharpen our sense of which ethical and philosophical discussions are worth having as AI systems grow ever more powerful.
Melanie Challenger is a British writer and researcher, particularly concerned with the relationship between humans and other species. Among her recent publications are "How to Be Animal: What It Means to Be Human" (2021), "Animal Dignity: Philosophical Reflections on Non-human Existence" (2023), and "Alive: The Hidden Intelligence of the Living World" (2026).
Inga Strümke is one of the most prominent voices in the conversation on artificial intelligence in Norway. She is an associate professor at the Norwegian Open AI Lab at NTNU, and the author of "Maskiner som tenker" (Machines That Think) (2023).