AI’s Next Frontier? An Algorithm for Consciousness


As a journalist who covers AI, I hear from countless people who seem utterly convinced that ChatGPT, Claude, or some other chatbot has achieved “sentience.” Or “consciousness.” Or—my personal favorite—“a mind of its own.” The Turing test was aced a while back, yes, but unlike rote intelligence, these things are not so easily pinned down. Large language models will claim to think for themselves, even describe inner torments or profess undying loves, but such statements don’t imply interiority.

Could they ever? Many of the actual builders of AI don’t speak in these terms. They’re too busy chasing the performance benchmark known as “artificial general intelligence,” which is a purely functional category that has nothing to do with a machine’s potential experience of the world. So—skeptic though I am—I thought it might be eye-opening, possibly even enlightening, to spend time with a company that thinks it can crack the code on consciousness itself.

Conscium was founded in 2024 by the British AI researcher and entrepreneur Daniel Hulme, and its advisers include an impressive assortment of neuroscientists, philosophers, and experts in animal consciousness. When we first talked, Hulme was realistic: There are good reasons to doubt that language models are capable of consciousness. Crows, octopuses, even amoebas can interact with their environments in ways chatbots cannot. Experiments also suggest that AI utterances do not reflect coherent or consistent internal states. As Hulme put it, echoing the broad consensus: “Large language models are very crude representations of the brain.”

But—a big but—everything depends on the meaning of consciousness in the first place. Some philosophers argue that consciousness is too subjective a thing to ever be studied or re-created, but Conscium is betting that if it exists in humans and other animals, it can be detected, measured, and built into machines.

There are competing and overlapping ideas about what the key characteristics of consciousness are, including the ability to sense and “feel,” an awareness of oneself and one’s environment, and what’s known as metacognition, or the ability to think about one’s own thought processes. Hulme believes that the subjective experience of consciousness emerges when these phenomena are combined, much as the illusion of movement is created when you flip through sequential images in a book. But how do you identify the components of consciousness—the individual frames, as it were, plus the force that combines them? You turn AI back on itself, Hulme says.

Conscium aims to break conscious thought down to its most basic form and catalyze it in the lab. “There must be something out of which consciousness is constructed—out of which it emerged in evolution,” said Mark Solms, a South African psychoanalyst and neuropsychologist involved in the Conscium project. In his 2021 book, The Hidden Spring, Solms proposed a touchy-feely new way to think about consciousness. He argued that the brain uses perception and action in a feedback loop designed to minimize surprise, generating hypotheses about the future that are updated as new information arrives. The idea builds upon the “free energy principle” developed by Karl Friston, another noteworthy, if controversial, neuroscientist (and fellow Conscium adviser). Solms goes on to suggest that, in humans, this feedback loop evolved into a system mediated through emotions, and that it is these feelings that conjure up sentience and consciousness. The theory is bolstered by the fact that damage to the brain stem, which plays a critical role in regulating emotions, seems to cause consciousness to vanish in patients.
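
To make that feedback loop concrete, here is a minimal sketch in Python of a surprise-minimizing agent. The one-dimensional “world,” the class, and the learning rate are my own illustrative assumptions, not Friston’s mathematics or anything Conscium has built.

```python
import random

# A toy surprise-minimizing feedback loop, loosely in the spirit of the free
# energy principle. The 1-D "world," the class, and the learning rate are all
# illustrative assumptions, not Friston's formalism or Conscium's code.

class PredictiveAgent:
    def __init__(self, learning_rate=0.1):
        self.belief = 0.0              # the agent's hypothesis about the world
        self.learning_rate = learning_rate

    def step(self, observation):
        surprise = observation - self.belief            # prediction error
        self.belief += self.learning_rate * surprise    # revise the hypothesis
        return abs(surprise)

world_state = 5.0
agent = PredictiveAgent()
for t in range(50):
    sensed = world_state + random.gauss(0, 0.5)  # noisy perception
    error = agent.step(sensed)
    if t % 10 == 0:
        print(f"t={t:2d}  belief={agent.belief:.2f}  surprise={error:.2f}")
```

Run it and the belief converges on the hidden state as the surprise shrinks: the loop Solms describes, stripped to its bones.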

At the end of his book, Solms proposes a way to test his theories in a lab. Now, he says, he’s done just that. He hasn’t released the paper, but he showed it to me. Did it break my brain? Yes, a bit. Solms’ artificial agents live in a simple computer-simulated environment and are controlled by algorithms with the kind of Fristonian, feeling-mediated loop that he proposes as the foundation of consciousness. “I have a few motives for doing this research,” Solms said. “One is just that it’s fucking interesting.”

The simulated environment is ever-changing, requiring the agents to constantly model it and adjust. Their experience of this world is mediated through simulated responses akin to fear, excitement, and even pleasure. So they are, in a word, pleasure-bots. Unlike the AI agents everyone talks about today, Solms’ creations have a literal desire to explore their environment, and to understand them properly, one must try to imagine how they “feel” about their little world. Solms believes it should eventually be possible to merge the approach he is developing with a language model, thereby creating a system capable of talking about its own sentient experience.
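
Since Solms’ paper is unreleased, any code here is necessarily guesswork, but a toy version of a feeling-mediated agent might look something like the sketch below. The “fear” and “pleasure” variables and the exploration policy are hypothetical stand-ins for whatever his agents actually do.

```python
import random

# A hypothetical feeling-mediated agent. The "fear" and "pleasure" variables,
# the exploration policy, and the environment are invented stand-ins; Solms'
# unpublished implementation may look nothing like this.

class FeelingAgent:
    def __init__(self):
        self.belief = 0.0      # hypothesis about where it is in a 1-D world
        self.fear = 0.0        # rises when surprise is growing
        self.pleasure = 0.0    # rises when surprise is shrinking
        self.last_error = None

    def act(self):
        # Pleasure widens exploration; fear narrows it toward caution.
        reach = max(0.1, 1.0 + self.pleasure - self.fear)
        return random.uniform(-reach, reach)   # how far to move this step

    def perceive(self, observation):
        error = abs(observation - self.belief)
        self.belief += 0.2 * (observation - self.belief)   # reduce surprise
        if self.last_error is not None:
            delta = error - self.last_error
            self.fear = max(0.0, self.fear + 0.5 * delta)          # growing surprise feels bad
            self.pleasure = max(0.0, self.pleasure - 0.5 * delta)  # shrinking surprise feels good
        self.last_error = error

agent = FeelingAgent()
position = 0.0
for t in range(30):
    position += agent.act()                          # move through the world
    agent.perceive(position + random.gauss(0, 0.3))  # noisy sense of position
print(f"fear={agent.fear:.2f}  pleasure={agent.pleasure:.2f}")
```

The design choice worth noticing is that affect, not an explicit reward signal, steers behavior: the agent explores more when its model of the world is improving and pulls back when it is not.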
