No, There Will Never Be Sentient Computers
Part 5 of "AI and the Future of Work"
Turing Machines will never understand symbols or experience their meaning
No Turing Complete machine can achieve human consciousness, sentience, self-knowledge, or semantic understanding of what it is doing.
There is simply no path by which any of these attributes could ever be programmed into a digital computer.
Nor is there any rational basis to imagine that any of these attributes might arise spontaneously within a computer as emergent properties, no matter how intelligent and "human-like" that computer may appear when observed from the outside.
What Is It Like to Be a Bat?
As discussed in the previous Part of this essay, a practical working definition for what it will mean to achieve Artificial General Intelligence (AGI) does not require computers to achieve consciousness or sentience.
But here in Part 5 of this essay I want to make a stronger claim: not only are consciousness, sentience, self-knowledge or semantic understanding not necessary for AGI; they are not possible within present architectures for digital computers.
All functioning digital computers ever built are "Turing Complete" machines. They all meet the definition given by Alan Turing in his 1936 paper, as further operationalized in digital (electronic) form and written up by John von Neumann in 1945 (the "von Neumann Architecture").
There is probably no chance to convince skeptics of the above statements. But to back these up with some persuasive authority, I will very briefly discuss five papers:
(1) John Searle's Minds, Brains, and Programs (1980), which introduced the "Chinese Room" argument,
(2) David Chalmers's Facing Up to the Problem of Consciousness (1995), which framed the "Hard Problem of Consciousness,"
(3)-(4) Two papers by Kurt Gödel: his thesis (1929), in which he first presented his Completeness Theorem, and his paper On Formally Undecidable Propositions of Principia Mathematica and Related Systems (1931), which included his First Incompleteness Theorem, and
(5) Alan Turing's paper On Computable Numbers, with an Application to the Entscheidungsproblem (1936), in which he imagined the computer.
But first, let me put one fallacious argument to rest. It is often claimed that constructing Turing machines with sentience must be possible, because we human beings are sentient and we are Turing machines. Turing was in fact careful to define his machine in such a way that anything his machine could do, humans also can do, at least with the help of paper and pencil. I believe this is the reason Turing explicitly limited his machine's function to "finite means." So yes, human beings embody Turing machines. But that we are in part Turing machines does not in any way imply the converse: that Turing machines have the capacity to be like us in other ways.
That would be like concluding that because a tent and a cathedral are both designed to keep out the rain, a tent can do everything a cathedral can do.
(1) The Chinese Room
A person who does not speak Chinese sits in a room. Messages written in Chinese are slipped under the door. This person has a big book with two columns. In the first column is a list of Chinese-language messages, and in the second column, appropriate and convincingly human Chinese-language responses for each. When a message arrives, the person locates the identical message in the book, copies out the prepared response, and slips it back under the door.
From the perspective of anyone outside the room who speaks and reads Chinese, this arrangement will pass the Turing test. The Chinese speaker sends messages in and gets perfectly reasonable, human responses back. But the person inside the room is operating purely mechanically. They do not understand the messages they are receiving or transmitting. They do not have the foggiest idea of the meaning of what they are doing.
The Chinese Room is a metaphor for how computers can appear convincingly intelligent while lacking self-knowledge and understanding, the essential objects of consciousness or sentience.
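For readers who think in code, here is a minimal sketch (in Python) of the room as pure symbol manipulation. The messages and replies below are invented placeholders; the point is only that the lookup never consults what any of the characters mean.

```python
# A minimal sketch of the Chinese Room as pure syntax: the "book" is a lookup
# table pairing input strings with prepared response strings. The example
# messages are invented for illustration.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢你。",
    "今天天气怎么样？": "今天阳光明媚。",
}

def chinese_room(message: str) -> str:
    """Return the prepared response by exact string matching only.

    Nothing here depends on what the characters mean; the same code would
    work for any alphabet of symbols the operator cannot read.
    """
    return RULE_BOOK.get(message, "对不起，我不明白。")

print(chinese_room("你好吗？"))  # -> 我很好，谢谢你。
```

Nothing essential would change if the lookup table were replaced by a far more elaborate rule for producing responses; the operator would still be matching and copying symbols without ever engaging their meaning.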
In his 1980 paper introducing the “Chinese Room” thought experiment, philosopher John Searle distinguished two potential forms of AI. "Weak AI" generates human-like responses but does not experience anything equivalent to internal human mental states; it lacks self-knowledge of what it is doing or why. By contrast, "Strong AI" would embody human-like understanding.
Searle used the Chinese Room to make it clear why passing the Turing test, or otherwise displaying convincingly human behavior, does not imply that a machine is exhibiting Strong AI. Strong AI is neither an essential nor an emergent property of a computer's human-like intelligent behavior.
The Distinction between Syntax and Semantics
The Chinese Room is an excellent illustration of the actual operation of Large Language Models (LLMs) in particular, and Turing Complete machines in general.
The ability to process a string - to compare its characters to a list of strings printed in a book, find the identical string, and send back the paired response - makes use only of the string's syntax. The person inside the room has no access to the semantic content, the meaning that can be read into the strings. He or she has full access to the formal alphabet, but no access to the information associated with the strings.
A string of characters printed on a page is syntax. By contrast, human beings interpret strings by reference to ideas in our heads that are not local to the string. What we call the "meaning" of a word or sentence is a semantic value we assign - largely arbitrarily - to the string.
Computers can only process strings at the level of syntax. LLMs primarily process text strings and generate high-probability strings in response. This processing can be immensely fast and complex, involving trillions of operations, and can simulate human-like responses to an amazing degree. But LLMs operate mechanically on strings as inputs and outputs, and there is no place in their architecture where any semantic engagement can be located.
They never understand what they are doing in the human sense of understanding meaning. There is no need for it. It would add nothing to the mechanics of successful AI models.
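To make this concrete, here is a toy illustration - emphatically not a real LLM, which learns vector embeddings and attention weights over enormous corpora - of how plausible-looking text can be produced by nothing more than counting which strings tend to follow which:

```python
# A toy stand-in for "generating high-probability strings": a bigram model that
# "writes" by always picking the word that most often followed the previous
# word in its training text. Everything it does is counting and string lookup.
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, how often every other word follows it."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows: dict, start: str, length: int = 10) -> str:
    """Emit the most frequent continuation at every step - pure syntax."""
    out = [start]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)
print(generate(model, "the"))  # -> "the cat sat on the cat sat on the cat sat"
```

Scaling this idea up by many orders of magnitude changes how convincing the output is, but not what kind of thing is happening: strings go in, statistically likely strings come out.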
(2) The Hard Problem of Consciousness
The "Hard Problem of Consciousness" is a term originally coined by the philosopher David Chalmers in a 1994 talk, subsequently published in 1995 in the Journal of Consciousness Studies. In a clear parallel with the "weak AI" versus "strong AI" distinction in the Chinese Room, Chalmers talks about the "easy" and "hard" problems of consciousness. The easy problems are concerned with the function, dynamics, and structure by which the living brain integrates sense data and memories from different sources and uses it to influence behavior. It is focused on the deterministic physical processes that support cognition.
Chalmers points out that the word "consciousness" is an ambiguous term used to refer to a very wide range of phenomena:
the ability to discriminate, categorize, and react to environmental stimuli;
the integration of information by a cognitive system;
the reportability of mental states;
the ability of a system to access its own internal states;
the focus of attention;
the deliberate control of behavior;
the difference between wakefulness and sleep.
What all of the above have in common is that they concern cognitive abilities and functions. The scientific effort to explain these phenomena by reference to mechanisms in the brain - to map brain functions to particular locations, design reproducible experiments, and test falsifiable hypotheses - is what Chalmers calls the (relatively) "easy" problems of consciousness.
All of the above - although not yet fully explained - are potentially explainable by reference to computational processes and/or neural mechanisms.
By contrast, the hard problem of consciousness is the problem of experience. Following the philosopher Thomas Nagel and his famous essay What Is It Like to Be a Bat? (1974), Chalmers asserts that there is "something it is like" to be a conscious organism. Chalmers describes it as follows:
Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.
Chalmers claims that even a full explanation of brain physiology and cognitive processing regarding the above phenomena will not and cannot explain subjective experience.
I agree with Chalmers that any attempt to claim that the reality of subjective experience is fully explained by reference to brain mechanisms and functions is reductive, evasive, and not intellectually honest. Cognitive processing and human experience are not one and the same. This is obvious from my own experience. Experience is real, whether I am speaking of the experience of knowing I am thinking, the experience of understanding the meaning of a symbol or statement, my experience of confidence that 2+2 = 4 is a true assertion, or my experience of "qualia" - sensations such as pain or the redness of the color red.
I should here alert you that there are self-confident academic philosophers of cognition who will argue the opposite. But for me, dismissing the reality of my experience of my own inner states because I cannot take them out and show them to an interlocutor is just gaslighting with which I choose not to engage further.
Relevance of The Hard Problem of Consciousness to the AGI Debate
In this paper, Chalmers is not concerned with computers or AI; he focuses exclusively on theories of mind and cognition in humans. But his description of the "easy problems" in cognition parallels quite closely what we know about the syntactic functioning of today's AI based on LLMs.
The "multimodal" capacity of LLMs to combine text, images, voice, video, etc. dynamically in problem-solving is recognizably:
the ability to discriminate, categorize, and react to environmental stimuli; [and] the integration of information by a cognitive system;
The ability of LLM "reasoning models" to report their own intermediate results sounds an awful lot like:
the reportability of mental states; [and] the ability of a system to access its own internal states;
The trainability of LLMs using Reinforcement Learning from Human Feedback (RLHF), so that they can apply the knowledge encoded in their learned vector-space embeddings to respond to specific human queries in a form and format that is "safe" and useful, is very close to:
the focus of attention; [and] the deliberate control of behavior;
Finally, we can turn LLMs off, which is not unlike:
the difference between wakefulness and sleep.
None of these LLM capacities implies the presence within a computer of what all sentient living beings share - experience. And so the "hard problem" of consciousness reappears at the boundary between what digital programming can do and what sentient beings can know.
Why does it matter? Because cognitive abilities and functions are deterministic data-processing at the level of syntax. Experience is understanding and awareness at the level of semantics. Syntax and semantics are just as distinct - and mutually uncomprehending - as the observed functions of mechanisms in the brain and subjective human experience.
One last objection: what if consciousness is an emergent property of complexity? I am skeptical. The idea that the capacity to read semantic interpretation (experience of meaning) into a string could arise spontaneously within a massive syntax-processing Turing machine raises the question of who this experiencer is and where they are located.
Expecting semantic cognition to emerge from syntactic processing is like expecting a flower to arise from rain. Although rain helps flowers to grow, on its own no amount of rain will make a flower.
The Syntax / Semantics Distinction Is at the Heart of Efforts to Formalize Mathematical Logic
(3) Completeness
The distinction between syntax - rules - and semantics - meaning - is not some late-arriving invention of AI researchers or philosophers of mind. Rather, it emerged as an essential method for developing rigorous theorems in mathematical logic.
Mathematicians start out building axiomatic systems using undefined terms, choosing axioms that they believe are independent (no axiom can be derived from the others) and consistent (the axioms taken together do not lead to a contradiction).
A group of late 19th- and early 20th-century mathematicians, led by David Hilbert (1862-1943), believed that to put mathematics on a truly solid foundation, it should be possible to derive, or prove, all the theorems of mathematics, starting with the basic principles of arithmetic, from a well-chosen set of axioms, through a rigorous mechanical process stripped of any ambiguity.
This derivation or proof process would ideally be completely deterministic, involving mechanical manipulation of symbols according to rules - purely at the level of syntax, similar to what a machine could do - without reference to the potentially ambiguous or error-prone human semantic values assigned to either axioms or theorems.
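A small sketch may help show what mechanical manipulation at the level of syntax looks like in practice. The axiom strings below are invented placeholders, and the single rule of inference (modus ponens: from "X" and "X->Y", derive "Y") is applied by string matching alone:

```python
# Derivation as pure syntax: new "theorems" are produced from axiom strings by
# repeatedly applying one rule until nothing new appears. The machine doing
# this needs no idea what p, q, or r stand for.

axioms = {"p", "p->q", "q->r"}

def derive(theorems: set[str]) -> set[str]:
    """Apply modus ponens to the strings until the set stops growing."""
    while True:
        new = {
            s.split("->", 1)[1]
            for s in theorems
            if "->" in s and s.split("->", 1)[0] in theorems
        }
        if new <= theorems:
            return theorems
        theorems |= new

print(derive(set(axioms)))  # e.g. {'p', 'q', 'r', 'p->q', 'q->r'} (set order varies)
```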
Axioms exist simultaneously in two frameworks: the framework of syntax, in which they are used to derive or prove theorems using rules, and the framework of semantics, in which both the axioms and the theorems are also understood to be in some sense true.
In an influential book by Hilbert and Ackermann, Principles of Mathematical Logic (1928), Hilbert's challenge was put as follows: is a given explicit set of axioms for the predicate calculus (the logic of statements of the form "for all" or "there exists" and their negations) "complete," in the sense that every statement that is semantically "true" under every interpretation can also be proven from the axioms by the rules alone?
Gödel successfully proved this result as his "Completeness Theorem" the very next year, in his 1929 thesis, published in 1930.
(4) Incompleteness
However, the Completeness Theorem concerns pure predicate logic. Hilbert's larger goal was to do the same for mathematics itself, beginning with arithmetic: to find axioms from which every true statement could be proven, so that semantic truth and syntactic provability would fully coincide. Hilbert and most others were certain this goal could eventually be achieved, but in 1931 Gödel proved the exact opposite: in any consistent axiomatic system rich enough to include arithmetic, there exist statements that are semantically "true" - statements an informed human can recognize as true - that are not syntactically provable from the axioms of the given system.
This extraordinary result drove a fundamental conceptual divide between syntax and semantics. Gödel demonstrated that syntactic provability and semantic truth are not co-extensive in any sufficiently rich mathematical system.
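In the standard notation of mathematical logic - writing ⊢ φ for "φ is provable from the axioms" (syntax) and ⊨ φ for "φ is true under every interpretation" (semantics) - the two results, together with the easier soundness direction, can be stated side by side:

```latex
% Soundness (the easier direction): whatever is provable is true under every interpretation.
\vdash \varphi \;\Longrightarrow\; \models \varphi

% Completeness (Gödel, 1929): for first-order predicate calculus,
% whatever is true under every interpretation is provable.
\models \varphi \;\Longrightarrow\; \vdash \varphi

% First Incompleteness Theorem (Gödel, 1931): for any consistent, effectively
% axiomatized theory T rich enough to express arithmetic, there is a sentence
% G_T that is true of the natural numbers but not provable in T.
\mathbb{N} \models G_T \qquad \text{and} \qquad T \nvdash G_T
```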
Note that even adding the true-but-unprovable statement to the system as a new axiom - making it "assumed to be true" - does not solve the problem: the argument can be run again on the enlarged system, producing a new statement that is once more recognizably true but unprovable even from the extended collection of axioms.
(5) The Entscheidungsproblem
Gödel's unexpected result still left open another very influential challenge problem laid down by Hilbert: to determine whether there exists a mechanical procedure (a "decision procedure") by which any well-formed statement in a given formal system can be determined, in a finite number of steps, to be provable or not.
Alan Turing took up this challenge in a radically original way. Rather than constructing a conventional proof, he engaged in a thought experiment in which he designed in complete detail a new kind of machine that could execute all possible syntactical, mechanical procedures - operating on any finite alphabet of inputs, and programmed with all possible algorithms for all possible functions. Turing then demonstrated the existence of a class of problems - equivalent to Hilbert's well-formed statements - that no version of his machine could solve.
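The heart of that demonstration - what we now call the halting problem - can be sketched in a few lines. The function halts below is hypothetical: Turing's argument is precisely that no such universal decider can exist.

```python
# Sketch of Turing's diagonal argument, phrased in Python rather than in terms
# of tape machines. Suppose, for contradiction, that a perfect halting decider
# existed:

def halts(program, data) -> bool:
    """Hypothetical: would return True iff program(data) eventually stops.
    Turing showed no such procedure can exist; this stub only marks the gap."""
    raise NotImplementedError("no such decision procedure exists")

def paradox(program):
    """Do the opposite of whatever the decider predicts about program(program)."""
    if halts(program, program):
        while True:       # predicted to halt? then loop forever
            pass
    return                # predicted to loop forever? then halt immediately

# Asking whether paradox(paradox) halts defeats the decider: it halts exactly
# when halts() says it does not. So no Turing machine can decide halting - and,
# by a further reduction, none can solve Hilbert's Entscheidungsproblem.
```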
It is quite extraordinary that the entire computing age was ushered in by a thought experiment to design a machine so powerful and generalizable that it is meaningful to demonstrate the existence of problems such a machine can never solve.
To prove that Hilbert's challenge was impossible, Turing also needed to be very careful to ensure that his machine required no understanding of what it was doing - that it operated only in a purely mechanical and deterministic manner. In other words, to show that Hilbert's long-sought decision procedure does not exist, a Turing machine must operate exclusively at the level of syntax, never semantics. What this means is that Turing Machines, by design, have no access whatsoever to the human concept of "meaning" - they do not "understand" what they do; they simply manipulate strings in relation to other strings.
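What such a machine consists of can be written down almost in its entirety. The sketch below is a minimal illustration with an invented example machine (it flips every 0 to 1 and vice versa, then halts at the first blank); everything it does is matching a state and a symbol against a finite table of rules.

```python
# A minimal Turing-machine sketch: a tape of symbols, a head position, a current
# state, and a table of purely mechanical rules.
# Transition table: (state, symbol read) -> (symbol to write, head move, next state)
RULES = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", " "): (" ", 0, "halt"),
}

def run(tape: list[str], state: str = "scan", head: int = 0) -> str:
    """Apply the rules until the machine halts. At no point does the machine
    'know' what 0 or 1 stands for; it only matches and rewrites symbols."""
    while state != "halt":
        symbol = tape[head] if head < len(tape) else " "
        write, move, state = RULES[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        head += move
    return "".join(tape)

print(run(list("010011")))  # -> 101100
```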
For this reason alone, any expectation that Turing Machines will achieve semantic capacity if they can only be scaled enough or trained enough flies in the face of the whole idea of what a Turing Machine actually is.
Epilogue
Any semantic interpretation of a Turing machine's inputs and outputs stands outside the machine itself, in the mind of a human mathematician who knows what the symbols of its alphabet mean.
Mathematicians cannot help but assign meaning to their axioms and theorems, even when they try to construct completely abstract and formalistic systems. Doing math involves computation and formalism at the level of syntax, but simultaneously exists semantically at the level of aesthetic, emotional, and imaginative human experience.
For those who live in its precincts, mathematics is "the Music of Reason" (Jean Dieudonné). But it is a form of reason that for some is purely abstract and aesthetic, and for others answers practical questions that arise from human sensory experience.
In either incarnation, mathematics is an activity reserved for sentient beings who experience meaning.
In this sense, computers only crunch numbers; they cannot actually "do math" at all.
I want to thank my brother, Gideon Egger (who, in addition to having a graduate degree in Physics from Courant, studied philosophy of mind with Thomas Nagel as an undergraduate at NYU), for reviewing this rather technical Part of my essay and suggesting a number of improvements, which I have incorporated.


