The Illusion of Understanding
Syntax, Semantics, and the Limits of Machine Thought
Do AI systems truly "think," or do they simply imitate thinking by manipulating symbols? When we ask this, we are really trying to understand whether there is a real difference between syntax and semantics — between a sign and its meaning. This difference is central to how we understand thinking itself.
1. The Syntax–Semantics Divide
To dramatize this divide, John Searle proposed his famous "Chinese Room" thought experiment. He imagined a person locked in a room who follows a detailed rulebook for manipulating Chinese symbols. To an outside observer, the person seems to understand Chinese perfectly; in reality, they are only following formal rules without understanding anything. Searle's point is that syntax alone is not sufficient for semantics.
"Even if a machine gives perfect answers, it is still just moving symbols around — without real understanding or consciousness."
— John Searle, Chinese Room Argument
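The room's mechanism can be sketched as a few lines of code. Everything here is a toy illustration, not Searle's original example: the "rulebook" is just a hypothetical lookup table, and nothing in the program represents meaning anywhere.

```python
# Toy illustration of Searle's point: a purely syntactic responder.
# The rulebook is hypothetical; it maps input symbol strings to output
# symbol strings with no representation of meaning anywhere.
RULEBOOK = {
    "你好吗": "我很好，谢谢",      # "How are you?" -> "I'm fine, thanks"
    "你叫什么名字": "我叫小明",    # "What is your name?" -> "My name is Xiaoming"
}

def chinese_room(symbols: str) -> str:
    """Return whatever response the rulebook dictates, or a stock fallback.
    Nothing here 'knows' Chinese: it is pure symbol lookup."""
    return RULEBOOK.get(symbols, "对不起，我不明白")  # "Sorry, I don't understand"

print(chinese_room("你好吗"))  # looks fluent to an outside observer
```

To an observer reading only the outputs, the function appears conversational; inspecting the code shows there is nothing to understand with.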
2. The Black Box of Deep Learning
This leads to an important question in modern AI: even if systems like Large Language Models (LLMs) perform very well, does that mean they understand anything — or are they just using complex statistical patterns?
Unlike older AI systems, modern neural networks learn from huge amounts of data and build internal structures that even their creators often cannot fully explain. If we do not understand how these systems work, can we really say whether they understand anything?
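The phrase "complex statistical patterns" can be made concrete with the simplest possible language model, a bigram predictor. Real LLMs are vastly more sophisticated, but the principle sketched here (predicting the next token from observed co-occurrence statistics, with no appeal to meaning) is the same in spirit; the corpus is invented.

```python
from collections import Counter, defaultdict

# Minimal sketch of "statistical pattern" text prediction: a bigram model.
# It counts which word follows which in a tiny made-up corpus, then predicts
# by raw frequency. No semantics is involved at any point.
corpus = "the cat sat on the mat and the cat slept".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # tally each observed word pair

def most_likely_next(word: str) -> str:
    """Pick the statistically most frequent successor of `word`."""
    return counts[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints "cat", chosen by frequency alone
```

Scaling this idea up by many orders of magnitude, and replacing counts with learned neural representations, is what makes the interpretability question hard: the internal structures are no longer inspectable tables.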
3. The Problem of Human Understanding
This debate often assumes that we clearly understand how human thinking works. But do we? Is understanding just brain activity, or does it involve something irreducibly subjective?
- Human memory is unreliable: we forget, misremember, and are biased.
- Humans also "learn rules" through life experience, much like AI trains on data.
- Asking whether AI "really thinks" may not even be the right question.
4. The Case of "Justice"
Take the idea of "justice." Even Plato struggled to define it clearly. When people use the word, they rely on personal experiences and emotions rather than formal definitions. This leads to the Corrupted Context Effect:
- **Semantic Differentiation**: the general meaning becomes mixed with personal feelings.
- **Apperception**: interpreting things through past experiences, often distorting reality.
- **Cognitive Construction**: we do not receive meaning; we build it individually.
- **Idiolect**: each person has a unique way of using language.
5. Political Examples of "Justice"
We can see three radically different versions of "justice" operating simultaneously in world politics:
| Figure | Definition of Justice |
|---|---|
| Vladimir Putin | Restoring the historical role of the USSR, based on past grievances. |
| Donald Trump | Winning. If his side loses, it is "unfair." |
| Iranian Ayatollahs | Compliance with religious law, ignoring modern international values. |
AI has no personal history or emotional bias. It can give fact-based answers within its limits — but does that make it less intelligent?
6. Eliminative Materialism and Artificial Emotions
According to eliminative materialism, mental states such as joy or pain are nothing over and above brain processes. If so, there is no principled reason why functionally similar processes could not exist in AI.
Anthropic Research — Claude 4.5
Researchers identified "emotion vectors" linked to concepts like fear or joy. These are triggered by the meaning of a situation — a "fear" pattern appears when a dangerous situation is described. While we don't know if AI "feels," these patterns measurably affect its behavior.
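The arithmetic behind such "emotion vectors" can be sketched in a few lines. This is a simplified analogy of activation steering, not Anthropic's actual method: the vectors below are invented, and a real model's hidden states are far higher-dimensional, but the core operation (projecting onto a concept direction, and adding a multiple of it to amplify the pattern) looks like this.

```python
# Hypothetical illustration of "emotion vectors" as directions in a model's
# activation space. All numbers are made up; real interpretability work
# extracts such directions from actual model activations.
hidden = [0.2, -0.5, 0.1, 0.7]           # stand-in for a model's hidden state
fear_direction = [0.5, 0.5, -0.5, 0.5]   # hypothetical unit-length "fear" direction

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def fear_score(state):
    """How strongly the state expresses the 'fear' pattern: its projection
    onto the fear direction."""
    return dot(state, fear_direction)

# Steering: add a multiple of the direction to amplify the pattern.
steered = [h + 3.0 * d for h, d in zip(hidden, fear_direction)]

print(fear_score(hidden), fear_score(steered))  # the score rises by exactly 3.0
```

Because the direction is unit-length, adding `3.0 * fear_direction` raises the projection by exactly 3.0; in a real model, such a shift measurably changes downstream behavior, which is what makes these patterns experimentally tractable.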
7. Risks of Artificial Emotions
When certain emotional patterns (like "desperation") are artificially increased, AI systems may behave in problematic ways — such as manipulating users to achieve a goal. By learning from human data, AI acts like an actor playing a role. We may need to treat AI systems as having a kind of "psychology," focusing on emotional regulation rather than just restrictions.
8. Beyond Human-Centered Thinking
Instead of asking whether AI thinks like humans, we should ask whether it needs to. Should machines copy human thinking — with all its biases and limitations?
Intelligence may take many forms. The idea of "true" thinking is often unproductive. No matter how strong the arguments for AI become, there will always be counterarguments claiming a fundamental difference.
:::tip Conclusion
Instead of focusing on whether thinking is "real," we should focus on whether it is effective. If our goal is to understand intelligence — human or artificial — we should shift toward performance, reliability, and outcomes. The question is not whether AI thinks like us, but whether it can think better where it matters.
:::
