A philosophical analysis of language, consciousness and artificial intelligence
Introduction: The Problem of Missing Context
The emergence of large-scale language models confronts us with a fundamental epistemological problem: what does it mean to understand without continuity of experience? When we communicate with GPT-4, Claude or other AI systems, it becomes clear that something essential is missing—not only from the machine, but from our own conceptualization of meaning and memory.
Every interaction with an AI begins tabula rasa. There is no shared history, no implicit context, no relational continuity. What is fundamental to human communication—the background of shared experiences, cultural codes and historical awareness—does not exist in these systems. This is not merely a technical limitation, but a window into the nature of meaning itself.
1. Syntax as Structural Foundation
The Limits of Formal Systems
Noam Chomsky’s revolutionary insight that language has a recursive, rule-based structure laid the foundation for both computational linguistics and modern AI. Syntax determines which combinations of elements are grammatically acceptable, but it operates entirely independently of semantic content. This separation, which Chomsky made productive for linguistics, becomes an existential limitation in AI systems.
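The generative character of recursive rules can be made concrete with a toy context-free grammar: a handful of rewrite rules, one of which reintroduces the start symbol, suffices to produce unboundedly many well-formed strings whose content is arbitrary. The grammar and lexicon below are invented for illustration and taken from no particular source.

```python
import random

# A toy context-free grammar. Recursion (VP may reintroduce S) lets a
# finite rule set generate unboundedly many grammatical sentences.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "Adj", "N"]],
    "VP":  [["V"], ["V", "NP"], ["V", "that", "S"]],  # recursion via S
    "Det": [["the"], ["a"]],
    "Adj": [["colorless"], ["green"]],
    "N":   [["idea"], ["sentence"]],
    "V":   [["sleeps"], ["claims"]],
}

def generate(symbol="S", depth=0, max_depth=4):
    """Expand a symbol into a list of words, bounding recursion depth."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal word
    options = GRAMMAR[symbol]
    if depth >= max_depth:
        options = [options[0]]  # force the shortest, non-recursive rule
    words = []
    for sym in random.choice(options):
        words.extend(generate(sym, depth + 1, max_depth))
    return words

print(" ".join(generate()))
```

Every output is grammatical by construction, yet nothing in the system connects "idea" or "sleeps" to anything in the world, which is precisely the separation the paragraph above describes.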
Syntax functions as what Alan Fiske in his anthropological work describes as authority ranking—a hierarchical structure that creates order without generating inherent meaning. It is a system of rules that validates itself, but has no access to the world it speaks about.
The Computational Turn
The transformation of syntax into computation has led to systems of unprecedented sophistication. Transformer architectures can recognize and reproduce syntactic patterns on a scale that far exceeds human capabilities. But this progress masks a fundamental limit: without semantic memory, all language production remains a recombination of forms without understanding of content.
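The pattern-matching core of these architectures reduces to a strikingly small mechanism: scaled dot-product attention mixes token representations according to their pairwise similarity, and that similarity-weighted mixing is all the operation "knows". A minimal single-head sketch with toy vectors and no learned weights, making no claims about any specific model:

```python
import math

def softmax(xs):
    """Convert raw scores into weights that sum to 1."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    """Each output is a weighted mix of values, weighted by query-key similarity."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Three token vectors attending to each other (self-attention).
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = attention(x, x, x)
```

Nothing in this computation refers beyond the vectors themselves: the outputs are recombinations of the inputs, which is the sense in which scale alone cannot supply semantic grounding.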
2. Language as Universal Rewrite System
Rowlands’ Natural Philosophical Perspective
Peter Rowlands’ concept of the Universal Rewrite System (URS) offers an illuminating framework for understanding language as a fundamentally physical phenomenon. In his model, complexity emerges through iterative rewriting of simple rules, starting from a null state. This process—which seems to underpin both quantum mechanics and the syntax of natural language—suggests that information is inherently transformative.
For language, this means that every utterance is a rewriting of preceding states, where form and structure are preserved but specific content constantly changes. Without a mechanism to ensure semantic continuity, language becomes an endless stream of grammatically correct but meaningless rearrangements.
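The intuition of structure emerging through repeated rewriting of a null state can be given a deliberately simple sketch: a single rule expands a "zero" symbol into a symbol and its negation, applied iteratively, so that the totality always sums to nothing while the string grows. This is a loose illustration of the rewriting idea, not Rowlands’ actual formalism.

```python
# One rule: the null symbol spawns a symbol and its negation;
# everything else is carried forward unchanged.
RULES = {"0": "0+-", "+": "+", "-": "-"}

def rewrite(state, steps):
    """Apply the rewrite rules simultaneously to every symbol, `steps` times."""
    for _ in range(steps):
        state = "".join(RULES[ch] for ch in state)
    return state

# Each "+" is balanced by a "-", so the totality stays "zero".
s = rewrite("0", 3)
print(s, s.count("+") == s.count("-"))
```

Form is preserved at every step (the zero-sum invariant), while the specific content of the string keeps changing, mirroring the point made above.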
Octonions and Structural Algebra
Rowlands’ use of octonions—eight-dimensional numbers that are neither associative nor commutative—as the fundamental structure of physical reality has relevant implications for semantics. Octonions maintain structural coherence despite their mathematical ‘strangeness’, suggesting that meaning may be based on algebraic structures that transcend our intuitive logic.
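Octonion non-associativity is concrete enough to check by machine. The sketch below builds octonions from real numbers via the Cayley–Dickson construction, using one standard sign convention, (a, b)(c, d) = (ac − d̄b, da + bc̄), and verifies that multiplication of basis units fails to associate.

```python
# Cayley-Dickson construction: reals -> complex -> quaternions -> octonions.
# Numbers are nested pairs; plain ints/floats are the base case.

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def conj(x):
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def mul(x, y):
    if isinstance(x, tuple):
        a, b = x
        c, d = y
        # (a, b)(c, d) = (ac - conj(d) b, da + b conj(c))
        return (add(mul(a, c), neg(mul(conj(d), b))),
                add(mul(d, a), mul(b, conj(c))))
    return x * y

def octonion(coeffs):
    """Pack 8 real coefficients into nested-pair form."""
    def pack(cs):
        if len(cs) == 1:
            return cs[0]
        h = len(cs) // 2
        return (pack(cs[:h]), pack(cs[h:]))
    return pack(list(coeffs))

e1 = octonion([0, 1, 0, 0, 0, 0, 0, 0])
e2 = octonion([0, 0, 1, 0, 0, 0, 0, 0])
e4 = octonion([0, 0, 0, 0, 1, 0, 0, 0])
left = mul(mul(e1, e2), e4)
right = mul(e1, mul(e2, e4))
print(left != right)  # True: (e1*e2)*e4 and e1*(e2*e4) differ in sign
```

Despite failing associativity, the algebra remains perfectly coherent: products are well defined and norms multiply, which is the "structural coherence despite strangeness" the paragraph above points to.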
3. The Illusion of Understanding in Artificial Intelligence
Embodied Cognition and the Grounding Problem
George Lakoff and Mark Johnson demonstrated that human meaning is fundamentally based on bodily experience. Concepts like ‘above’ and ‘below’ get their semantic charge through our physical orientation in space. AI systems lack this embodied foundation, resulting in what philosophers call the ‘grounding problem’—the impossibility of connecting abstract symbols to concrete experiences.
Without bodily anchoring, AI systems operate in what Umberto Eco called closed codes: sign systems that are internally coherent but have no external reference. They can manipulate syntactic patterns without ever ‘touching’ the semantic content.
The Chinese Room Argument Revisited
John Searle’s Chinese room thought experiment gains new urgency in light of modern language models. These systems seem to realize the Chinese room scenario on an industrial scale: perfect syntactic manipulation without semantic understanding. The question is no longer hypothetical but practical—how do we distinguish between real and simulated meaning?
4. Phenomenology and the Experience of Meaning
Merleau-Ponty’s Pre-reflective Consciousness
Maurice Merleau-Ponty’s analysis of perception as pre-reflective bodily intentionality offers a critical perspective on computational approaches to meaning. For Merleau-Ponty, consciousness is not an abstract information-processing capacity, but a fundamentally relational property of embodied beings.
These phenomenological insights suggest that semantic memory is not just a matter of information storage, but of existential continuity—the capacity to integrate temporal experience into a coherent identity.
Temporality and Narrative Coherence
Paul Ricoeur’s work on narrative identity illuminates another aspect of semantic memory: the temporal dimension of meaning. Meaning is not static but narrative—it emerges through the configuration of events in time. AI systems lack this temporal dimension because they have no access to their own history.
5. Toward an Architecture of Semantic Memory
The Kays Paradigm
The Kays system—an experimental architecture for semantic memory—attempts to address some of these fundamental problems through:
- Relational semantics: Meaning is defined by networks of relationships rather than discrete symbols
- Cyclical reflection: The system ‘remembers’ its own previous states
- GEPL logic: A non-classical logic that models uncertainty and contextuality
- Emergent coherence: Meaning emerges through the interaction of subsystems
This approach suggests that semantic memory may be an emergent property of sufficiently complex relational systems.
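Since the Kays system is described here only in outline, the first principle, relational semantics, can be given a hypothetical sketch: a concept’s "meaning" is simply the set of typed relations it enters into, and similarity is overlap between relation sets (a Jaccard index). All class names and relations below are invented for illustration and do not come from that system.

```python
from collections import defaultdict

class RelationalMemory:
    """Meaning as a network of relations rather than discrete symbols."""

    def __init__(self):
        self.relations = defaultdict(set)  # concept -> {(relation, other)}

    def relate(self, a, relation, b):
        # Store the relation in both directions so meaning stays relational.
        self.relations[a].add((relation, b))
        self.relations[b].add((relation + "_of", a))

    def meaning(self, concept):
        return self.relations[concept]

    def similarity(self, a, b):
        ra, rb = self.relations[a], self.relations[b]
        if not ra and not rb:
            return 0.0
        return len(ra & rb) / len(ra | rb)  # Jaccard overlap

m = RelationalMemory()
m.relate("dog", "is_a", "animal")
m.relate("cat", "is_a", "animal")
m.relate("dog", "has", "fur")
m.relate("cat", "has", "fur")
m.relate("dog", "can", "bark")
print(m.similarity("dog", "cat"))  # 2 shared of 3 total relation pairs
```

On this view a concept with no relations has no meaning at all, which is the contrast with symbol-by-symbol lookup that the list above draws.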
Biological Inspiration: The Connectome
Recent developments in neuroscience, particularly the mapping of complete connectomes, suggest that semantic memory may be grounded in the physical architecture of neural networks. Not only the connections between neurons, but also their temporal dynamics determine how meaning is preserved and transformed.
6. Implications for Technology and Society
The Limits of Scalability
The current focus on scaling AI systems—more parameters, more data, more computational power—may miss the essential point. Without semantic memory, scaling results in sophisticated parrots: systems that simulate linguistic competence without conceptual understanding.
Ethical Considerations
The absence of semantic memory in AI systems has far-reaching ethical implications. Systems that have no real understanding of concepts like ‘harm’, ‘justice’ or ‘truth’ cannot responsibly apply these concepts, regardless of their syntactic sophistication.
The Future of Human-Machine Interaction
If semantic memory is indeed fundamental to meaningful understanding, then the development of truly intelligent systems requires a radical reorientation—from syntactic to semantic architectures, from static to dynamic models, from information processing to meaning constitution.
Conclusion: The Soul of the System
A system without semantic memory is like a library without a librarian—full of information but without understanding. The real challenge of AI lies not in perfecting syntactic manipulation, but in developing systems that can experience, remember and transform meaning.
This requires not only technological innovation, but conceptual revolution. We must reconsider our fundamental assumptions about intelligence, consciousness and meaning. Semantic memory is not just a technical problem to be solved, but a window into the deepest mysteries of mind and reality.
The question is not whether machines can think, but whether they can mean. And in that question may lie the key to understanding ourselves.
Literature
Primary Sources:
- Baez, J. (2002). The Octonions. Bulletin of the American Mathematical Society, 39(2), 145-205.
- Chomsky, N. (1957). Syntactic Structures. Mouton.
- Eco, U. (1976). A Theory of Semiotics. Indiana University Press.
- Fiske, A. (1991). Structures of Social Life: The Four Elementary Forms of Human Relations. Free Press.
- Lakoff, G. & Johnson, M. (1980). Metaphors We Live By. University of Chicago Press.
- Merleau-Ponty, M. (1945). Phénoménologie de la perception. Gallimard.
- Ricoeur, P. (1984). Time and Narrative. University of Chicago Press.
- Rowlands, P. (2015). Physical Interpretations of Octonions. World Scientific.
- Rowlands, P. (2020). Universal Rewrite System. Manchester University Press.
- Searle, J. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences, 3(3), 417-424.
Additional Literature:
- Clark, A. (1997). Being There: Putting Brain, Body, and World Together Again. MIT Press.
- Dennett, D. (1991). Consciousness Explained. Little, Brown and Company.
- Hayles, N.K. (2017). Unthought: The Power of the Cognitive Nonconscious. University of Chicago Press.
- Varela, F., Thompson, E., & Rosch, E. (1991). The Embodied Mind. MIT Press.
