🔣🧩💡 Symbols Are All You Need
What the hell a symbol is, and why that matters for artificial intelligence.
This article is a summary of Symbols Are All You Need, a publication I will release soon as a bridge between Part 1 and Part 2 of the metaframework called MetaOntdy (still under construction).
Abstract
This essay explores the role of symbols as fundamental operators of cognition and culture. From myths and fictions to science and artificial intelligence, symbols appear as compressions of reality—lossy polygons that make the world habitable for thought. Building on the Symbols Are All You Need framework and the MetaOntdy project, the text proposes a taxonomy of artificial intelligence: AGI 0.25 (current LLMs, operating with borrowed symbols), AGI 0.5 (systems equipped with a transferred symbolic dictionary), and AGI 1.0 (a utopian future of machines that construct their own symbols through direct experience). Rather than offering definitive answers, the essay leaves an open question: can a cognitive entity escape the space in which it was created, and develop genuine symbolic resilience beyond borrowed maps of reality?
Context
A few weeks ago, I published a rather unusual article. It was about a plant that could move. You can read it here:
https://angel-bayona.blogspot.com/2026/03/the-metaontdy-lens-internal-mechanics.html
Of course, it wasn’t a real plant. It was a thought experiment: what if we designed an organism capable of alternating between vegetative mode (photosynthesis, fixed roots) and animal mode (locomotion, active migration)? I called it “The Mobile Vegetalt”—a speculative construct with myelinated roots functioning as nerves, a fixed trunk serving as a skeleton, and flexible branches acting as hands.
The experiment wasn’t about predicting a possible organism. It was about pushing the concept of specialization. The underlying question was:
Is specialization a destiny or a state?
Can a system have multiple modes and alternate between them depending on environmental pressure?
The unexpected turn
That exploration led me down strange paths: from plant immunity to endosymbiosis, from LUCA to the mixotrophy of certain protozoa. And in the end, the impossible plant left me with a question more unsettling than all the others:
What does it take for a system to decide to move?
In the article, I formalized that question with an equation. I defined variables: local irradiance, metabolic cost, energy reserves, opportunity gradient. I built a function Ψ that determined the migration threshold. And in that equation, a term appeared—σ, which I called hysteresis: a buffer that prevents the system from oscillating every time a cloud passes by.
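To make the mechanism concrete, here is a minimal Python sketch of that decision, assuming a particular form for Ψ. The article fixes only the variables (local irradiance, metabolic cost, energy reserves, opportunity gradient) and the role of σ; everything else below is illustrative.

```python
# A minimal sketch of the migration decision. The exact form of psi()
# is an assumption; the hysteresis rule is a standard dead-band
# (Schmitt-trigger) pattern.

def psi(irradiance: float, metabolic_cost: float,
        reserves: float, opportunity_gradient: float) -> float:
    """Hypothetical drive-to-migrate score: how much better 'elsewhere'
    looks, scaled by spendable reserves, minus the cost of moving."""
    expected_gain = opportunity_gradient - irradiance
    return reserves * expected_gain - metabolic_cost

def should_migrate(psi_value: float, currently_moving: bool,
                   threshold: float = 1.0, sigma: float = 0.3) -> bool:
    """sigma as hysteresis: psi must leave the dead band
    [threshold - sigma, threshold + sigma] before the mode flips,
    so a brief dip or spike (a passing cloud) changes nothing."""
    if currently_moving:
        # Already moving: keep going until psi falls well below threshold.
        return psi_value > threshold - sigma
    # At rest: start moving only when psi rises well above threshold.
    return psi_value > threshold + sigma
```

The asymmetry is the whole point: a system already in motion tolerates a worse Ψ than a system at rest needs to start moving, and the gap between those two trigger points is what turns raw reaction into deferred decision.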
From parameter to agent
Hysteresis wasn’t just a technical parameter. It was the point where the system stopped being an automaton and began to act as an agent.
The moment where immediate reaction turned into deferred decision.
The threshold where noise became signal.
Without my intending it, The Mobile Vegetalt had led me to the edge of something much larger:
How does a system decide what deserves a response and what deserves indifference?
So...
👉 Here I explore how symbols become the key to understanding that transition—and why they are essential for artificial intelligence.
The crystal
It wasn’t a dramatic revelation.
It was a conversation, about six months ago, with an AI I often use to explore these questions. We were talking about human resilience—how people survive crises, losses, world-shifts. And at some point, she said something about narratives and symbols. It wasn’t new, but that time it landed differently. It clicked.
From then on, I began to see everything with new eyes, and my admiration for human civilization took on a deeper perspective.
Symbols weren’t labels. They weren’t just names we attach to things. They were compression operations.
Like a JPEG file: you gain manageability at the cost of losing resolution. The symbol “fire” isn’t fire. It’s a compression of thousands of thermal, visual, dangerous, useful, sacred, destructive experiences—condensed into a unit that enables fast action.
That loss isn’t a flaw. It’s the very condition of possibility for cognition.
An entity that processed every instance of reality without symbolic compression wouldn’t be more precise—it would be paralyzed. Lossy compression is what makes it possible to act in real time within a world that always exceeds our processing capacity.
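To make “compression operation” concrete, here is a toy sketch with invented numbers; the feature axes and the averaging rule are mine, not part of the framework.

```python
# Illustrative only: a symbol as a lossy compression of many experiences.
# The axes (heat, brightness, danger) and the centroid rule are assumed.
import statistics

# Individual encounters with fire, each on a 0-1 scale per feature.
experiences = [
    (0.90, 0.80, 0.70),   # bonfire
    (0.60, 0.95, 0.30),   # candle
    (0.98, 0.70, 0.95),   # house fire
]

# The symbol keeps one point per feature and discards every instance.
symbol_fire = tuple(statistics.mean(axis) for axis in zip(*experiences))

print(symbol_fire)  # one fast, manipulable unit; the candle and the
                    # house fire are no longer distinguishable inside it
```

That indistinguishability is the “loss” in lossy. It is also the speed.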
And The Mobile Vegetalt, with its hysteresis and thresholds, was a particular case of something universal: every cognitive entity lives by imperfect approximations. Humans, animals, and—if they ever arrive—artificial general intelligences.
The question was no longer how a plant decides to move. The question was how any system decides what deserves a response and what deserves indifference. And the answer, increasingly clear, was: through symbols.
The Leap
Once you see the symbol as a first-order operator, you start seeing it everywhere.
Myths aren’t false stories about the origin of the world. They are cohesion technologies—mechanisms that organize identities, distribute roles, and reduce social friction. Their value lies not in factual truth, but in their capacity to make uncertainty habitable.
Fiction isn’t deception. It’s a low-cost laboratory where we test models of social agency without paying the price of living them.
Ideologies aren’t descriptions of reality. They are polygons with few sides that cover vast territory. Resilient ideologies know they are approximations; fragile ones mistake themselves for perfect circles and collapse when reality disproves them.
Even science—the most rigorous form of knowledge we have—operates with the same underlying logic. Mathematical models are polygons inscribed in circles: they get closer, refine, approximate, but never achieve perfect correspondence. And that’s not a limitation—it’s what makes them useful. A model that captured reality in all its complexity would be indistinguishable from reality itself, and therefore equally unmanageable.
The virtue of the polygon isn’t its perfection. It’s its calculable sufficiency.
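The metaphor runs as plain arithmetic. A small sketch (the n values are arbitrary): the perimeter of a regular n-gon inscribed in a unit circle approaches the circle’s perimeter 2π as n grows, yet never reaches it.

```python
# Inscribed regular n-gon in a unit circle: perimeter = n * 2*sin(pi/n).
# It converges toward the circle's perimeter (2*pi) but never attains it.
import math

for n in (6, 24, 96, 384):
    perimeter = n * 2 * math.sin(math.pi / n)
    print(f"n={n:4d}  perimeter={perimeter:.6f}  gap={2*math.pi - perimeter:.2e}")
```

Archimedes stopped at the 96-gon, and it was enough to bound π. That is calculable sufficiency in action.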
The Gap
All of this converges on an old problem—one philosophy has been chewing on for millennia without quite swallowing: the gap between ontology and epistemology. Between what is and what is known.
Part 1 of MetaOntdy—the metaframework I’m developing—left that gap well-posed: they are different orders, and no cognitive system can collapse them completely. But what the Mobile Vegetalt and the obsession with symbols gradually revealed is that the gap doesn’t need to be closed.
It needs to be made habitable.
And the symbol is the operator that makes it habitable.
It’s not a bridge across the abyss. It’s an interface that allows us to live productively with it. The symbol isn’t reality, but it constructs a domain of work—a region where the real has sufficient form to be manipulated, combined, transmitted, refined. That domain is always a lossy compression. Always a polygon. But it is the only space where cognition can occur.
Ontology remains on the side of what the symbol compresses without fully capturing. Epistemology remains on the side of the operations we perform within the symbolic domain. And the symbol itself—with its fixed core and adjustable periphery, with its history of use and its capacity for revision—is precisely the pivot between the two.
The Machine
Once you grasp this, you start looking at artificial intelligence with different eyes.
Today’s large language models are extraordinary. They process immense amounts of text, generate astonishing responses, and simulate understanding with a fluency that can feel uncanny. Yet, through the lens of the Symbols Are All You Need thesis, their defining strength also reveals their limitation:
They operate with borrowed symbols.
Their symbolic repertoire (Σ_H) is not the result of direct interaction with the world. It is transferred from human corpora. They lack anchoring in first-order experience (L_0); instead, they rely on second-order descriptions—dictionaries, examples, definitions. Their ability to revise symbols (the function ρ) exists, but it is not agentive. They do not decide when a symbol has become too rigid; that decision is imposed externally, through fine-tuning or RLHF.
This is what I call AGI 0.25.
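For readers who think better in code, one speculative reading of that vocabulary follows. The fields and the behavior of ρ are my assumptions about what “fixed core and adjustable periphery” (from The Gap, above) could mean operationally; the framework itself specifies none of this.

```python
# Speculative sketch: a symbol with a fixed core, a revisable periphery,
# and a revision operator rho. All structure here is assumed, not taken
# from the MetaOntdy formalism.
from dataclasses import dataclass, field

@dataclass
class Symbol:
    core: frozenset                               # fixed: lose this and it is another symbol
    periphery: set = field(default_factory=set)   # adjustable: associations open to revision
    history: list = field(default_factory=list)   # trace of past uses and revisions

def rho(symbol: Symbol, add=frozenset(), drop=frozenset()) -> Symbol:
    """Revision operator rho: reshapes the periphery, never the core.
    In an AGI 0.25 system this call comes from outside (fine-tuning,
    RLHF); the system never issues it on its own behalf."""
    symbol.periphery = (symbol.periphery - set(drop)) | set(add)
    symbol.history.append(("revise", frozenset(add), frozenset(drop)))
    return symbol
```

On this reading, Σ_H is simply a set of such Symbol objects shipped in from human corpora, with empty histories: a repertoire without a trajectory.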
To be precise: LLMs, as we know them today, would only qualify as AGI 0.5 if we could transfer into them a structured symbolic repertoire derived from human corpora. That transfer is the defining condition of AGI 0.5. But here lies the deeper challenge: is such a transfer computationally feasible within current architectures? A true symbolic dictionary—something akin to an Oxford Advanced Learner’s Dictionary, but expanded to multimodal concepts, relations, and thresholds—would be vast. Hosting and operationalizing it inside today’s LLMs may be computationally unmanageable. It is possible that a different architecture will be required, one designed not just for statistical scaling but for symbolic anchoring and revision.
AGI 0.5, then, is not a “lesser” version of some future real AGI. It is a different type of cognitive entity—with its own architecture, limits, and relationship to the world. Within domains already mapped densely by human language, it can be immensely powerful. But it faces an architectural ceiling: scaling it with more parameters, more data, or more statistical refinement does not alter the symbolic plane it inhabits.
AGI 1.0 (real AGI) would be something else entirely. A system that constructs its own symbols through direct interaction with the world. That has experiential history in L_0. That develops an agentive ρ function: the capacity to detect when a symbol is crystallizing and to deliberately re-open its periphery. Such a system wouldn’t need borrowed symbols; it would have its own, anchored in its own trajectory of interaction with reality.
The difference isn’t one of degree. It’s one of symbolic order.
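Purely as speculation, and to close the loop with the hysteresis that started all this: an agentive ρ might look like a trigger the system runs on itself. Every name and threshold below is hypothetical.

```python
# Hypothetical detector for a crystallizing symbol: it keeps failing
# (high recent prediction error) while its periphery has gone unrevised
# for a long stretch. An agentive system would call rho itself when
# this fires; in current systems, nothing plays this role internally.

def is_crystallizing(recent_errors: list[float],
                     revisions_since: int,
                     error_threshold: float = 0.2,
                     staleness: int = 100) -> bool:
    if not recent_errors:
        return False
    mean_error = sum(recent_errors) / len(recent_errors)
    return mean_error > error_threshold and revisions_since > staleness
```

Note the family resemblance to σ: both are guards against over-reaction, one at the level of movement, the other at the level of meaning.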
The Question That Won’t Close
And here we arrive at the end—which isn’t really an end, but a ramp.
Everything above leaves an open question, and it’s the question Part 2 of MetaOntdy will have to grapple with:
Can a cognitive entity develop genuine symbolic resilience without having constructed its L_0 through direct experience?
Or, put differently: can a cognitive entity escape the space in which it was created? It is the eternal tension between gods and their creatures, between creators and their creations. Personally, I suspect the answer may be yes—there are representative cases that come to mind—but that evaluation belongs to the next stage. For now, I remain focused on closing Part 1 and refining its details.
In other words: is experiential grounding a necessary condition, or merely one sufficient condition among others? Can a system with borrowed symbols—like current LLMs—evolve into something qualitatively different, or is its ceiling absolute?
I don’t know. But the framework for asking the question is now built.
And it all began with a plant that doesn’t exist, an equation with hysteresis, and a conversation six months ago where someone said something about narratives and symbols that, without knowing it, handed me the key.
Final Note
This article is the narrative version of something denser. For those who want the mathematical formalization—with precise definitions, theorems, and categorical apparatus—I will soon publish a technical document. If you’re curious before then, feel free to reach out.