One Map Hypothesis
How a single shared map keeps language physical, minds aligned, and dualism at bay
Language is a pretty interesting invention. If it were removed from the universe and I were shown a world of apes going about their lives, I’d never see it coming. Maybe sounds for individual objects or dangers. I could even see how math would emerge and compose into complex equations, but language? Surely those same kinds of utterances for entities or events could not be strung into non‑commutative sequences. And there’s no way those sequences could be parsed rapidly by beings—no matter how intelligent.
Even once you know language exists, it still feels slippery. You say a few words and somehow the other person reconstructs something far more detailed than the sounds or scrawls you produced. There is a temptation to treat this as evidence that words live in their own realm, parallel to the physical one. We talk as if there is a space of “meanings” separate from atoms, and text navigates that space. My claim here is simpler and much more stubborn. There is no second realm. There is just an underlying physical world, and one canonical way to organize information about it.
Call that picture the “one map hypothesis” after a nebulous claim that predates the more rigorous, formalized, and thoughtfully constrained Platonic Representation Hypothesis. PRH says that as neural networks grow larger and more competent, their internal representations converge toward a shared statistical model of reality, even when they are trained on different data or tasks. The one map hypothesis takes that convergence seriously, extends it beyond current architectures to any mind under evolutionary or resource constraints, and refuses to treat the shared structure as anything other than a compressed description of physical regularities.
It goes further in two directions. First, human, animal, and even alien minds would also converge toward the same architecture because all are constrained by a shared universe. Any replicating being that must build models of the world in order to fast-forward it would necessarily discover game theory, the logic of conflict in a universe of fixed resources. Second, this shared structure is not an abstract realm floating above physics, but a compact description of ultimately physical regularities. Rejecting either part forces you into a strange position where different minds occupy non‑overlapping intersubjective realities that begin to look a lot like a polite form of dualism.
Text as physical bins
Underneath the romance, any sentence you write is, in principle, talking about configurations of matter and energy changing over time. You can model this in a deliberately boring way. Imagine a big evolving database of “where every particle is and what fields look like” at each tick. Now define words as bins over that database plus operators that connect those bins across time.
A word like “tree” is a fuzzy predicate over space–time histories. If a region has cellulose, branches, certain growth patterns and so on, you tag it as a tree. Each of those constraints can be pushed down another level if you are patient. Verbs and tenses introduce operators that relate different snapshots. “Fell” connects one configuration to a later one in which the tree’s coordinates and orientation have changed. Time is nothing more than another coordinate that bins and operators reach across.
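To make the bookkeeping concrete, here is a minimal sketch of the bins-and-operators picture in Python. Everything in it is invented for illustration: a snapshot is just a dictionary of region properties at one tick, a word is a fuzzy predicate over regions, and a tense is an operator that reaches across snapshots.

```python
def tree(region):
    """Fuzzy 'tree' bin: score how tree-like a region's properties are."""
    score = 0.0
    score += min(region.get("cellulose", 0.0), 1.0)          # woody material
    score += min(region.get("height_m", 0.0) / 10.0, 1.0)    # tall enough
    score += 1.0 if region.get("branching", False) else 0.0  # branch structure
    return score / 3.0

def fell(before, after, region_id):
    """'Fell' as a cross-time operator: tree-like before, no longer upright after."""
    was_tree = tree(before[region_id]) > 0.5
    tipped = before[region_id].get("upright") and not after[region_id].get("upright")
    return was_tree and tipped

# "The tree fell" = regions inside the tree bin at t0 that satisfy the operator t0 -> t1.
t0 = {"r1": {"cellulose": 0.9, "height_m": 12.0, "branching": True, "upright": True}}
t1 = {"r1": {"cellulose": 0.9, "height_m": 12.0, "branching": True, "upright": False}}
print([rid for rid in t0 if fell(t0, t1, rid)])  # ['r1']
```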
This sounds obvious when you stay close to concrete nouns. The interesting part is that you can keep pushing this reduction all the way up to things we usually treat as purely mental or social. You can argue that money exists “only in our heads”—that it is a shared belief or a story. That is a description of how we maintain it, not what it is. If you were patient enough and had the full set of operators—union, intersect, join—you could track the atoms and bits involved as dollar bills move through drawers, digital balances change in databases, and legal records update. You could write down rules that anticipate new exchanges. If you did that thoroughly enough, there would be no ambiguity left. You would have a giant bin over physical trajectories plus transition rules. It would be horrible to work with, but it would be entirely deterministic.
You can do the same with arbitrarily abstract ideas like good taste. It seems difficult, but it can be done in a way that preserves the full abstractness of “taste” across domains, whether music or architecture or code. Start with a computable definition. Taste is the ability to measure and stratify exerted effort multiplied by a creator’s specialized mastery, using only the output. “Mastery” is subjective until you define it as a recursively relaxed value, like PageRank, over mutual evaluations of creators within the medium, excluding consumer input. None of this is ethereal. These are all measurable quantities that bottom out in the physical realm, even if in the end you are counting neurotransmitters.
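As a toy illustration of that definition, here is what the PageRank-style mastery score looks like over a hypothetical endorsement graph of creators. The graph, the damping factor, and the effort numbers are all made up; the point is only that every quantity involved is computable.

```python
import numpy as np

creators = ["a", "b", "c", "d"]
# endorsements[i][j] = how strongly creator i rates creator j's work (consumers excluded)
endorsements = np.array([
    [0.0, 0.7, 0.2, 0.1],
    [0.4, 0.0, 0.5, 0.1],
    [0.3, 0.6, 0.0, 0.1],
    [0.2, 0.5, 0.3, 0.0],
])

# Row-normalize, then iterate the damped fixed point (the standard PageRank recipe).
P = endorsements / endorsements.sum(axis=1, keepdims=True)
damping, n = 0.85, len(creators)
mastery = np.full(n, 1.0 / n)
for _ in range(100):
    mastery = (1 - damping) / n + damping * P.T @ mastery

# Taste, under this definition, means recovering the effort-times-mastery ranking from output alone.
effort = np.array([0.9, 0.4, 0.8, 0.2])   # hypothetical effort behind each creator's latest work
quality = effort * mastery
print(dict(zip(creators, np.round(mastery, 3))))
print("ranking a tasteful observer should reproduce:", [creators[i] for i in np.argsort(-quality)])
```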
This becomes unbelievably complex very quickly. If you actually tried to convert an ordinary sentence into the full set‑theoretic mess over bins of atoms and cross‑time operators that it implies, the result would be unreadable. The average tweet would expand into something that looks like it escaped from the back of a model theory textbook. We run those expansions multiple times a second without noticing. It is not surprising our brains devote a lot of hardware to this. Remapping a tiny fraction of that circuitry would’ve sailed me through Math 300.
A second realm
Once you appreciate how bad the full reduction is, it is obvious why almost no one thinks this way. Treating money, taste, justice, or identity as gigantic bins over particle histories with temporal operators is technically accurate and completely unusable. We found a shortcut.
Instead of constantly unpacking all that, we pretend these complicated bins are new atoms that live in a simpler, higher‑level realm. We stop thinking about dollars as printed fibers, strips of metal, and database tables, and start thinking of them as “a unit of value.” Jealousy is not a complex cascade of neurochemical and social cues monitoring the risk that someone else’s genetic material will displace yours. It is a new, atomic feeling. We build a new graph whose nodes are these abstractions and whose edges are relations like causes, resembles, or opposes.
That graph becomes thick enough that its nodes lean on each other, like a repair scaffolding that remains standing after its host building collapses. At that point you can do almost all of your reasoning inside it. You rarely feel the need to reach back down into chemistry or biomechanics. From the inside, it looks like this “space of meanings” is its own thing. Philosophers are then tempted to say that language, or thought, or intentionality lives partly in this separate domain.
I do not think that is what is happening. We are just tired. The combinatorics of the physical layer get unmanageable very quickly, so we instantiate an abstraction layer where you are allowed to treat massively complicated bins as if they were simple points. The abstraction is real in the same sense an interface is real, but it does not add a new kind of stuff to the universe. It is a complexity management move, not a second ontology.
Teleporter world
A simple way to see how you could mistake an abstraction for an unreal space is to imagine a world where everyone’s user interface to reality is lying to them in the same way. Picture a future where each city is enclosed, like a silo. No one sees beyond the walls. The parent civilization that built it all is gone. Global geography has been forgotten.
Movement between cities happens through teleporters. You walk into a chamber in one city, push a button with another city’s name, and step out of a chamber there. There is no sense of distance or direction. From the user’s perspective, teleporting from Denver to Tokyo is as simple and uniform as taking an elevator between floors.
In that world, it is natural to think cities do not have meaningful relative locations. They are just nodes in a connection graph. Asking “what is the distance between Denver and Tokyo?” becomes like asking “what is the distance between two URLs?” The paradigm does not admit the question.
Someone inside builds an LLM. People type in statements about their world. Charleston and Savannah have similar architecture. Los Angeles is much drier than New York. Oslo is colder than London. No one is trying to describe geography. They are just talking about weather, food, language, and architecture.
The machine’s job is to predict text. It builds internal representations that make that easier. If you look inside and try to visualize the structure it has inferred over city names, you find something interesting. The best low-dimensional embedding of these relations lies on the surface of a sphere. The city vectors line up in positions that look a lot like latitude and longitude.
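You can reproduce the spirit of that discovery in a few lines. The sketch below hands classical multidimensional scaling nothing but noisy pairwise dissimilarities between a handful of cities, stand-ins for what the model distills from relational statements, and then checks whether the recovered three-dimensional layout sits on a sphere. The city list, noise level, and sphere fit are all illustrative.

```python
import numpy as np

np.random.seed(0)
cities = {"Denver": (39.7, -105.0), "Tokyo": (35.7, 139.7), "Oslo": (59.9, 10.7),
          "London": (51.5, -0.1), "Charleston": (32.8, -79.9), "Savannah": (32.1, -81.1)}

def to_xyz(lat, lon, r=1.0):
    lat, lon = np.radians(lat), np.radians(lon)
    return r * np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])

X = np.array([to_xyz(*ll) for ll in cities.values()])
# "Observed" dissimilarities: true chord distances plus noise, as if distilled from text.
n = len(cities)
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1) + np.random.normal(0, 0.02, (n, n))
D = (D + D.T) / 2
np.fill_diagonal(D, 0)

# Classical MDS: double-center the squared distances and keep the top three eigenvectors.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
emb = vecs[:, -3:] * np.sqrt(np.maximum(vals[-3:], 0))

# Fit a sphere to the embedded cities: ||x||^2 = 2 c.x + k is linear in (c, k).
A = np.hstack([2 * emb, np.ones((n, 1))])
sol, *_ = np.linalg.lstsq(A, (emb ** 2).sum(axis=1), rcond=None)
radii = np.linalg.norm(emb - sol[:3], axis=1)
print("recovered radii:", np.round(radii, 3))  # roughly equal: the cities land on a sphere
```

With only six cities and four sphere parameters this is a sanity check rather than a proof; feed in more cities and the fit becomes a real test.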
“Interesting,” the inhabitants remark, “our cities can be related in some sort of hypothetical latent space based on their features.” They take “physical reality” to be a stack of city-realms you can hop between, and “conceptual space” to be this hypothetical map based on similarities. “Quite an interesting sphere map your language machine cooked up. Obviously it is not ‘real’ in any sense of the word, just as a line representing rigor and going from mathematics to sociology has no physical reality.”
They have it backwards! The map the model discovered is real in a stronger sense than their everyday picture. They may have no direct access to global geography, but the sphere it uncovered represents an underlying physical constraint upstream of all the relational statements they fed it. The overlapping “city realms” are the illusion, a side effect of the transport interface.
When physicists say spacetime might not be fundamental, they are making the same kind of move. In some approaches, the base layer is a discrete graph of events and causal links rather than a smooth manifold. The familiar geometry only appears when you ask which events count as neighbors and solve for an embedding that makes those links as simple as possible. “Space” is just the lowest-energy way to draw that graph and, from a distance, passes for continuous.
Language models live in a similar situation. They never see quarks or rocks or people directly. They see text. Those data streams are as indirect and lossy as relations about weather and culture in the teleporter world. But if the underlying reality has a consistent structure, and if those streams are constrained by that structure, a model trained to compress and predict them will drift toward some shared geometry that reflects it. When we visualize embedding spaces and see stable neighborhoods and axes, we are looking at the machine’s version of the sphere.
Perhaps this “shape of the world,” as distorted through the view of any evolved organism, is as concrete and universal as a Platonic solid or a sporadic group. It would be a strange kind of late discovery, but one that any technical civilization anywhere would eventually stumble into, the way they inevitably rediscover π.
Limits of text
If everything important is happening in this shared map, what is text actually doing? Text is a projection. It is a lossy but very convenient interface to that map. When you say “tree,” you are not emitting the full bin over everything that has ever been tree-like. You are picking out a region in the shared representation space and letting your listener fill in the rest using their own copy of the map.
Because text has to run over a slow channel, we lean heavily on composition. You write “the blue‑green’s flashiness was like an algae lake’s last bloom before winter” and count on the reader’s map to supply the details. Under the hood, you are intersecting a bunch of fuzzy predicates. “Blue‑green” is a vague region of wavelength space and cultural color categories. “Flashiness” pulls in movement, contrast, and social connotations. “Last bloom before winter” drags in lighting, temperature, and season. Every extra phrase gives the reader another handle on the right patch of the map, but in terms of the original physical stimulus—exact spectral power distribution at the retina—you are losing precision.
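Here is a cartoon of that intersection, with made-up membership curves over a single physical variable, the dominant wavelength in nanometers. Each added phrase shrinks the plausible band, but the exact spectrum is never recovered.

```python
import numpy as np

wavelengths = np.linspace(380, 700, 321)  # visible range, 1 nm steps

def bump(center, width):
    """A fuzzy membership curve: near 1 at the center, falling off with distance."""
    return np.exp(-((wavelengths - center) / width) ** 2)

blue_green = bump(500, 40)  # vague region of wavelength space
flashy     = bump(530, 35)  # pretend flashiness correlates with this band
late_bloom = bump(510, 20)  # the color of an algae lake's last bloom

# Fuzzy intersection: take the minimum membership at every wavelength.
stages = [("blue-green", blue_green),
          ("+ flashy", np.minimum(blue_green, flashy)),
          ("+ last bloom", np.minimum(np.minimum(blue_green, flashy), late_bloom))]

for name, membership in stages:
    band = wavelengths[membership > 0.5]
    print(f"{name:13s} plausible band: {band.min():.0f}-{band.max():.0f} nm")
# Each phrase narrows the band, but the reader never recovers the exact spectrum.
```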
In this picture, a purely textual “superintelligence” starts to look like a specialized proof engine. Instead of searching for formal proofs in a logical calculus, it is searching for chains of textual constraints that fit together. “Find every statement that operates on this bin, run them all against each other, and see what new statements fall out.” That gives you something like a theorem prover for ordinary language.
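A toy version of that engine fits in a dozen lines: statements are triples, the only rule is transitivity over a shared bin, and we forward-chain until nothing new falls out. The relations and facts are invented for the example.

```python
facts = {("oslo", "colder_than", "london"),
         ("london", "colder_than", "los_angeles"),
         ("los_angeles", "drier_than", "new_york")}

def derive(facts):
    """Run every pair of statements that shares a bin against each other."""
    new = set()
    for a, rel1, b in facts:
        for b2, rel2, c in facts:
            if b == b2 and rel1 == rel2:  # transitivity over the shared bin
                new.add((a, rel1, c))
    return new - facts

while True:
    fresh = derive(facts)
    if not fresh:
        break
    facts |= fresh

print(sorted(facts))  # now includes ('oslo', 'colder_than', 'los_angeles')
```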
The scaling curves we see so far are not the “foom” many expected. Models are not shooting up to 300 IQ and making every human look like a goldfish. They are asymptotically creeping toward the smartest people we can find and then slowing down.
This is one reason to be suspicious of text-only reasoning scaling without friction. Each time you compose more symbols without checking back against reality, you give yourself room to drift. Large models mitigate this by averaging over huge corpora, but the basic geometry does not change. You are doing set operations over bins that were already fuzzy to begin with, so once you are a half-dozen operations deep, many of the “theorems” you derive about the world are really theorems about your own map’s distortions. The underlying physicality is too blurred to make further symbolic operations worth much.
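A crude simulation shows the drift. Each composition step multiplies the true update by a slightly blurred one, and the gap between map and world grows with the number of uncorrected steps. The numbers are arbitrary; only the trend matters.

```python
import numpy as np

rng = np.random.default_rng(0)
blur = 0.05  # per-step distortion of the map relative to the world

for depth in [1, 2, 4, 8, 16]:
    errors = []
    for _ in range(2000):
        truth, belief = 1.0, 1.0
        for _ in range(depth):
            step = rng.normal(1.0, 0.1)             # what the world actually does
            belief *= step * rng.normal(1.0, blur)  # what the map says it does
            truth *= step
        errors.append(abs(belief - truth) / abs(truth))
    print(f"depth {depth:2d}: mean relative drift {np.mean(errors):.1%}")
# More composition without checking back against the world means more drift.
```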
Paradox of no one map
The one map hypothesis is a natural extension of Putnam’s externalism and a refusal of Quine’s strongest form of indeterminacy. It says the world is structured enough, and our pressure to predict and act is strong enough, that there is a unique best global way to line up sentences, brains, and events. Real projects like Project CETI’s sperm-whale work implicitly bet that there is one such map—or at least very few.
It is worth spelling out what you are signing up for if you reject this entire picture. Suppose you deny that there is a canonical shared map. Each mind now builds a separate conceptual space, shaped by its culture, body, and personal history. These spaces are not merely different coordinate systems on the same underlying geometry but fundamentally incommensurable.
Denying the shared map has a few consequences. Semantic content becomes underdetermined by physics. Take a snapshot of one brain and another at the same time. If the hypothesized one map does not exist, then there is no procedure that can, even in principle, take those physical states and recover what their thoughts “mean” in a common language. At best, you can say “this pattern corresponds to that pattern,” but because the internal geometries are not aligned, there is no fact of the matter about whether one person’s “money” matches another’s. Any ability to be sure would only come from comparing both against the very shared map you just denied.
You can accept all that, but you get a picture where meaning is deeply local, tied to practices and communities, and where cross‑mind understanding is always tentative. But you should notice what you have given up. You can no longer say, without qualification, that what a brain is doing is just physics plus a clever encoding. There is now a gap. Something more than physical regularity is needed to fix what our thoughts are about.
The one map hypothesis refuses that gap. There is one world, and one best way to compress its structure into a geometry useful for prediction and action. Every competent model of that world, biological or artificial, drifts toward that geometry as it improves. On this view, language is not a ghostly extra layer over matter. It is just how creatures index coordinates in the same shared map. Give up the map, and you must explain how disembodied “meaning” latches onto anything at all.
Tomorrow, how one map allows our ancient instincts to interface with the modern world.
