"The question is not what you look at, but what you see." — Henry David Thoreau
In Philip K. Dick's Do Androids Dream of Electric Sheep?, the Voigt-Kampff test distinguishes humans from replicants—but by the novel's end, the protagonist can no longer tell which reality is authentic. In Asimov's Foundation, Hari Seldon encodes tacit knowledge about human behavior into mathematical models that reveal patterns invisible to individual consciousness.
We're living at the convergence of both stories.
By 2028, the mechanisms for establishing shared, verifiable reality will break down. Not because of political polarization or filter bubbles, but because the systems mediating what we know will be controlled by autonomous agents whose reasoning no human can audit or comprehend.
Machines already make most of our decisions. The question is whether those machines encode your reality or someone else's—whether you control what's knowable, or become trapped watching shadows, unable to distinguish projection from source.
This isn't about AI safety or job displacement. This is about who decides what's true when system complexity exceeds human comprehension.
When Shared Reality Collapses
In Plato's Cave, prisoners watch shadows on a wall and mistake them for reality—projections of objects they've never seen. One escapes, sees the actual objects, returns to tell the others. They don't believe him. The shadows are their only shared reality.
In 1931, Gödel proved that any consistent formal system powerful enough to express arithmetic contains true statements unprovable within the system itself. The prisoners can't determine which shadows are "real" without leaving the cave. The system can't validate itself from inside.
By 2028, we will have entered a cave where each person sees different shadows, with no way to determine which correspond to reality; social feed algorithms offer an early glimpse. Not because of polarization, but because the machines mediating what we know will reason in ways no human can audit, which is why even the people who build these systems describe them as "black boxes."
This is the real weapon of the AI age. Not misinformation—but the erosion of the possibility of shared, verifiable truth.
It's already happening. Algorithmic systems execute the majority of equity trades using models whose emergent interactions no one understands. No single person can verify supply chain integrity across 200+ countries. Generative AI produces content indistinguishable from authentic journalism. LLMs synthesize "answers" with no provenance chain.
Each system operates in its own reality. Each produces shadows that look real. And stepping outside the cave grows harder every year, because the cave increasingly encompasses everything we know. It isn't hopeless, though: there are steps we can take.
We're building the infrastructure to solve this problem at Accrete. If any of this resonates, we'd welcome the conversation.
Over the coming weeks, we'll walk through why shared reality is breaking down, who stands to control what replaces it, and what you can do about it. Next up: the difference between a machine that thinks and a machine that only looks like it does.
Works Referenced:
Dick, Philip K. Do Androids Dream of Electric Sheep? Doubleday, 1968.
Asimov, Isaac. Foundation. Gnome Press, 1951.
Hofstadter, Douglas R. Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books, 1979.
Plato. The Republic. Translated by Benjamin Jowett, Oxford UP, 1888.
Gödel, Kurt. "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I." Monatshefte für Mathematik und Physik, vol. 38, 1931, pp. 173–198.
Korzybski, Alfred. Science and Sanity: An Introduction to Non-Aristotelian Systems and General Semantics. International Non-Aristotelian Library, 1933.
Thoreau, Henry David. Walden. Ticknor and Fields, 1854.