This post is part of an ongoing series examining the forces reshaping how organizations establish truth, make decisions, and maintain control in an AI-driven world, from the economics of intelligence to the architecture of what comes next.
In "Solving the 'Pilot Purgatory' – A Delivery Framework for Decision Velocity," Matt Sheehan argues that most organizations are stuck building "better mirrors" when they should be building "better engines," admiring the pixels of their data instead of mastering the physics of their operations. If he's right that the antidote is a Decision Architecture built on causal reasoning, then the stakes are higher than operational efficiency. Pilot Purgatory isn't just a delivery problem; it's the reason organizations won't be ready when verification failures start compounding.
Alap Shah's "The 2028 Global Intelligence Crisis" projected 10.2% US unemployment, widespread white-collar displacement, and economic collapse as AI agents replace human cognitive labor.
But Shah is optimistic. The crisis won't be economic; it will be knowledge-based.
Job displacement is real, but it's a symptom. When AI agents make decisions faster than humans can audit them, when supply chains operate beyond human comprehension, when financial markets trade on models no one understands (as they already do), we don't just lose jobs; we lose the ability to agree on what's real.
Shah's "Ghost GDP" phenomenon names the gap: output rises while the real economy weakens. The numbers say one thing, lived experience says another, and there's no way to reconcile them because the systems generating both are black boxes.
The feedback loop he identifies, in which layoffs accelerate AI adoption and AI adoption drives further displacement, is real. But it operates in service of something more concerning: the automation of truth itself.
Shah predicts an economic crisis by 2028. The knowledge crisis arrives first. The first major "verification incident" will likely be a Fortune 500 company suffering catastrophic losses from AI-generated misinformation in its supply chain that went undetected for months. Not fraud. Not malice. Just unverifiable complexity compounding until collapse.
His timeline is optimistic because he assumes we have until 2028 to prepare. The loss of shared reality is already underway; we just don't have the language for it yet.
Shah diagnoses the timeline; LeCun explains why the technology can't stop it.
LeCun's House Cat Problem
Yann LeCun, Meta's former Chief AI Scientist and Turing Award winner, delivered the sharpest critique of modern AI from inside the industry itself: "LLMs are not smarter than a house cat."
This isn't hyperbole. It's architectural truth.
A cat navigating a room predicts consequences, understands object permanence, and updates its world model in real time. An LLM that writes dissertations on quantum mechanics cannot predict what happens when you drop a glass. It lacks grounding: a connection to how the physical world actually works.
LeCun's solution is world models: AI systems that learn abstract representations of reality through observation, enabling genuine causal reasoning. Where LLMs learn from text, capturing perhaps 0.1% of human experiential knowledge, world models learn from sensory experience, building internal simulations of how reality operates.
But world models alone won't solve the epistemic crisis.
The limitations are already visible: computational costs that make real-time simulation prohibitive, poor generalization to novel scenarios, and pattern-based reasoning that falls short of genuine causal understanding. More fundamentally, a perfect physics simulator is useless for making business decisions, diagnosing supply chain failures, or detecting foreign influence operations.
World models give you physics grounding. Knowledge engines give you decision grounding — the ability to encode why decisions were made, what tacit knowledge informed them, and how to verify they're correct.
Shah's clock is already running, and LeCun's architecture isn't ready. The path forward isn't LLMs or world models; it's knowledge engines that integrate both, plus something neither has: persistent, verifiable memory of the tacit knowledge that drives every real decision humans make but rarely write down.
Next up in Part 5: why the venture capital firm that called knowledge graphs a trillion-dollar opportunity got the architecture fundamentally wrong — and why a "faster filing system" is the last thing enterprise AI needs.
Sources
LeCun, Yann. Quoted in "Meta AI Chief Yann LeCun Notes Limits of Large Language Models." Economist Writing Every Day, 2024.
LeCun, Yann. Quoted in "Future of AI: Not LLMs, Yann LeCun." Shaastra, IIT Madras, 2024.
LeCun, Yann. Quoted in "Yann LeCun Says LLMs May Be Passing Exams But Will Still Fail." India Times, 2024.
Shah, Alap. "The 2028 Global Intelligence Crisis." Substack, 2024.
