Why Yann LeCun Is Right About LLMs and Why His Theory Is Still Incomplete
Let’s start with something you already know.
You have sat through a meeting where one person talked beautifully for twenty minutes and solved absolutely nothing. Big words. Calm voice. Slides that look expensive. Everyone nods. The meeting ends. Nothing changes.
- That person is not intelligent.
- That person is fluent.
Now imagine building a trillion-dollar industry around that guy.
That is roughly where AI ended up.
Language models talk well. Scarily well. They explain things. Summarize things. Apologize when they screw up. They sound thoughtful even when they are wrong. Humans fall for it because humans always fall for smooth talkers. We mistake confidence for competence. We always have.
Language models did not invent this problem. They exposed it.
If something speaks clearly, we assume it knows what it is doing. That instinct works okay with people. It works terribly with machines that have never touched the real world and have no idea why a dropped cup hits the floor instead of hovering politely.
This is where Yann LeCun shows up as the adult in the room.
He keeps saying the obvious thing nobody wants to hear: predicting the next word is not intelligence. Talking about the world is not the same as understanding the world. You can train a system on every book ever written and it still will not know what a chair is for unless it understands sitting, balance, gravity, and pain.
On that point, he is completely right.
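To see what "predicting the next word" actually means, here is a toy sketch. It is a bigram counter, nowhere near a transformer, and the corpus is invented, but the training objective has the same shape as the real thing: given what came before, score what comes next.

```python
# Toy sketch: next-word prediction is the entire training objective.
# A real LLM swaps these counts for a transformer, but the target is identical:
# P(next word | previous words). Nothing here knows what a cup or a floor is.
from collections import Counter, defaultdict

corpus = "the cup falls to the floor . the cup falls to the ground .".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` in the training data."""
    return following[word].most_common(1)[0][0]

print(predict_next("cup"))    # -> "falls"
print(predict_next("falls"))  # -> "to"
# The model "knows" cups fall only because the string said so.
# It has no model of gravity; change the corpus and cups will hover.
```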
Language models are like a guy who has read every fitness book ever written but has never lifted anything heavier than a coffee mug. Ask him about squats and he gives you a flawless explanation. Ask him to move a couch and he throws out his back.
Reality is the couch.
So the correction sounds obvious. Stop teaching machines to talk. Teach them to see. Teach them to watch the world. Teach them physics. Motion. Cause and effect. Build world models. Systems that learn the way animals do. By observation, not narration.
This is not philosophy. This is survival. Robots do not care how well you explain gravity. They care whether you predicted the fall correctly.
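Here is the other side of that correction, sketched under the simplest assumptions I could get away with: one dimension, hard-coded gravity, a fixed timestep. A real world model learns these dynamics from observation instead of having them typed in, but the job is the same: predict the next state of the world, not the next word.

```python
# Toy sketch of a world model's job: predict the next physical state.
# Real systems learn dynamics from observation; here the dynamics are
# hard-coded so the contrast with next-word prediction is plain.

GRAVITY = -9.81  # m/s^2
DT = 0.05        # seconds per prediction step

def predict_next_state(height, velocity):
    """One step of a falling-cup model: state in, predicted state out."""
    velocity += GRAVITY * DT
    height += velocity * DT
    return max(height, 0.0), velocity  # the floor stops the fall

height, velocity = 1.0, 0.0  # cup starts 1 m up, at rest
step = 0
while height > 0.0:
    height, velocity = predict_next_state(height, velocity)
    step += 1

print(f"cup hits the floor after ~{step * DT:.2f} s")  # ~0.45 s for a 1 m drop
# The test is not whether the system can explain gravity.
# The test is whether this number matches the world.
```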
So far, so good.
Then the correction goes off the rails.
Somewhere along the way, language stops being overhyped and starts being treated like poison. As if words themselves are the problem. As if once a system understands the world, talking becomes optional or even harmful.
That is where the theory breaks.
A system that understands the world but cannot plan is like a security camera.
Great eyesight. No idea what to do next.
Understanding without planning reacts fine and fails slowly. It handles the moment and forgets the future. It adapts locally and collapses globally. World models are excellent at perception. They are terrible at organization. They see what is happening and then wait.
Here is a simple picture.
Imagine a dog chasing a car.
The dog understands motion, speed, timing, and direction. It has a decent world model. Catching the car is not the problem.
Now imagine asking the dog where to drive once it catches it.
That is the missing layer.
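That layer is easy to sketch and easy to forget. The snippet below wraps a crude planner around a world model: it simulates each candidate action and keeps the one whose predicted outcome lands closest to a goal. The model, the actions, and the goal are all invented for illustration. The structure is the point: prediction answers "what happens if", planning answers "what for".

```python
# Toy sketch: a planner wrapped around a world model.
# The world model predicts "what happens if"; the planner supplies "what for".
# Everything here (state, actions, goal) is invented for illustration.

def world_model(position, action):
    """Predict the next position given an action. Stands in for a learned model."""
    return position + {"left": -1, "stay": 0, "right": +1}[action]

def plan(position, goal, horizon=5):
    """Greedy lookahead: simulate each action, keep the one nearest the goal."""
    actions = []
    for _ in range(horizon):
        best = min(("left", "stay", "right"),
                   key=lambda a: abs(world_model(position, a) - goal))
        position = world_model(position, best)
        actions.append(best)
    return actions

print(plan(position=0, goal=3))
# -> ['right', 'right', 'right', 'stay', 'stay']
# Drop the goal and the loop has nothing to minimize:
# perfect prediction, no direction.
```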
Humans make this obvious if you are willing to look. Yes, babies perceive before they speak. That fact gets dragged out every time someone wants to dismiss language. But babies are not running civilizations. Everything that scales human intelligence arrives after language locks in.
Planning across years. Laws. Blueprints. Contracts. Science. Cooperation among strangers who will never meet.
Intelligence shows up early.
Civilization shows up late.
Language did not create intelligence. Language multiplied it.
Language is where plans stop evaporating. It is where ideas become portable. It is how goals survive contact with time. Strip language out entirely and you get systems that can act but cannot organize their own thinking across days, let alone years.
This is the mistake baked into the current debate. People think they have to pick a side. Either language models are fake intelligence or world models are the future. That is like arguing whether your brain or your spine is more important. Remove either one and see how far you get.
Intelligence is a stack.
- At the bottom, you need grounding. Perception. Physics. Reality checks. This is where LeCun is absolutely right.
- Above that, you need planning. The ability to decide what should happen next, not just predict what will.
- Above that, you need memory. Continuity across time so decisions do not reset every moment.
- Above that, you need identity. A stable reference point that keeps the system from optimizing itself into nonsense.
Language lives up there. Not as poetry. As control.
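If you want the stack as something more concrete than a metaphor, here is one hypothetical wiring of it. Every class and method below is invented for illustration, not taken from any real system: grounding turns the world into state, planning decides the next action, memory keeps decisions from resetting, identity pins the goal, and language is the portable format the whole thing reports in.

```python
# Hypothetical sketch of the stack as code. None of these classes come from a
# real system; they only make the layering explicit.
from dataclasses import dataclass, field

@dataclass
class Grounding:
    """Bottom layer: perception and physics. Turns the world into state."""
    def observe(self) -> dict:
        return {"cup_height": 0.0, "cup_moving": False}  # stub observation

@dataclass
class Agent:
    grounding: Grounding
    goals: list[str]                                 # identity: a stable reference point
    memory: list[str] = field(default_factory=list)  # continuity across steps

    def plan(self, state: dict) -> str:
        # Planning: decide what *should* happen next, not just what will.
        return "pick_up_cup" if not state["cup_moving"] else "wait"

    def step(self) -> str:
        state = self.grounding.observe()
        action = self.plan(state)
        self.memory.append(action)  # memory: decisions do not reset
        # Language as control: the plan survives as a portable, inspectable string.
        return f"goal={self.goals[0]} action={action}"

agent = Agent(grounding=Grounding(), goals=["keep the table clear"])
print(agent.step())  # -> goal=keep the table clear action=pick_up_cup
print(agent.memory)  # -> ['pick_up_cup']
```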
Pure language systems float. They sound smart until reality slaps them. Pure world models grind. They see clearly and then stall. One talks without touching the world. The other touches the world without knowing what to do with it.
The uncomfortable truth is that language models revealed something real even while being wildly oversold. They showed that abstraction can emerge from pattern learning. That planning can appear without hand-coded rules. That symbols do not have to be carved by engineers. Throwing that away because chatbots were annoying would be lazy.
Chatbots were a demo, not a destination.
The future does not belong to systems that only talk. It also does not belong to systems that refuse to talk out of principle. It belongs to systems that can see the world, plan within it, remember across it, and remain coherent while doing so.
This is why LeCun is right and still incomplete.
- He is right to tear down the illusion that fluent machines are thinking machines.
- He is right to demand grounding, causality, and real world understanding.
Where the theory stops short is here.
Understanding the world does not produce intelligence unless the system can decide, remember, and remain itself across time.
Without that layer, you do not get AGI.
You get very sophisticated sensors.
Burn down the fantasy that talking equals thinking. It deserves to burn. Just do not confuse the fire with progress.
Intelligence begins when a system knows what to do next and why it should keep doing it.
If this line of thinking resonates and you want to go deeper into recursive intelligence, identity stability, and systems that actually compound instead of reset, explore the work at https://ernestoverdugo.com/recursion