
Why AI Can’t Learn Like You Do (And Why That Matters)


Andrej Karpathy said something in a recent interview on the Dwarkesh Podcast that stopped me cold: “When you’re reading a book, the book is a set of prompts for you to do synthetic data generation.” You’re not just passively absorbing words. You’re testing ideas against your experience. You’re arguing with the author in your head. You’re connecting what you’re reading to something you learned five years ago, or something that happened last Tuesday. That’s not reading. That’s thinking. And here’s the thing: AI doesn’t do this.

December 3, 2025

Pattern-Matching Isn’t Learning

LLMs are extraordinary pattern-matchers. They’ve seen trillions of tokens. They can predict what comes next with uncanny accuracy. They can write like Shakespeare, code like a senior engineer, explain quantum physics like a patient professor.

But they’re not learning the way you learned any of those things.

When GPT-5 reads a sentence, it’s predicting the next token. When you read a sentence, you’re doing something completely different. You’re:

  • Questioning whether you agree
  • Imagining alternative scenarios
  • Connecting it to lived experience
  • Generating your own examples to test the idea
  • Challenging assumptions
  • Building mental models

You manipulate information. AI consumes it. That difference might be everything.
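
To make that contrast concrete, here’s a minimal sketch of what “reading” looks like from the model’s side: a loop that scores candidate next tokens and appends the most likely one. The tiny vocabulary and hand-written scoring function are purely illustrative stand-ins for a real model’s learned weights.

```python
# Minimal sketch of greedy next-token prediction. The vocabulary and the
# scoring function are toy stand-ins for a real model's learned weights.

VOCAB = ["the", "book", "is", "a", "set", "of", "prompts", "."]

# Hand-written bigram preferences standing in for learned next-token scores.
BIGRAM_PREFS = {
    ("the", "book"): 2.0, ("book", "is"): 2.0, ("is", "a"): 2.0,
    ("a", "set"): 2.0, ("set", "of"): 2.0, ("of", "prompts"): 2.0,
    ("prompts", "."): 2.0,
}

def score_next(context: list[str], candidate: str) -> float:
    """Stand-in for the model's score of `candidate` given `context`."""
    prev = context[-1] if context else ""
    return BIGRAM_PREFS.get((prev, candidate), 0.1)

def generate(prompt: list[str], max_new_tokens: int = 6) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        # For the model, "reading" is just this: pick the token the scores
        # say is most likely to come next, then repeat.
        tokens.append(max(VOCAB, key=lambda t: score_next(tokens, t)))
    return tokens

print(generate(["the"]))  # ['the', 'book', 'is', 'a', 'set', 'of', 'prompts']
```

Nothing in that loop questions, imagines, or connects to last Tuesday. It only asks which token is most likely to come next.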

The Problem with Perfect Memory

Here’s the paradox: LLMs are too good at memorization.

If you train a model on a random sequence of numbers—completely meaningless data—it’ll memorize it after just one or two passes. A human couldn’t do that if you paid them. We’d forget it instantly.

But that human limitation is actually a feature, not a bug.

Because we’re so bad at memorizing random sequences, we’re forced to look for patterns that generalize. We extract principles. We build frameworks. We see the forest, not just the trees.

LLMs see every tree. In 4K resolution. From every angle.

Karpathy calls this the problem of the “cognitive core versus memory.” The models are distracted by all the facts they’ve stored. They lean on memory when they should be reasoning. They recite when they should be thinking.
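
To get a feel for how easily spare capacity soaks up pure noise, here’s a hedged little sketch: a tiny PyTorch model trained to reproduce a completely random sequence. The sizes, learning rate, and step count are arbitrary choices (and a toy this small needs more passes over the data than a frontier model would), but the point carries: meaningless data, stored verbatim.

```python
# Sketch: a small model memorizing a meaningless random sequence.
# Model size, learning rate, and step count are arbitrary illustrative choices.
import torch
import torch.nn as nn

torch.manual_seed(0)
seq_len, vocab = 64, 100
random_sequence = torch.randint(0, vocab, (seq_len,))  # pure noise: nothing here generalizes
positions = torch.arange(seq_len)

# One learned vector per position, projected to vocabulary logits:
# plenty of spare capacity to store the noise verbatim.
model = nn.Sequential(nn.Embedding(seq_len, 128), nn.Linear(128, vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(positions), random_sequence)
    loss.backward()
    optimizer.step()

recalled = model(positions).argmax(dim=-1)
print("exact recall:", bool((recalled == random_sequence).all()))  # typically True
```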

What’s Missing? The Reflection Loop

Think about the last time you learned something difficult. Really learned it, not just skimmed it. You probably didn’t just read it once and move on. You:

  1. Encountered the concept
  2. Tried to apply it
  3. Failed or succeeded in interesting ways
  4. Reflected on why it worked or didn’t
  5. Adjusted your mental model
  6. Tried again

That loop (encounter, apply, reflect, adjust) is how human intelligence compounds over time.

AI training has steps 1 and 2. Sort of. But steps 3-6? Mostly absent.

Current reinforcement learning tries to approximate this by running hundreds of attempts in parallel, then upweighting the trajectories that got the right answer. But as Karpathy puts it, that’s “sucking supervision through a straw.”

The model does all this work, maybe a minute of thinking, trying different approaches, and at the end it gets a single bit: right or wrong. That one signal then gets broadcast backward across everything it did.

It’s stupid. It’s noisy. A human would never learn that way.
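
For intuition about how thin that signal is, here’s a generic REINFORCE-style toy, not any lab’s actual pipeline: an attempt is sampled token by token, a verifier hands back one bit, and that single scalar scales the update for every token in the attempt, helpful and irrelevant alike.

```python
# Toy outcome-only policy update: one scalar reward is smeared across every
# token of the attempt. A generic REINFORCE-style sketch, not a real pipeline.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab, seq_len = 50, 20
policy_logits = torch.zeros(seq_len, vocab, requires_grad=True)  # stand-in for a policy network
optimizer = torch.optim.SGD([policy_logits], lr=0.1)

def rollout():
    """Sample one attempt token by token, keeping each token's log-probability."""
    tokens, log_probs = [], []
    for pos in range(seq_len):
        dist = torch.distributions.Categorical(logits=policy_logits[pos])
        tok = dist.sample()
        tokens.append(int(tok))
        log_probs.append(dist.log_prob(tok))
    return tokens, torch.stack(log_probs)

def grade(tokens) -> float:
    """Verifier stand-in: one bit for the whole attempt (here: did it end on token 0?)."""
    return 1.0 if tokens[-1] == 0 else 0.0

for step in range(200):
    tokens, log_probs = rollout()
    reward = grade(tokens)              # a single bit of supervision...
    loss = -(reward * log_probs.sum())  # ...broadcast across every token, useful or not
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The correct final token gets a bit more probable, but so does whatever else
# happened to appear in the lucky attempts: that's the noise.
print(F.softmax(policy_logits[-1], dim=-1)[0].item())
```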

Does This Mean AI Hits a Ceiling?

Some people think yes. They believe that without the fundamental learning mechanism humans have—the ability to manipulate information, generate synthetic examples, build culture—AI can’t surpass human-level intelligence no matter how big the models get.

I’m not sure I buy that completely. But I do think it means the path is different than most people assume.

We’re not going to scale our way out of this by making models 10x bigger. We need them to learn differently. To think, not just predict. To build, not just pattern-match.

What This Means for You

If you’re building with AI right now, whether at a company, a startup, or just in your daily work, this matters.

It means:

  • Don’t expect the AI to learn from interaction the way a human would. It won’t remember your conversation tomorrow. It won’t connect what you told it last week to what you’re saying now. It’s starting fresh every time (the sketch after this list shows why).
  • Treat it like a savant, not a collaborator. It has perfect recall of billions of documents but can’t manipulate information the way you can. You have to be the one doing the synthesis, the connection-making, the strategic thinking.
  • The taste gap isn’t going away. We need to be the creatives: because AI learns by pattern-matching, it will always default to the most common patterns. It takes a human to know when to break them.
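
On the first point, here’s a minimal sketch of why the model “starts fresh.” The chat function is a hypothetical stand-in for whatever model API you use; the only memory that exists is the message history you choose to resend.

```python
# Why the model "starts fresh": nothing persists between calls unless you resend
# it. `chat` is a hypothetical stand-in for whatever model API you actually use.

def chat(messages: list[dict]) -> str:
    """Hypothetical model call: the model sees only `messages`, nothing else."""
    return "(reply based solely on the messages passed in)"

# Monday
chat([{"role": "user", "content": "Our launch is codenamed Bluebird."}])

# Tuesday: a brand-new call. Monday's exchange is gone unless you include it.
chat([{"role": "user", "content": "What is the launch codenamed?"}])  # the model has no idea

# The only "memory" is the history you keep and resend yourself.
history = [
    {"role": "user", "content": "Our launch is codenamed Bluebird."},
    {"role": "assistant", "content": "Got it."},
    {"role": "user", "content": "What is the launch codenamed?"},
]
chat(history)  # now the fact is inside the prompt, so the model can use it
```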

The models will get better. They’ll get faster, cheaper, more capable in narrow ways.

But until they learn like we do, until they can read a book and argue with it, generate their own examples, build on each other’s ideas, they’re nowhere near replacing human intelligence.

And that means the humans who know how to think, how to connect ideas, how to manipulate information instead of just consuming it? Those are the ones who’ll stay indispensable.

Written by Nina Amjadi, program director for the Content Engineering program.