Learning How to Learn (With AI)
February 3, 2026

Anthropic just published a randomized controlled trial studying how AI assistance affects skill development in software engineers. The headline finding: participants using AI scored 17 percentage points lower on a mastery quiz than those who coded by hand, nearly two letter grades.
The reactions will write themselves. Some will read this as vindication: “See? AI makes us dumber.” Others will dismiss it: “We’ll adapt.” Both miss what the study actually found.
The study’s most interesting finding isn’t that AI hurts learning. It’s that how you use AI determines whether you learn at all.
The Research, Briefly
Anthropic recruited 52 software engineers and had them learn a new Python library (Trio, for asynchronous programming). Half could use an AI assistant; half coded by hand. Both groups were quizzed afterward on concepts they’d just used.
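If you haven’t used Trio, here’s a minimal sketch of the kind of code involved. This is my own illustration, not an exercise from the study:

```python
import trio

async def fetch(name):
    # Stand-in for real I/O, such as a network request
    await trio.sleep(0.1)
    print(f"{name} done")

async def main():
    # A nursery starts and supervises concurrent tasks; the async with
    # block doesn't exit until every task it started has finished.
    async with trio.open_nursery() as nursery:
        nursery.start_soon(fetch, "a")
        nursery.start_soon(fetch, "b")

trio.run(main)
```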
The AI group scored worse overall, especially on debugging questions—the ability to identify when code is wrong and why it’s failing. But within the AI group, outcomes varied dramatically based on interaction patterns.
Low-scoring patterns (average under 40%):
- Full delegation to AI
- Starting independently but progressively handing everything off
- Using AI to debug without building understanding
High-scoring patterns (average 65%+):
- Generating code, then asking follow-up questions to understand it
- Requesting code with explanations
- Asking conceptual questions while coding independently
That last group—the “conceptual inquiry” pattern—was the second-fastest overall, right behind full AI delegation. They got both speed and learning.
This Isn’t New
Here’s what struck me: we’ve known this about learning for decades.
Lectures are one of the least effective ways to transfer knowledge. Reading isn’t much better. We learn through struggle, application, and repetition. This is why a six-month coding internship often teaches more than a purely academic computer science degree: not because the degree lacks content, but because building real things with real stakes forces a different kind of engagement.
It’s why medical residencies exist. It’s why apprenticeships work. It’s why you remember the bugs you debugged at 2 AM but forget the tutorial you read last week.
AI doesn’t change this dynamic. It just makes it easier to skip the hard part.
The Wrong Question
Much of the discourse around AI and skills asks: “How does AI impact learning?”
This treats AI as the variable and learning as the constant. But learning was never a constant. We’ve always had choices about how much effort to invest, when to take shortcuts, and whether to optimize for speed or depth. AI just makes those choices starker.
The better questions are:
- What are the most effective ways to learn?
- What should we be learning?
- Given that AI tools are here to stay, how should we work?
These reframe the conversation from “is AI good or bad for learning” to “how do we design our work and development in an AI-augmented world.”
Judgment Is the Curriculum
In my last post, I wrote about how entry-level tech work is transforming—from grunt work to decision-making under uncertainty. The training ground has shifted from “write the boilerplate” to “evaluate whether the AI’s boilerplate fits the problem.”
The Anthropic research reinforces this. The skill most damaged by passive AI use was debugging: understanding when something is wrong and why. That’s judgment. That’s the thing that becomes more valuable as code generation gets easier.
So here’s the reframe: the curriculum for working in an age of AI isn’t “learn to code” or “learn to prompt.” It’s “learn to evaluate.” Learn to know when something is right. Learn to know when something is wrong. Learn to know when you don’t know.
This requires a different kind of practice. Not “generate and ship” but “generate, understand, then ship.” The participants who scored highest weren’t avoiding AI—they were using it as a comprehension tool, not just a production tool.
Intentional Friction
There’s a growing movement in software development around “learning modes” in AI tools. Claude Code has Learning and Explanatory modes. ChatGPT has a Study Mode. These are designed to add friction back in: to slow you down just enough to build understanding.
This is directionally right, but it can’t be the whole answer. Modes are opt-in. Under deadline pressure, who opts for the slower path?
The real answer is cultural. It’s about how teams and organizations value the development of their people alongside the delivery of their projects.
When I wrote about entry-level jobs growing up, I emphasized that agency without scaffolding is abandonment, but scaffolding without agency is busywork. The same applies here. Tools that force learning are paternalistic if they prevent you from getting work done. Tools that skip learning are negligent if they let you ship code you don’t understand.
The balance is in how we work together—junior and senior, human and AI. More pair programming, not less. More “let’s talk through your thinking,” not just code reviews after the fact. More explicit investment in understanding as a deliverable, not just working software.
What We Should Be Learning
If AI handles syntax and boilerplate, what should humans be getting better at?
- Systems thinking: Understanding how pieces fit together, not just what each piece does
- Failure analysis: Knowing what can go wrong and why. Not just “the test failed” but “the test failed because async operations don’t guarantee ordering, and I assumed they did” (see the sketch after this list)
- Problem framing: Asking the right question before generating any solution
- Quality judgment: Recognizing good from bad, elegant from hacky, appropriate from over-engineered
- Communication: Explaining your thinking to humans who need to extend or maintain your work
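That ordering pitfall in the failure-analysis bullet is worth seeing concretely. Here’s a minimal sketch in Trio (my own illustration, not from the study): two tasks started in order don’t necessarily finish in order.

```python
import trio

results = []

async def record(name, delay):
    await trio.sleep(delay)
    results.append(name)

async def main():
    async with trio.open_nursery() as nursery:
        # Started first, but sleeps longer, so it finishes second
        nursery.start_soon(record, "first", 0.1)
        nursery.start_soon(record, "second", 0.0)
    # Completion order follows the awaits, not the start_soon calls
    print(results)  # ['second', 'first']

trio.run(main)
```

Knowing why this prints ['second', 'first'] is the difference between “the test failed” and an actual diagnosis.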
None of these are new skills. They’re the skills that made senior engineers valuable before AI and will make them valuable after. What’s changed is that junior engineers need to develop them earlier, because the old stepping-stones have eroded.
The Real Insight
The Anthropic study concludes with an observation that stuck with me: “Productivity benefits may come at the cost of skills necessary to validate AI-written code if junior engineers’ skill development has been stunted by using AI in the first place.”
This is the trap. You get faster today by skipping understanding. Tomorrow, you can’t catch the AI’s errors because you never developed the judgment to recognize them. The speed compounds—until it doesn’t.
But here’s the flip side, which the study also shows: you can have both. The participants who asked conceptual questions, who used AI to deepen understanding rather than bypass it, moved fast and still learned. They got the productivity and built the skills.
It’s not AI versus learning. It’s passive AI use versus active AI use. Delegation versus collaboration.
The Work Now
So, given that AI tools are here to stay, how should we work?
Generate, then understand. Don’t ship what you can’t explain. If the AI wrote it and you can’t walk through why it works, you haven’t finished the task.
Ask why, not just what. When AI gives you code, ask for the explanation. When it suggests an approach, ask about alternatives. The extra minute builds the mental model.
Seek the struggle. Not artificially—there’s no virtue in suffering for its own sake. But when you hit a real obstacle, resist the urge to immediately hand it off. The debugging you do yourself is the debugging skill you develop.
Teach what you learned. Nothing solidifies understanding like explaining it to someone else. Pair programming, documentation, team discussions—these aren’t overhead, they’re how learning becomes durable.
Protect learning time. Not every task needs to be a learning opportunity. But some should be, explicitly. Build this into how you plan work.
The companies that figure this out will have something their competitors can’t easily buy: people who know how to think, not just how to prompt.
This post builds on ideas from Entry-Level Tech Jobs Aren’t Dying. They’re Growing Up. and The Year of (Your) Agency. The Anthropic research is available here. If you’re thinking about how to develop your team’s skills alongside AI adoption, I’d love to hear what’s working—reach out on LinkedIn or Bluesky.