The dependency trap
Are we building learning systems so good at removing friction that they're removing the struggle that actually builds thinking skills?

In my last post, I wrote about how we're building educational AI that optimizes for information delivery instead of learning. But there's a deeper problem lurking beneath the surface, one that won't show up in engagement metrics or completion rates.
So, imagine this. Your 10-year-old sits down for tonight's math session with their perfectly personalized learning system. It knows exactly where they struggled yesterday: those tricky word problems about rates and ratios. It serves up a precisely calibrated problem, hard enough to feel challenging but not so hard that they get frustrated.
When they get stuck, they ask for help. The system breaks it down step-by-step. The child follows along, gets the answer right, feels successful. The progress bar fills a little more. Another concept "mastered."
Everyone's happy. The kid feels smart, the parent sees measurable progress, the algorithm has optimized for engagement. But something crucial just didn't happen.
The child never had to sit with confusion. Never had to figure out what specifically was bothering them about the problem. Never developed their own approach or recognized their own patterns of thinking.
They just learned to be really good at asking for help.
The invisible erosion
This is the dependency trap that few people are talking about as we rush toward perfectly adaptive education. It's not that these systems don't work; if anything, they work too well. They're so good at removing friction from learning that they're removing the very friction that builds thinking skills.
Real learning happens in the struggle. When a kid stares at a problem for five minutes, getting increasingly frustrated, something important is developing. They're learning to tolerate confusion. They're figuring out how to break down their own thinking. They're developing what researchers call "productive failure" - the ability to learn from not knowing.
Adaptive learning systems optimize all of that away. Why sit with confusion when you can get an immediate explanation? Why develop your own problem-solving approach when the system has a proven method? Why learn to formulate questions when the algorithm already knows what you need?
The result feels like learning, but it's something else entirely: learning to be dependent.
Scale makes it dangerous
Luis von Ahn, Duolingo's CEO, recently shared his vision for AI in education: schools will still exist, but mostly for childcare. The actual teaching will be done by algorithms, because "it's just a lot more scalable to teach with AI than with teachers."
Picture his scenario: a classroom where each student works through their personalized curriculum while a human supervisor keeps order. No more teachers noticing when a student is becoming overly dependent on help. No more humans to say "try it yourself first" or "sit with that confusion a little longer."
Just 30 kids, each getting perfectly optimized assistance, never developing the muscle of independent thinking.
This isn't some distant future. Companies like Squirrel AI are already doing this at scale. Their systems can identify knowledge gaps with precision and serve targeted practice immediately. Students never encounter the productive struggle of figuring out what they don't know - the algorithm already knows and serves the solution.
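Strip away the machine-learning polish and the core loop is simple. Here's a generic sketch of the pattern, not any vendor's actual code; the skills, scores, and problems are invented for illustration:

```python
# A generic sketch of the pattern, not any vendor's actual system: the
# mastery scores, skills, and problem bank below are invented for illustration.
mastery = {"fractions": 0.85, "ratios": 0.40, "unit conversion": 0.70}

practice_bank = {
    "fractions": "Simplify 18/24.",
    "ratios": "A recipe uses 2 cups of flour for every 3 cups of milk. "
              "How much flour goes with 9 cups of milk?",
    "unit conversion": "Convert 2.5 hours to minutes.",
}

# The algorithm locates the gap...
weakest_skill = min(mastery, key=mastery.get)
# ...and serves the remedy before the learner ever has to name it themselves.
next_problem = practice_bank[weakest_skill]

print(f"Detected gap: {weakest_skill}")
print(f"Serving: {next_problem}")
```

The learner never has to ask "what am I actually confused about?" The system has already answered on their behalf.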
The GPS effect
We've seen this before. GPS made navigation so convenient that we stopped developing spatial awareness. People who grew up with turn-by-turn directions often can't navigate without them. They know how to follow instructions but not how to find their way.
Adaptive learning systems are doing the same thing to thinking skills. Kids are becoming excellent at following algorithm-generated problem-solving steps but terrible at developing their own approaches. They can execute the system's methods perfectly, but when they encounter something novel - or when the system isn't available - they're lost.
The dependency develops gradually, almost invisibly. It's not that kids suddenly can't think. They just get incrementally worse at the hard parts of thinking: sitting with uncertainty, formulating their own questions, recognizing the boundaries of their understanding.
You can see this happening already with adults and AI tools. Cut someone off from their LLM for a day and watch their workflow collapse. We've become so accustomed to immediate assistance that working without it feels broken, even for tasks we used to handle fine on our own.
Why the market won't fix this
The worst part is that market forces push toward dependency, not away from it. Learning platforms that make kids struggle will lose users to ones that provide immediate help. Parents will choose systems that show fast progress over ones that create productive frustration.
Success metrics make it worse. We measure engagement, completion rates, user satisfaction. A kid who's become dependent on algorithmic help will show excellent metrics - they're engaged, they complete lessons, they get answers right. The system looks like it's working perfectly.
But we're measuring the wrong thing. We're optimizing for immediate success rather than long-term thinking ability.
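A toy example makes the gap concrete. Suppose, hypothetically, we also logged whether a learner attempted each problem before asking for help (the session records below are made up; the point is only that the metric we report and the metric that matters can diverge):

```python
# A toy illustration with hypothetical session records: completion looks
# perfect while independent attempts quietly collapse.
sessions = [
    {"completed": True, "attempted_before_asking_for_help": False},
    {"completed": True, "attempted_before_asking_for_help": False},
    {"completed": True, "attempted_before_asking_for_help": True},
    {"completed": True, "attempted_before_asking_for_help": False},
]

completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
unaided_rate = sum(s["attempted_before_asking_for_help"] for s in sessions) / len(sessions)

print(f"Completion rate:      {completion_rate:.0%}")  # 100% -- the dashboard looks great
print(f"Unaided attempt rate: {unaided_rate:.0%}")     # 25% -- the dependency underneath
```

The first number is the one that gets reported. The second is where the trap lives.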
The real test comes later
The dependency trap won't show up in elementary school math scores. It'll show up in high school when kids encounter genuinely novel problems. In college when they need to think originally. In their careers when they face challenges that can't be solved by asking the right question.
We might be accidentally training a generation that's excellent at using learning tools but poor at independent reasoning. They'll know how to get answers but not how to think through problems. They'll be users, not thinkers.
Good teachers have always known this. They deliberately let kids sit in confusion longer. They give partial help, not full solutions. They ask questions instead of providing answers. They build tolerance for not knowing as much as they build knowledge itself.
Adaptive systems could theoretically do this too. But they won't, because confused users abandon apps. Struggling students leave bad reviews. The market rewards immediate satisfaction, not long-term thinking development.
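The frustrating part is that none of this would be hard to build. A minimal sketch of a "struggle first" hint policy, with invented timings, thresholds, and wording, fits in a dozen lines:

```python
import time

# A minimal sketch of a "struggle first" hint policy. The five-minute delay,
# the hint cap, and the wording are invented; the point is that partial,
# delayed help is straightforward to encode.
MIN_STRUGGLE_SECONDS = 5 * 60
MAX_HINTS = 2

def respond_to_help_request(problem_started_at, hints_already_given, partial_hints):
    elapsed = time.time() - problem_started_at
    if elapsed < MIN_STRUGGLE_SECONDS:
        # Push the confusion back to the learner instead of dissolving it.
        return "Try it yourself first. What exactly is confusing you here?"
    if hints_already_given >= min(MAX_HINTS, len(partial_hints)):
        return "Walk me through your approach so far; the next step is yours."
    # A nudge, never the full worked solution.
    return partial_hints[hints_already_given]
```

The barrier isn't technical. It's that a system behaving this way looks worse on every metric the market rewards.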
What we're losing
Learning to think isn't just about solving problems. It's about developing the confidence to tackle the unknown. The patience to work through confusion. The skill to recognize what you don't understand. The creativity to try multiple approaches.
These aren't nice-to-have soft skills. They're the foundation of independent thinking. And we're systematically removing opportunities to develop them in the name of personalized, efficient education.
The irony is bitter. In trying to make learning more accessible, we might be making real thinking less accessible. In optimizing for immediate success, we might be undermining long-term capability.
The question we should be asking
The question isn't whether algorithms can teach better than humans. It's whether we're building tools that help kids become independent thinkers or dependent users.
Because in a world where AI can solve most problems instantly, the most valuable skill might be the one we're accidentally training out of our children: the ability to think for themselves.