
AI tutors are solving the wrong problem

Guess what. The part of education that needs automating isn't the teaching.


Your 10-year-old stares at their math homework, pencil hovering over a word problem about trains traveling at different speeds. They know they need help, but can't quite articulate what they don't understand. Is it the concept of relative velocity? The way the problem is worded? Or something more fundamental about how distance, time, and speed relate?

Meanwhile, across the country, an AI tutor waits patiently for the perfect question that would unlock the perfect explanation.

This is the central tension in how we're building educational technology: we keep trying to automate teaching when what students actually need is help learning.

The seductive promise of AI teachers

Everyone's building AI tutors with the same pitch: "Ask me anything, I'll teach you." The market opportunity is massive - projected to reach $20 billion by 2027 - and the promise feels revolutionary. Finally, every child could have access to infinite, personalized instruction.

But watch a real kid try to use these tools. They type "I don't get this" and receive a lengthy explanation that somehow misses exactly what's confusing them. Or they ask for help with homework and get a generic lesson that doesn't connect to what they're supposed to be learning.

The fundamental problem isn't technical; it's philosophical. We're optimizing for the wrong thing.

Learning isn't information transfer. It's not even problem-solving, exactly. Learning is the gradual construction of understanding through practice, struggle, and the slow recognition of patterns.

When a child finally grasps that fractions are really just another way to express division, that breakthrough doesn't come from a perfect explanation. It comes from encountering the same concept in different contexts until something clicks. The "aha" moment happens in the space between confusion and clarity, not in the delivery of information.

But our AI tutors are built on an information-delivery model. They assume learning happens when the right explanation meets the right question. Ask about photosynthesis, get an explanation of photosynthesis. Ask about fractions, get a lesson on fractions.

This misses how learning actually works: through repeated encounters with ideas in slightly different forms, each building on the last, until understanding emerges.

The "ask me anything" model puts enormous cognitive burden on the learner. To get help, a student must:

  • Recognize what they don't understand
  • Articulate that confusion clearly
  • Ask questions that will elicit useful responses
  • Navigate through explanations that may or may not address their actual confusion

These are sophisticated meta-learning skills that many adults struggle with. We're essentially requiring children to be expert questioners before they can access help with basic concepts.

Watch a struggling student work with a human tutor. The magic doesn't happen when the student asks the perfect question. It happens when the tutor notices the small hesitation before an answer, the way the student holds their pencil, the particular type of mistake they keep making. The tutor responds to signals the student doesn't even know they're sending.

A different path: test prep shows the way

Interestingly, one company has cracked this puzzle - but in a very specific domain. Riiid, a South Korean startup with $250M in funding, doesn't try to have conversations with students at all. Instead, they've built something more like a diagnostic engine.

Their system can predict a student's TOEIC score with 95% accuracy after just 12 questions. Not because it's great at explaining English grammar, but because it's learned to recognize patterns in how students think. It knows that someone who confuses present perfect with simple past will likely make specific types of errors on listening comprehension too.

The student doesn't need to ask for help with "past participles" - the system identifies the knowledge gap and serves targeted practice. No conversation required.
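Riiid's production models are proprietary deep networks trained on millions of answer logs, so the sketch below is not their method. But the core idea - estimating mastery from answer patterns instead of waiting for the right question - can be illustrated with classic Bayesian Knowledge Tracing. Everything here (the parameter values, the skill names, the next_skill_to_practice helper) is an illustrative assumption, not anything from Riiid:

```python
from dataclasses import dataclass

@dataclass
class SkillState:
    """Bayesian Knowledge Tracing state for one skill."""
    p_known: float = 0.2   # prior probability the skill is already mastered
    p_learn: float = 0.15  # chance of learning the skill on each practice step
    p_slip: float = 0.1    # chance of answering wrong despite knowing the skill
    p_guess: float = 0.25  # chance of answering right without knowing it

    def update(self, correct: bool) -> float:
        """Update the mastery estimate after one observed answer."""
        if correct:
            evidence = self.p_known * (1 - self.p_slip)
            total = evidence + (1 - self.p_known) * self.p_guess
        else:
            evidence = self.p_known * self.p_slip
            total = evidence + (1 - self.p_known) * (1 - self.p_guess)
        posterior = evidence / total
        # Account for learning that may happen during the practice step itself.
        self.p_known = posterior + (1 - posterior) * self.p_learn
        return self.p_known


def next_skill_to_practice(skills: dict[str, SkillState],
                           mastery_threshold: float = 0.95) -> str | None:
    """Pick the weakest unmastered skill -- no student question required."""
    gaps = {name: s.p_known for name, s in skills.items()
            if s.p_known < mastery_threshold}
    return min(gaps, key=gaps.get) if gaps else None


# A student answers a short diagnostic set; the engine watches, it doesn't ask.
skills = {"present_perfect": SkillState(), "simple_past": SkillState()}
for skill, correct in [("present_perfect", False), ("present_perfect", False),
                       ("simple_past", True), ("present_perfect", True)]:
    skills[skill].update(correct)

print(next_skill_to_practice(skills))  # -> "present_perfect"
```

The design point is that the student's only job is to answer questions; diagnosis and sequencing happen entirely on the engine's side.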

This works because standardized tests have finite, predictable patterns. But it points toward a different philosophy: instead of waiting for students to ask the right questions, identify what they need from how they work.

The best teachers don't just deliver information; they read their students. They notice when attention drifts, when confidence wavers, when understanding starts to dawn. They adjust not just content but pacing, challenge level, and emotional support based on hundreds of tiny signals.

Current AI tutors are trying to automate the teaching without understanding the learning. They focus on generating explanations rather than recognizing understanding.

Questions aren't the answers

Real learning support doesn't start with "What don't you understand?" It starts with "What are you working on?" and builds from there.

Instead of asking students to diagnose their own confusion, effective learning tools should:

  • Recognize struggle before the student can articulate it (see the sketch after this list)
  • Provide practice opportunities that build understanding gradually
  • Adapt based on how students work, not what they say
  • Create contexts where learning can emerge naturally

This isn't about replacing human teachers; it's about building tools that work more like humans do. Tools that read between the lines, that understand learning as a process rather than an event.
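What might "recognizing struggle" look like in practice? Here is a toy heuristic over behavioral signals a tool could watch - response times, recent correctness, how often the student erases an answer. The function name and every threshold are hypothetical, not values from any real product:

```python
from statistics import median

def looks_stuck(response_times_s: list[float],
                recent_correct: list[bool],
                erase_count: int) -> bool:
    """Flag probable struggle from how a student works, not what they say.

    All thresholds are illustrative guesses; a real system would learn
    them from data rather than hard-code them.
    """
    if len(response_times_s) < 4:
        return False  # too little behavior observed to judge

    baseline = median(response_times_s[:-1])
    hesitating = response_times_s[-1] > 2.5 * baseline   # sudden slowdown
    missing = recent_correct[-3:].count(False) >= 2      # mostly wrong lately
    churning = erase_count >= 3                          # rewriting answers

    return hesitating and (missing or churning)
```

Crude as it is, a signal like this flips the interaction: instead of the child having to announce their confusion, the tool notices it and can step in with an easier item or a worked example.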

Beyond the conversation

The AI tutor gold rush will eventually settle. Companies are discovering that reliable educational AI requires far more human oversight than anyone expected. The market is crowded with similar solutions that don't quite work.

But this moment of disappointment is actually an opportunity. Instead of building better chatbots, we could build tools that understand learning itself. Tools that work with how children actually think and behave, not how we think they should think.

The breakthrough won't come from AI that can answer any question. It'll come from AI that knows what questions are worth asking, and when asking isn't necessary at all.

Learning has never been about having access to information. In an age of infinite information, it's about building understanding. And understanding can't be automated, but it can be supported, nurtured, and recognized.

The question isn't whether AI can teach. It's whether we can build AI that truly helps people learn.