The invisible gap

In this post I explore the disconnect between AI's rapid advancement in business and its absence in our children's education.

While businesses race to adopt AI, our education system remains strangely quiet on the subject. This silence is creating a growing disconnect between the world children are being prepared for and the world they'll actually enter.

Here's the reality I'm seeing at the moment: there is no widespread transition happening, at least in primary schools. While the world debates AI's impact on education, many of our classrooms remain completely untouched by this conversation. My 4th grader has never heard anyone mention AI at her school. Not once. The only exposure her class had was when her dad volunteered to create workshops explaining AI concepts like large language models (LLMs) and machine learning to children. This kind of AI literacy shouldn't depend on chance connections to people who happen to work in the field.

Of course, there are individual teachers and forward-thinking schools experimenting with these technologies. Isolated classrooms where educators recognize the importance of preparing students for an AI-integrated future. But these remain exceptions rather than the norm - pockets of innovation in a system that has yet to develop a coherent approach to AI literacy. In fact, we're seeing active resistance in some areas - the City of Helsinki recently banned teachers from using generative AI applications like ChatGPT in schools.

And it's not just Finland. This pattern appears across Nordic countries, and likely throughout much of the world. Even in regions known for progressive educational approaches, there's a paradoxical hesitancy about AI integration. In many schools, mentioning tools like ChatGPT almost feels taboo, treated as something forbidden rather than as technology worth understanding.

So why are schools so resistant to adopting AI tools?

Some of it is understandable caution. Schools have the responsibility to create safe learning environments, and new technologies bring unknowns. There are legitimate concerns about data privacy, content appropriateness, and ensuring equal access among students.

But there's another practical factor. **Most AI tools weren't designed for educational purposes or for children.** These tools were built for adult users, not for learning environments or children's developmental needs.

The text-based interfaces of today's AI systems create immediate barriers for young learners. A 7-year-old who is still learning to write can't compose complex prompts. A 9-year-old might not know how to phrase questions in ways that AI systems understand. An 11-year-old struggles to process long blocks of text on a screen. While adults can reformulate their questions when they don't get useful answers, children quickly become frustrated and give up.

Current AI interfaces demand skills that developing minds simply don't have yet - from prompt engineering to filtering lengthy responses for relevant information.

This is where we're missing a crucial opportunity. The problem isn't just that AI tools are absent from classrooms; it's that the tools that do exist weren't designed with children or learning in mind at all.

Think about how children actually learn. They need to connect new information to what they already know. They need to test their understanding and make mistakes. They need to apply knowledge in different contexts. And they need to follow their curiosity.

But current AI tools do almost none of this. When a child asks a question, AI delivers a direct answer and stops there. It doesn't ask "did you understand this?" or "let's try applying this to something else." There's no back-and-forth, no exercises, no testing of understanding. It's like giving a child math test answers without teaching them how to solve the problems.

What's worse, these systems make assumptions that don't match how children actually function. They assume users already know basic concepts, want comprehensive technical answers, can evaluate information reliability, can focus for long periods, and can handle abstract concepts. A child who wants to know why the sky is blue doesn't need a lecture on light wavelengths and atmospheric molecules.

And here's the real paradox: children could benefit enormously from AI if it were actually designed for them. Imagine a personal guide answering endless "why" questions without growing tired, or a tool that adapts explanations to a child's interests (dinosaurs! robots! horses!). Instead, we've created tools that exclude the very users who might benefit most from them.
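To make this concrete, here's a minimal sketch of what such a guide might look like. It assumes the OpenAI Python client; the model name, prompt wording, and behavior rules are my own illustrative assumptions, not an existing educational product.

```python
# A minimal sketch of a child-oriented tutoring loop. Assumes the
# OpenAI Python client; the model name and the rules in the prompt
# are illustrative assumptions, not an existing product's design.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# The pedagogy lives in the system prompt: short answers, examples
# tied to the child's interests, and a check-for-understanding step.
SYSTEM_PROMPT = """You are a patient guide for a 9-year-old.
- Answer in two or three short sentences, using everyday words.
- Tie explanations to the child's interests: {interests}.
- End each answer with one simple question that checks understanding.
- If the child seems confused, try a different example rather than
  repeating the same explanation."""

def tutor_session(interests: str) -> None:
    messages = [{"role": "system",
                 "content": SYSTEM_PROMPT.format(interests=interests)}]
    while True:
        question = input("child> ").strip()
        if not question:  # empty line ends the session
            break
        messages.append({"role": "user", "content": question})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=messages,
        ).choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        print(f"guide> {reply}")

if __name__ == "__main__":
    tutor_session(interests="dinosaurs and robots")
```

The point of the sketch is that none of this requires new model capabilities. The back-and-forth, the interest-matching, and the comprehension checks are design choices layered on top of the same systems that currently ignore them.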

So why is addressing this so difficult? Part of it comes down to the nature of our educational systems. Education as an institution is inherently conservative. It's designed to preserve and transmit established knowledge, not to rapidly incorporate emerging tools that might disrupt traditional assessment methods or challenge what "learning" means.

The structure of education itself resists rapid change. Curriculum development cycles are measured in years. Teacher training programs often lag behind technological developments. Schools are built around stability rather than adaptation.

There's also a fundamental resource problem. Introducing new tools requires time for professional development, technical support, and thoughtful integration into existing curricula. In systems where teachers are already stretched thin, adding "learn and implement AI tools" to their workload without additional support is impractical.

Another factor is the unprecedented complexity of these technologies. The leap from understanding basic digital tools to comprehending how LLMs work represents a significant knowledge gap for everyone - educators, administrators, and policymakers alike. This technical complexity makes it challenging to effectively evaluate these tools and their potential impact on education.

But perhaps most significant is the existential challenge AI presents to traditional educational models. If AI can generate essays, solve math problems, and provide instant feedback, what is the purpose of many classroom activities? This question is profoundly uncomfortable for a system built around specific ways of demonstrating knowledge.

The irony is that avoiding these tools doesn't protect students or the educational system. It just postpones the inevitable confrontation with these questions while leaving students underprepared.

So if banning AI tools doesn't work, and blindly adopting adult-focused AI isn't appropriate either, what's the answer? What's missing is a middle path: creating purpose-built educational AI tools while using existing AI as a subject of critical study. Rather than just adopting tools designed for adult professionals or banning them entirely, we need systems built around pedagogical needs and age-appropriate interfaces.

This middle path will likely emerge unevenly. With most schools moving slowly, the first educational AI applications are already appearing through parents supporting their children's learning at home. This creates an uncomfortable reality: children with tech-savvy, resourceful parents gain early exposure to AI literacy while others fall behind. This digital divide isn't hypothetical - it's actively forming.

Meanwhile, forward-thinking educators are finding ways to integrate AI tools that actually enhance rather than replace their teaching, using AI to provide students with personalized feedback on writing, create custom practice materials, or help explain difficult concepts through multiple approaches when a student is stuck. These concrete examples demonstrate how purpose-built educational AI can augment human teaching rather than threatening it.
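As a rough illustration of the first of those uses, here's a sketch of teacher-configured writing feedback, again assuming an OpenAI-style chat API. The rubric wording, grade-level framing, and feedback rules are hypothetical, meant only to show how a teacher's pedagogical intent could be encoded rather than replaced.

```python
# A minimal sketch of teacher-configured writing feedback. The rubric
# and grade-level framing are illustrative assumptions, not a real
# product's settings.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

FEEDBACK_PROMPT = """You are helping a teacher give feedback to a
grade {grade} student. Comment only on: {rubric}.
Name one thing the student did well, then give one concrete
suggestion, in language a grade {grade} student can read on their
own. Do not rewrite the text for them."""

def writing_feedback(text: str, grade: int, rubric: str) -> str:
    """Return short, rubric-scoped feedback on a student's writing."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": FEEDBACK_PROMPT.format(grade=grade, rubric=rubric)},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(writing_feedback(
    text="The dinosaur runned very fast becuse it was hungry.",
    grade=4,
    rubric="past-tense verbs and spelling",
))
```

Notice that the teacher stays in control here: the rubric limits what the AI comments on, and the "do not rewrite" rule keeps the work in the student's hands.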

While caution from educational institutions is warranted, there's a difference between thoughtful implementation and institutional paralysis. Schools don't need to completely reimagine their operations overnight, but they do need to begin systematic exploration and experimentation now. The current approach of either ignoring AI or treating it primarily as a cheating risk misses the fundamental shift these technologies represent.

By starting small - incorporating basic AI literacy into digital citizenship curriculum, providing teacher training on appropriate AI integration, allowing controlled experimentation with educational AI tools - schools can begin building the foundation for a more comprehensive approach. This measured adoption acknowledges both the reality that these technologies aren't going away and the legitimate pedagogical concerns educators have.

The stakes here are higher than many realize. Because ultimately, resistance to change doesn't stop change from happening. It just means we miss the opportunity to shape that change in thoughtful ways. And in education, where we're preparing children for their futures rather than our past, that's a missed opportunity we can't afford.