In this edition of the KidsAI Interview Series, we are joined by Dr. Nick Potkalitsky, an educator and systems consultant whose work spans AI integration, humanities pedagogy, and assessment reform. As the creator of Educating AI, his research and reflections offer crucial guidance for how schools can implement generative AI not as a trend, but as a thoughtfully embedded learning companion. His emphasis on ethical infrastructure, narrative design, and student agency aligns closely with our mission at KidsAI.
Evren Yiğit: Nick, welcome. AI literacy is no longer a futuristic concept; it’s a foundational skill that intersects with how children think, create, and make sense of the world. But implementing AI well in schools requires more than just access to tools. What does a strong infrastructure for AI integration in education look like to you, and what do you think most schools or systems are currently missing?
Nick Potkalitsky: Thank you, Evren. Most schools think they’re implementing AI when they’re really just buying access to corporate platforms. They’re confusing convenience with infrastructure, and it’s putting students at risk.
Real infrastructure starts with data sovereignty. Too many districts are using tools that harvest student prompts and behavioral data for corporate training purposes. We need secure, district-managed access points that comply with local, national, and international safety standards, not open doors to platforms that treat children’s thinking as a product.
But the deeper issue is equity by design. I see schools widening achievement gaps by assuming all students will use AI as an amplification tool. Without intentional scaffolding for students who lack foundational skills or home support, AI becomes just another way to sort kids into winners and losers. Strong infrastructure means ensuring tools work on low-spec devices and creating classroom routines that don’t assume outside AI access.
Evren Yiğit: It is very important that you frame “AI integration” as something deeper than tool access, starting with data sovereignty and equity by design. One of the patterns we’re seeing in global classrooms is enthusiasm without preparation. Before scaling AI use among students, what do you believe are the essential steps schools and educators should take to lay the groundwork for ethical, purposeful adoption?
Nick Potkalitsky: The first essential step is admitting that most teachers are months to years behind their students in AI experience, yet we’re asking them to guide ethical use. This creates an impossible situation where educators lack the pedagogical content knowledge to understand when AI serves learning versus when it undermines it.
We need what I call “cognitive safety” measures. Just as we ensure physical safety in schools, we need to protect the developmental experiences children need. This means understanding sensitive periods where AI assistance might actually hinder skill development—like using AI to write essays before students have developed their own voice and argumentation skills.
Schools also need to move beyond reactive policies shaped by corporate platforms and student misuse. Instead, start with educational goals and community values. Ask: what cognitive work do we want to preserve for students themselves? What experiences are essential for human development that can’t be outsourced to machines?
Most importantly, create disclosure-based cultures where students, parents, and educators can openly discuss both the benefits and concerns of AI use without fear of punishment or judgment.
Evren Yiğit: What kind of research frameworks or evaluation criteria do you think we should focus on if we want to understand the real educational impact of AI?
Nick Potkalitsky: We’re seeing too much evaluation focused on efficiency metrics rather than human development. The question shouldn’t be “how fast can students complete tasks with AI?” but rather “how does this support their cognitive and creative growth?”
Our frameworks must prioritize what I call “learning-first” evaluation. The most important criterion is preserving essential cognitive work. We need to assess what mental tasks AI does for students versus what students must do themselves to develop crucial capabilities. If AI is replacing the struggle of organizing ideas or developing arguments, we’re undermining the very skills we’re trying to build.
The real question isn’t whether AI makes students more efficient, but whether it supports their growth into critical thinkers who can navigate an AI-rich world responsibly.
Evren Yiğit: Exactly: “learning-first” evaluation is a very important lens. It shifts the focus from efficiency to actual growth. The pace of AI development can lead to reactive decisions in education. Many tools are designed for adult users and quickly ‘adapted’ for students. What risks do you see in the way AI is currently being introduced in learning environments, and how can those risks be mitigated through better design and planning?
Nick Potkalitsky: The reactive path—banning AI rather than thoughtfully integrating it—creates its own problems. When schools simply prohibit AI use, they’re not preparing students for a world where AI is ubiquitous.
But I’m more concerned about “uncritical adoption”—tools purchased and encouraged without clear pedagogical frameworks. This leads to technology driving instruction rather than serving learning goals. I see districts becoming overly dependent on AI tools from technology companies whose primary markets aren’t education, making schools vulnerable to changes driven by business needs rather than learning outcomes.
We’re also seeing unreliable AI detection software create cultures of surveillance rather than learning, eroding the trust between teachers and students that’s essential for education. Meanwhile, policy is being shaped by corporate platforms and student experimentation rather than professional expertise and community values.
The solution requires building internal expertise and choosing technology partners who prioritize transparency, data privacy, and educational values. Most importantly, we need to rethink assessment entirely—moving toward authentic learning processes that require human judgment and cannot be easily replicated by AI.
Evren Yiğit: One thing we ask often at KidsAI is: what does age-appropriate AI really look like? From your perspective, what should an effective AI literacy framework include for primary and secondary learners—especially if we want to go beyond basic prompt engineering?
Nick Potkalitsky: Age-appropriate AI is about understanding how AI reshapes thinking at different developmental stages. What I call the “hard problem” of AI literacy is helping students navigate this cognitive partnership responsibly.
For younger students, we start with fundamental questions: “When is it okay to ask for help?” “What makes work ‘mine’?” We use unplugged activities to help them distinguish human versus machine thinking before any direct AI access.
As students mature, they grapple with more complex scenarios about fairness, bias, and what they owe others when using AI. High schoolers should explore how AI choices reflect their values and the larger economic and social forces shaping AI development.
The ultimate goal is cultivating what I call “architects of their own intellectual work”—students who can thoughtfully engage with AI while preserving their own cognitive capabilities and creative voices. This isn’t about following rules; it’s about developing the judgment to know when AI enhances learning and when it interferes with it.
Evren Yiğit: I want to repeat that phrase, helping students become “architects of their own intellectual work,” because it is crucial. It also raises the question of co-design. We advocate for co-design and participatory research, especially when developing tools for children. What role do you believe educators and students should play in shaping or testing AI tools before they are fully implemented in classrooms?
Nick Potkalitsky: Absolutely essential. In my AI Theory and Composition class, students conducted a methodical three-part research study that revealed something surprising: students are as deeply concerned about cognitive offloading as teachers are. This challenges the assumption that young people just want AI to do their work for them—they actually worry about losing essential thinking skills.
Students should help shape the ethical guidelines surrounding AI use in their classrooms. This promotes ownership and responsibility rather than compliance. When we involve diverse stakeholders in shaping AI’s educational role, we move from reactive adoption to thoughtful integration grounded in our educational values.
Evren Yiğit: I completely agree with you: students, and children in general, should definitely have a say in their future. You’ve spoken and written about the need to rethink assessment in the age of AI. As schools explore alternatives to traditional essays and exams, how do you see AI supporting more authentic and creative forms of student assessment?
Nick Potkalitsky: AI hasn’t created an assessment crisis; it’s revealed that our assessments have been broken for a long time. Traditional methods that focus on final products rather than learning processes were already inadequate for understanding authentic student thinking.
This creates an opportunity to design AI-free spaces within our classrooms where students engage in the cognitive work that builds capacity. I’m a strong advocate for shifting all homework to in-class work, where we can observe and support the learning process directly rather than wondering what happened outside our walls.
Assessment should focus on how students think through problems in real time—their reasoning, their questions, their collaborative discussions. Oral presentations, Socratic seminars, hands-on projects, and reflective conversations reveal far more about student understanding than any written output ever could. Yes, this process-oriented assessment is extremely time-consuming, but it’s the only way to truly understand learning.
Evren Yiğit: As we integrate AI into learning environments, there’s a risk of losing some of the nuance, sometimes messiness, and also the joy of the learning process. What do you think we should be most careful not to lose as AI becomes more present in classrooms?
Nick Potkalitsky: We must preserve what I call the “question development that drives genuine learning.” AI providing answers too easily can short-circuit the process where students grapple with ideas and make unexpected connections.
AI should serve rather than replace the essentially human work of learning, growing, and becoming.
Evren Yiğit: Yes, and if we want students to be able to navigate this landscape, they’ll need more than just technical skills. What mindsets or habits do you think students will need most to thrive in AI-rich environments?
Nick Potkalitsky: The real challenge in upper-level classes will be staging experiences that allow students to work through complex decision trees about AI use and approach the development of an internal AI policy. This requires giving students genuine choices about what they’re studying and how their studies are measured.
Students need to grapple with nuanced scenarios where AI use isn’t clearly right or wrong, but depends on context, purpose, and their own learning goals. They need practice navigating these decisions in environments where they can experiment, reflect, and refine their approach without punitive consequences.
This kind of sophisticated decision-making can’t develop in overly restrictive environments. Students need the freedom to make choices—and mistakes—as they develop their own frameworks for ethical and effective AI use. The goal is cultivating an internal compass, not external compliance.
Evren Yiğit: Exactly—and the more we speak to others in this space, the clearer it becomes that many of us are trying to solve the same problems in isolation. What would it look like to bring those efforts together? If you could bring together your dream coalition to support ethical and inclusive AI in education, who would be involved, and what would their priorities be?
Nick Potkalitsky: Given the “wicked problems” AI presents in education, fragmented approaches won’t work. We need collaboration centered on shared responsibility and human-centered design.
The coalition I envision would focus on joint development of ethical frameworks grounded in rigorous research, where developers learn about child cognitive and emotional development from experts in those fields. We’d see collaborative research focused on pedagogical impact rather than just academic performance, with researchers partnering with classroom educators to understand AI’s effects on cognitive development and socio-emotional well-being.
Most importantly, we’d co-design AI tools that amplify human capabilities rather than prioritize institutional efficiency. This means creating tools collaboratively with educators, students, and communities to serve student development and preserve opportunities for authentic creativity.
This collaborative model recognizes that AI integration is a continuous process of learning and adaptation, not a one-time event. The goal is ensuring AI serves students and communities rather than dictating educational goals.
KidsAI Closing Insights
Dr. Nick Potkalitsky’s interview brings a powerful recalibration to the conversation around AI in education. He reminds us that true AI integration is not about tool access, but about building secure, equitable infrastructure that centers data sovereignty and learning equity, especially for students without consistent access to technology. His call for “learning-first” evaluation frameworks challenges the obsession with efficiency, urging educators and developers to measure how AI supports critical thinking and cognitive development, not just speed. Nick’s emphasis on professional readiness highlights the gap many teachers face as they try to guide students through tools they themselves are still learning.

We were particularly struck by his insistence that AI should never replace the core messiness, nuance, and joy of learning. Instead, it should scaffold thinking and help students become, in his words, “architects of their own intellectual work.” His approach to AI literacy, starting with unplugged thinking, then building ethical awareness, offers a blueprint for age-appropriate design that evolves with the learner.

Finally, his vision of cross-sector coalitions grounded in pedagogy and child development resonates deeply with KidsAI’s mission. Nick’s reflections push us to think more boldly and more humanely about how we build, implement, and measure AI in the lives of children.