AI in Education has moved past the “try it and see” phase in K–12, and districts are feeling that shift in real time. In a recent CoSN Podcast episode, host Stephen Gilfus brings together Dr. Tom Ryan and Narayan Nandigam (Nandi) to explore what’s changing, what leaders should prioritize, and what responsible adoption looks like when AI is already in teachers’ hands.
From the opening minutes, Gilfus makes it clear this isn’t a conversation about a single product or pilot. Instead, he frames a more pressing leadership challenge: how school systems move from scattered experimentation to meaningful, measurable impact, without losing sight of trust, equity, privacy, and the human relationships at the center of learning.
Who’s in the conversation, and why it matters
Gilfus brings a rare blend of education and industry experience, from founding Blackboard Inc. to advising organizations on AI, cybersecurity, and digital transformation. Throughout the episode, he serves not just as a moderator, but as a guide, consistently connecting classroom realities to system-level implications.
Dr. Tom Ryan offers the long view. A longtime CIO, former CoSN Board Chair, and national leader in digital transformation, Ryan has lived through decades of technology cycles in schools. That experience shapes his skepticism of hype and his insistence on grounding AI adoption in instructional and organizational reality.
Narayan “Nandi” Nandigam, Vice President and Global Head of Services and Education at Infosys, adds a global and cross-industry perspective. His lens connects K–12 classrooms to enterprise systems, data foundations, and large-scale transformation efforts already underway around the world.
Why this moment feels different
Gilfus opens with a deceptively simple question: what feels different about AI compared to earlier waves of education technology?
Ryan’s answer is direct. This is not a traditional edtech rollout. “You can easily get into it on your own,” he explains, pointing to research showing that a majority of teachers are already using AI, often without district-provided systems, training, or guidance. They are doing it for one reason: it saves time.
That reality changes the adoption equation. Districts are no longer deciding whether AI will enter their systems. It already has. The real question is whether leaders will shape its use or leave educators to navigate both opportunity and risk on their own.
Nandi reinforces the urgency from another angle. Generative AI lowers the barrier to entry by using natural language as the interface. When the technology speaks the same language as its users, adoption accelerates. That accessibility creates enormous potential for personalization and productivity, but also increases the need for clear leadership and guardrails.
“When the interface becomes natural language, AI stops feeling like a system you have to learn and starts feeling like a tool you can use.” Narayan “Nandi” Nandigam, Vice President and Global Head of Services and Education, Infosys
AI as a productivity accelerator, not “another edtech tool”
One of the strongest throughlines in the conversation is the rejection of AI as just another initiative layered onto already crowded agendas.
“Technology doesn’t causally impact increased academic performance,” Ryan says. “Teaching and learning do.” In his view, AI belongs in the productivity suite, not the edtech category. Its value lies in saving time, improving efficiency, and expanding access so educators can focus more deeply on instruction, relationships, and student engagement.
Gilfus builds on this idea by naming a familiar tension. Leaders often look for narrow proof points to justify AI adoption, while AI’s real impact is distributed across instruction, operations, and enterprise workflows. Measuring the right things, rather than expecting AI to single-handedly raise test scores, becomes critical.
Two examples that make the impact tangible
When Gilfus asks for concrete, tactical examples, Ryan offers two that resonate precisely because they are ordinary.
The first involves his daughter, a teacher preparing to teach 7th grade science for the first time. She was mapping out a pacing guide manually across months. Together, they used AI with state standards, the school’s instructional framework, and the district calendar to build a yearlong plan aligned to required outcomes. From there, they moved into unit plans and lesson plans that incorporated differentiation and established instructional frameworks.
“It wasn’t new work,” Ryan notes. “It was work she already had to do. It just made it easier.”
The second example is even more personal. A different teacher told Ryan, through tears, “I can finally be sick.” She had avoided taking time off because substitute plans in her subject area often fell apart. AI helped her create clear, usable lesson plans that a substitute could follow. The result wasn’t efficiency for efficiency’s sake. It was flexibility, resilience, and humanity in a demanding profession.
Leadership and values must guide adoption
As the conversation turns to ethics and responsibility, Gilfus presses both guests on what leaders should prioritize.
Ryan is unequivocal. “This isn’t an IT initiative.” It is a leadership initiative rooted in values. Education, he reminds listeners, is fundamentally relational. Districts must define what they value about great teaching and learning, then ask how AI enhances or diminishes those values.
“If our core value becomes making education cheaper at the expense of relationships or creativity,” Ryan warns, “we’re starting in the wrong place.”
Nandi complements that perspective with a practical responsible AI framework, emphasizing transparency and explainability, equity-first design, data privacy and consent, human-in-the-loop oversight, and ongoing professional learning. AI, he stresses, is meant to augment educators and administrators, not replace them.
Rigor, originality, and learner agency
Gilfus raises a concern many educators share: how do schools maintain rigor and originality when AI can generate content instantly?
Ryan reframes the issue with a line that lingers. “Hallucination is a feature, not a flaw.” Generative AI is designed to produce plausible language, not guaranteed truth. Treating it like a calculator is a mistake. Treating it like a thought partner can enhance creativity, reasoning, and critical thinking, as long as humans remain accountable.
“As AI takes on routine tasks, learning can shift toward reasoning, ethics, and conceptual understanding.”— Narayan “Nandi” Nandigam, Vice President and Global Head of Services and Education, Infosys
The implication is clear. Districts must explicitly teach verification, questioning, and judgment. AI does not remove the need for thinking. It raises expectations for it.
Nandi adds that this shift has the potential to move learning away from rote tasks and toward higher-order skills such as reasoning, ethics, and conceptual understanding. In that sense, AI forces education systems to clarify what they truly value and assess.
Infrastructure, governance, and the enterprise reality
Gilfus intentionally shifts the conversation to what sits underneath all of this: data, infrastructure, and governance.
Ryan emphasizes that AI adoption must be enterprise-wide, led by executive leadership rather than owned by IT alone. He points to AI maturity models as a practical way for districts to assess current state, define future goals, and build roadmaps supported by evidence and measurable indicators.
Nandi highlights common pitfalls: fragmented systems, poor integration, and uneven data quality. Drawing on his experience supporting large-scale transformations across industries, he notes that districts often underestimate the foundational data work required before AI can be used responsibly at scale. Without that foundation, even well-intentioned AI initiatives struggle to move beyond pilots.
He outlines key building blocks districts should consider, including governance structures, data foundations, responsible AI policies, change management, and flexible technology architectures that can evolve as tools and models change.
“AI outcomes are only as strong as the data foundation underneath them. That work is often underestimated, but it’s essential.”— Narayan “Nandi” Nandigam, Vice President and Global Head of Services and Education, Infosys
Stephen Gilfus’s closing challenge: start now, but start wisely
Gilfus closes the episode with a sense of urgency grounded in history. Referencing Moore’s Law, he asks listeners to consider the magnitude of technological change from 1963 to today, then project forward.
If change continues to accelerate, what happens to districts that wait?
His message is not reckless adoption. It is deliberate action. Start small. Secure data early. Experiment responsibly. Learn by doing. Because standing still is no longer a neutral choice.
From experimentation to impact
Taken together, Gilfus, Ryan, and Nandi make a compelling case: AI in Education is no longer theoretical. It is already reshaping how educators plan, teach, and manage systems. The real question is not whether AI will be used, but whether it will be guided by clear values, strong governance, and an unwavering commitment to human-centered learning.
Moving from experimentation to impact requires leadership more than technology. And that work, as this conversation makes clear, needs to begin now.
Helpful Resources
- Enterprise AI for Education: Success Stories
  Real-world examples of how AI is being implemented at scale across education systems, with a focus on impact, governance, and responsible use. For questions related to enterprise AI in education and large-scale implementation support, contact Mitrankurm@infosys.com.
- K–12 Generative Artificial Intelligence (Gen AI) Readiness Checklist
  Developed by the Council of the Great City Schools (CGCS) and the Consortium for School Networking (CoSN), this checklist helps districts assess readiness across leadership, policy, data, and implementation.
This CoSN podcast episode was sponsored by Infosys.




