The Delta Issue #79
AI Is Moving Fast. John Bailey Shares How Education Leaders Can Keep Up.
Hi y’all, Jessica here.
AI has quickly become commonplace in education, with 85% of teachers and 86% of students using it in some form.
At the same time, state leaders are being asked to “lead on AI,” even as the technology itself seems to change by the hour.
I had the chance to sit down with John Bailey, former director of education technology at the US Department of Education and one of the most influential people in Washington policy circles. By his own admission, John was in the top 0.1% of ChatGPT users in 2025, spending more time on the platform than OpenAI’s CEO, Sam Altman. He has worked at every level of government, from state education agencies to the White House, and today he helps leaders across education, healthcare, and the future of work think through rapid technological change as the founder of Vestigo Partners (acquired by Broadstone) and a senior fellow at the American Enterprise Institute.
He’s someone I personally look to for guidance as I try to make sense of what AI means for the future of teaching and learning. In our conversation, I asked John what state education leaders should be watching right now, what risks deserve serious attention, what opportunities already exist, and what it will take to make sure AI supports learning.
Watch our full conversation here:
The transcript below has been edited for brevity and clarity.
Jessica Baghian: Hello everyone. My name is Jessica Baghian. I’m president of Watershed Advisors, and we’re joined today by John Bailey to talk about all things AI and education.
Before we jump in, John, why don’t you do a quick introduction?
John Bailey: Great to be with you. I’m John Bailey, a non-resident senior fellow at the American Enterprise Institute. I also work with a number of philanthropies, investors, and CEOs across education, healthcare, mental health, and the future of work. AI cuts across all of that.
Earlier in my career, I worked at the White House and the Departments of Commerce and Education, but I got my start in Pennsylvania working for Governor Tom Ridge at the Department of Education. So it’s a real joy to be here.
Jessica: You’ve worked at every level of government, and you also happen to be my personal go-to source on AI. I’m a bit of a tech dinosaur myself, so I’ve learned a lot from following how you think about and use these tools.
Let’s start simple: What does your relationship with AI actually look like in daily life?
John: Honestly, it’s probably too close. When everyone else got their Spotify Wrapped, I joked that I got a ChatGPT Wrapped. I was apparently in the top 0.1% of users. That’s probably a cry for help.
But more seriously, I use a range of tools—ChatGPT, Claude, Gemini, Perplexity—not just for writing, but for everyday problem-solving. Cooking, shopping, comparing options, making sense of information overload.
When my mom had a recurrence of lung cancer, I built AI assistants to help us navigate treatment options and manage side effects. In one case, an AI tool even helped surface a medical error. The more you use these tools, the more you realize they’re not just about generating text; they’re about helping people make sense of the world.
Jessica: There’s a big push right now for state education chiefs to “lead on AI.” I’m not opposed to that, but the pace of change is so fast. Asking leaders who are experts in teaching kids to read to also shepherd a rapidly evolving technology feels like a lot. Maybe even unreasonable.
How are you thinking about that tension?
John: That concern is completely valid. The pace is disorienting for everyone. One framework I find helpful comes from a philanthropy CEO who predicts that by around 2027, AI systems will have four major capabilities:
- Expert-level intelligence across multiple domains
- Multimodal understanding and interaction
- Advanced reasoning over longer periods of time
- Embodiment and action through robotics or agents
If this is where things are headed, what does that mean for our system, our teachers, and our students?
For state leaders, the focus should be on deciding where a human should always stay in the loop. Right now, I think it’s too early for heavy-handed, top-down mandates. But it is appropriate to require districts to develop their own acceptable use policies, much like we did with the internet or Wikipedia.
Jessica: I’m also thinking about the broader context: cell phone bans, social media lawsuits, and the real harm we’ve seen from under-regulated tech.
AI feels like it could be even more disruptive. So how do you think about regulation? What’s responsible without stifling innovation?
John: We’re already seeing early warning signs. Social media optimizes for attention; some AI systems risk optimizing for attachment. Empathy can be a powerful feature when it supports teachers or students, but it can also distort social norms.
There are tragic cases where kids, and even adults, form unhealthy relationships with AI systems. Some of this is poorly understood even by the companies building these models.
Right now, policy responses borrow from the social media playbook: age gates and parental controls. That’s a start, but it’s insufficient. I’ve argued that when frontier labs test new models for risks like nuclear or biological harm, they should also be required to test for mental health impacts. States could push for this too.
We shouldn’t throw out the benefits of AI, but we do need to take these risks seriously and proactively.
Jessica: So who ultimately holds responsibility here—state chiefs, governors, districts, the federal government?
John: Honestly, all of us. This technology affects the entire economy and society. There’s personal responsibility, institutional responsibility, and government responsibility at every level.
What’s encouraging is that concern is bipartisan. Attorneys general are investigating. Lawmakers are paying attention. What we need now are smart policy experiments and better tools.
It’s also helpful to distinguish between different types of AI: AI companions designed to form relationships, general-purpose AI tools, and AI systems deliberately deployed in structured environments like schools. Those distinctions should inform what’s allowed, gated, or prohibited.
Jessica: Is anyone getting this right yet?
John: Some states are on the right path. Louisiana is approaching AI with curiosity and pilots. Virginia has required districts to develop acceptable use policies. No one has it fully figured out, but many are moving thoughtfully.
States should be using AI internally to make accountability and report-card data far more accessible. Instead of parents having to decode dense dashboards or technical reports, AI could allow them to ask questions of school data and get clear, plain-language answers. That kind of use is already well within reach.
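To give a sense of how small that lift is, here is a minimal sketch, assuming the openai Python package and an API key. The file name, model name, and question are illustrative placeholders, not a system any state runs today:

```python
# Minimal sketch: plain-language Q&A over a school report-card export.
# Assumes the `openai` package (pip install openai) and an OPENAI_API_KEY
# environment variable. "report_card.csv", the model name, and the question
# are illustrative placeholders.
import csv

from openai import OpenAI

client = OpenAI()

# Read the hypothetical report-card export into text the model can reason over.
with open("report_card.csv", newline="") as f:
    table = "\n".join(", ".join(row) for row in csv.reader(f))

question = (
    "Is this school improving in third-grade reading, "
    "and how does it compare to the rest of the district?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable model would do
    messages=[
        {
            "role": "system",
            "content": (
                "You help parents understand school accountability data. "
                "Answer in plain language, point to the specific numbers "
                "you used, and say so if the data cannot answer the question."
            ),
        },
        {"role": "user", "content": f"School data:\n{table}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```

A real deployment would add privacy controls and verified data sources, but the core interaction really is this short.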
Jessica: Let’s look ahead. Ten years from now, what does responsible success with AI in education look like?
John: I think AI agents will be central. Teachers will effectively have teams of digital assistants—handling planning, feedback, coherence, and administrative work. These agents will collaborate with each other and with educators.
It sounds like science fiction, but it’s closer than people realize. If we steer it intentionally, this could dramatically increase teacher capacity and instructional quality.
Jessica: Final question. For educators or state leaders who are just beginning—who aren’t deep in the AI world—where should they start?
John: Think of AI as a highly capable remote employee. Most people get poor results because they treat it like a search engine.
Give it context. Give it direction. Tell it why the task matters and how you want the output formatted. Write prompts the way you’d write a scope of work for a consultant.
And if you don’t know how to write a good prompt? Ask the AI to write one for you. That alone gets most people 80% of the way there.
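For example, a scope-of-work-style prompt, sketched here for illustration rather than quoted from the conversation, might look like this:

```
Role: You are an experienced third-grade literacy coach.
Task: Review the lesson plan below and suggest three concrete improvements.
Context: Most of my students are multilingual learners; we use a structured
literacy curriculum in 45-minute blocks.
Why it matters: I'm preparing for a formal observation next week.
Format: A numbered list, one short paragraph per suggestion, in a
teacher-friendly tone.
```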
Jessica: Any final thoughts?
John: I spend a lot of time thinking about AI risks, but I’m also genuinely optimistic. This could unlock enormous opportunities for teachers and students. But it won’t naturally move in the right direction. It has to be steered.
That’s a leadership challenge, not a technical one.
Jessica: Thank you, John—and thanks to everyone for joining us.
