AI in UK Schools: Where We Really Are – and What Good Looks Like Next
The honest picture? AI is already threaded through UK education – quietly in the background (spelling, translation, accessibility tools), loudly in the headlines (chatbots, copilots), and increasingly in the day-to-day of lesson prep, feedback, and admin. The opportunity is not “AI for AI’s sake”; it’s purposeful adoption, where relevant and secure, that buys back teacher time, supports inclusion, and enriches learning. The risk is equally familiar: chasing shiny tools without a clear reason, weak evaluation, and then discovering we’ve added workload, muddied safeguarding, and widened gaps. The difference between those two paths is leadership clarity, proportionate governance, and practical AI literacy for everyone who’ll actually use the stuff.
Leadership teams that are making the most progress tend to start by naming a small number of concrete problems that matter in their context – things like reducing the time teachers spend drafting differentiated tasks, tightening the feedback loop in writing, improving access for EAL learners, or unblocking SEND support with quicker first drafts of plain-English plans. They set these against the School Development Plan, keep a steady North Star (“what would success look like for students and staff?”), and only then survey the options. That simple order – purpose before product – means Trustees can hold the line on value and equity, finance can see the total cost of ownership, and staff know why something is coming in and how it will be judged. No drama, just joined-up decision-making.
It helps to treat “AI literacy” as a culture rather than a one-off CPD day. I often use the analogy that you wouldn’t expect Manchester United to train for just one day at the start of the season (although, after last year’s performance, that might actually have been the case!). Staff need a shared language for how these systems work, what they’re good for, and where they fail. That’s everything from prompt craft to bias, data use, and when you must keep a human firmly in the loop. The schools doing this well schedule short, regular practice windows – “show, try, reflect” – and grow a network of digital champions across subjects, rather than relying on one heroic AP for digital to carry the world. Crucially, they normalise talking about limitations as much as wins. If the SLT says, “We’re testing this to learn, not because we’ve already decided,” staff trust goes up and better ideas surface faster.
Safeguarding and compliance deserve to be baked in at the start, not patched on later. None of this is entirely new – data minimisation, retention and deletion, completing a DPIA, transparency, and security are familiar principles – but the scale and opacity of AI data flows mean we must be explicit. I always advise that schools should insist on clarity about where data goes, whether it trains any models, how long it’s kept, and who can see it. That’s not always easy, but it’s good to have the aspiration. In student-facing use, lean towards anonymising content by default and be clear when named data is allowed (and why). A living AI policy helps – short, plain English, visible to staff and parents – covering acceptable use, tool approval criteria, disclosure expectations for students’ work, incident reporting, and cycles for review. The tone matters: not “thou shalt not”, but “here’s how we use this safely and well in our school”.
If that sounds a bit dry, remember why it matters. Done thoughtfully, AI can make a visible dent in teacher workload and widen access. Teachers report quicker starts on schemes of work, more responsive formative feedback, and easier differentiation for mixed-attainment classes. TAs and SEND teams are using AI to generate social stories, simplify complex texts without losing meaning, and surface draft targets that teachers then refine with their professional judgement. For students, accessibility features – such as captioning, text-to-speech, and language scaffolds – are becoming mainstream rather than add-ons. Where leadership ties these gains back to named priorities, staff can see the point and are more willing to engage.
Of course, the caveats are real. Hallucinations happen, especially when prompts are vague or tasks go beyond the model’s competence; bias doesn’t vanish just because an answer is fluent; some tools are indiscriminate with data; and enthusiasm can slide into over-reliance. You also can’t ignore academic integrity. The answer isn’t to pretend students will never use AI, but to be explicit about when they can, how they must acknowledge support, and how you’ll assess understanding (more process, more viva-style checkpoints, and smarter task design – problem-based learning, anyone?). A simple line in the sand helps: AI may support idea generation, structure, and examples; it must not substitute for your own analysis or original evidence. Staff need backing to redesign tasks and rubrics accordingly.
If you’re wondering what separates the schools that “get there” from those that stall, it’s surprisingly ordinary: they try small, measure properly, and change course without embarrassment. In practical terms, that looks like scoping short pilots against a baseline (“average minutes spent planning a differentiated reading task”, “student feedback turnaround time”, “staff confidence levels”), agreeing success criteria upfront, and collecting both numbers and narratives. They give proper support during the pilot – time, templates, coaching – so they’re testing the approach, not testing people’s patience. Then they decide publicly: adopt, iterate, or stop. Stopping is often where a lot of credibility is gained: when staff see that tools with weak evidence don’t linger, they trust the process and are more willing to test the next thing.
Governance is your guardrail here. Good governing or trust boards ask purpose-first questions and look for evidence that the change is ethical, effective, and sustainable. They absolutely don’t micromanage the pedagogy; they do seek assurance that risks are understood and mitigations are active. Brief, regular reporting helps – usage patterns, outcomes against the baseline, risks and incidents, stakeholder feedback, and budget implications. That gives governors enough sight to fulfil their duties without burying the school in paperwork. It also keeps equity visible: who benefits, who might be left out, and what you’re doing about it (from device access to communications with families).
The curriculum angle matters too. Treat AI literacy like reading the web: it belongs everywhere. Ultimately, students need to learn to plan, prompt, critique, and cite; they should also learn when not to use AI, and why. That’s not about catching them out; it’s about teaching them to be thoughtful, ethical users who know how to check and challenge machine outputs.
There’s a national story unfolding in parallel. The evidence base is developing, DfE guidance continues to evolve, and vendors are rapidly iterating. That can feel chaotic, but it’s also why local clarity of purpose is the safest bet: you control the seams that matter in your setting – professional culture, pedagogy, and how you handle data. If the external picture shifts (and it will), you already have the muscles to adapt. Think of it as an annual cycle rather than a one-and-done plan: revisit your purpose, refresh the policy, re-check risks, test the next wave, and keep closing the feedback loop with staff, students, parents, and governors. Evolution beats revolution.
If you want a few practical touchstones to hold in mind as you move, try these. Keep the “North Star” visible in every decision, so everyone can see how a tool maps to teaching, inclusion, safeguarding or workload. Default to transparency – about what you’re testing and why, what the data flows look like, and how you’ll judge success – because sunlight deters performative governance and builds trust. Insist on comparators rather than single-vendor rushes, so you’re choosing on educational merit and data handling, not just on funky demos. Reserve time for staff to practise and reflect; otherwise, “AI support” becomes another invisible evening task.
Finally, let’s not lose sight of the human dividend. The best outcomes we’re just starting to see aren’t about clever prompts; they’re about teachers having more time to teach, more space to get to know their children, and more energy for their role. They’re also about TAs and SENDCOs feeling equipped, not overwhelmed, and about students seeing technology used with integrity and purpose – and learning to do the same. That’s the version of “AI in education” I’m happy to champion: safe, transparent, proportionate – and squarely in service of people and learning.
Beyond that, I hope to see you at the Toddle event at Pall Mall on November 7th, where we will be talking lots more about effective and impactful use of AI and edtech.