From ambition to impact: what the 2026 Public Sector AI Index tells us about AI in the UK
The Public Sector AI Adoption Index shows a UK public sector that is curious, motivated and increasingly trained, but not yet systematically supported to use AI in the flow of real work.
74% of public servants globally now use AI, with most adoption happening in the past year. The UK is part of this wave.
The UK scores relatively well on education. 37% of UK public servants report receiving some form of AI training and confidence is high, with 75% stating that they find AI easy to use.
Yet the Index places the UK in a group where AI is “present but unevenly embedded, often constrained by infrastructure or unclear permissions”.
In other words, UK staff are learning about AI, but they are often left to join the dots between training, tools, governance and day‑to‑day casework.
For Datnexa, this is the pivotal moment. Ambition is no longer the problem. The challenge is building real, governed AI in production: AI that lives inside social care, housing, prevention and related services, not just in pilots and slide decks.

Impact, not hype: what UK frontline workers actually need
The Index is clear that enthusiasm is high where people see AI as useful, empowering and clearly allowed, and where it saves time on real tasks. It is also clear that “not knowing where to start” remains a major barrier in low‑education environments, even when training has technically been provided.
For UK public services, that means:
AI has to reduce friction in core processes (assessments, case recording, triage, communication with residents), not just generate generic text.
AI must free professional time for judgement, relationship‑building and safeguarding, not add another dashboard or inbox to manage.
AI must make a difference on the cases that keep leaders awake - missed early‑help opportunities, placement breakdowns, repeat homelessness, preventable crises.
Datnexa’s Hey Geraldine assistant for adult social care workers is one example of how this looks in practice. Built from real staff expertise and deployed safely in councils, it supports workers with evidence‑based answers, drafting and decision support that map directly to local practice, so a practitioner experiences AI as “part of how I do the work here”, not as a generic chatbot.
The lesson from Hey Geraldine and similar deployments is simple: when AI is anchored in real workflows and local context, adoption stops being an abstract change programme and becomes a pragmatic choice made hundreds of times a day by workers under pressure.
Governance by design: closing the shadow‑AI gap
The Index highlights a stark pattern - where enablement and empowerment are low, public servants turn to personal tools and “shadow AI”.
In low‑enablement environments, 64% of enthusiastic AI workers use personal logins, and 70% use AI for work tasks without their manager knowing.
50% of public servants say concerns about data security or privacy are a barrier.
This is exactly the risk UK organisations now face: the more staff learn about AI, the more they will quietly use ungoverned tools to get through their workload.
Datnexa’s position is that this cannot be fixed by more policy documents alone. Governance has to be designed into the way AI is procured, built and embedded:
Safety, safeguarding, information security and ethics are treated as first‑order requirements, not as a final sign‑off step.
Approved tools are easy to access and clearly better than the shadow alternatives.
Frontline staff are clear on what they can and cannot do with AI, in plain English, and they know exactly who to ask when something feels uncertain, echoing the high‑empowerment environments highlighted in the Index.
This is how Datnexa approaches every implementation, from Hey Geraldine in adult social care to targeted prevention work (NFIX) in sensitive, multi‑agency environments. Governance is not a separate project; it is part of the architecture.
Embedding, not experimentation: AWS‑native discipline in the UK context
The Index makes an important point about embedding: where AI is not visible in everyday systems, staff conclude that their organisation, and often their country, is falling behind, weakening confidence in leadership over time. Conversely, in high‑embedding environments, public servants of all ages (including 55+) are more likely to save significant time using AI.
For UK organisations wrestling with legacy estates, this is where a disciplined, AWS‑native approach matters:
Cloud‑first infrastructure can “leapfrog” old constraints and give workers rapid access to approved AI tools.
A preference for AWS services means a single, trustworthy solution rather than a patchwork of overlapping tools that are hard to govern and harder to scale.
Enterprise‑grade AI can be integrated into case management, housing systems and line‑of‑business applications, not just added as yet another standalone portal.
Datnexa’s work on NFIX and other targeted prevention programmes shows what this looks like: board‑level and workforce‑level conversations about data science that land because they are tied to real cases and outcomes, underpinned by robust, cloud‑native data and AI pipelines rather than one‑off models.
This is how the UK can move from “patchy progress” to a pattern where AI is genuinely embedded in social care, housing and prevention services at scale.
True lessons learned: towards a better tomorrow for UK public services
The Index emphasises that “ambition, by itself, does not deliver impact”; what separates leading countries is whether public servants have access to effective tools, clear rules, relevant skills and embedded systems. That is entirely aligned with Datnexa’s experience of complex, high‑stakes AI and service projects in UK local government.
From that experience, a few concrete lessons for UK leaders:
Start where the pressure is highest. Choose use‑cases that free time and improve outcomes in visible, high‑stakes areas - adult social care, children’s services, housing, prevention - so staff feel the benefit quickly.
Build permission and safety together. Pair clear “safe harbour” rules for low‑risk uses with robust guardrails for sensitive decisions, unlocking the enthusiasm and optimism the Index identifies while avoiding unmanaged risk.
Invest in skills that are tied to real tools and roles. Training linked to specific, governed AI assistants and workflows is far more powerful than generic awareness alone, echoing the Index’s finding that role‑specific training drives confidence and uptake.
Treat every deployment as a learning system. Use case data, worker feedback and outcome measures to refine prompts, policies and models over time, not just at launch.
This is the heart of Datnexa’s narrative: big ideas about what AI can do for public services, bold technology deployed with AWS‑native discipline and governance by design, and, ultimately, a better tomorrow where AI is simply “how we work now”, freeing UK public servants to focus on the human work only they can do.