Is time running out for ‘safe’ AI?
- Datnexa HQ
A recent article in the Guardian reported that, according to AI safety expert David Dalrymple, the world ‘may not have enough time’ to prepare for AI safety risks, as rapid advances outpace efforts to bring in controls and mitigate harm. While Datnexa does not currently operate on a global basis, we do have extensive experience in developing and deploying AI tools and resources in partnership with local authorities.
For local authorities, AI safety cannot stay abstract, because the risks and the opportunities are already arriving in frontline services today. The choice is not between “fast AI” and “safe AI”, but between unmanaged experimentation and governed systems that demonstrably improve lives.

From existential risk to today’s front door
The warnings about not having enough time to prepare for AI safety often focus on frontier models and global risks, but local government is already exposed in quieter, more immediate ways.
For councils, the urgent questions are: Who is accountable when an AI suggestion shapes a social care decision, and how do we protect residents while still unlocking better outcomes?
This is where Datnexa positions AI not as a distant threat but as something concrete: tools that reduce friction for practitioners, free up time for human care, and are designed to be governed from day one.
Real, governed AI in production
Datnexa’s work starts from a simple principle: if social workers, housing officers or commissioners do not actually use an AI tool in their daily work, it is not a success, no matter how advanced the technology.
That is why projects are co-designed with councils, with clear guardrails on safety, safeguarding and information governance agreed at the outset and built into the architecture rather than added later.
By staying AWS‑native wherever possible, and by using the platform’s analytics and dashboarding services to create a single pane of glass, Datnexa helps public bodies avoid the sprawl of overlapping pilots and shadow systems that make governance harder instead of easier.
Hey Geraldine: safety as a lived practice
“Hey Geraldine”, the AI assistant for adult social care developed with Peterborough City Council, is an example of how safety becomes practical, not theoretical.
Built from the expertise and voice of an experienced occupational therapist, it gives practitioners 24/7 access to advice on technology-enabled care, while operating within strict GDPR‑compliant, encrypted infrastructure and council-controlled data boundaries.
Because it was co-designed with frontline staff, Geraldine is used to answer real questions that would otherwise cost precious time or be left unresolved, and every interaction can be monitored, audited and improved through a clear insights dashboard.
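To make that auditability concrete, here is a minimal Python sketch of what recording a single assistant interaction for later review might look like. The function, field names and storage are illustrative assumptions, not Hey Geraldine’s actual implementation.

```python
import json
import uuid
from datetime import datetime, timezone

def log_interaction(question: str, answer: str, practitioner_id: str) -> dict:
    """Record one assistant interaction as an auditable event."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "practitioner_id": practitioner_id,  # the staff user, never the resident
        "question": question,
        "answer": answer,
    }
    # In production this would be written to council-controlled, encrypted
    # storage that feeds the insights dashboard; here we just print it.
    print(json.dumps(event))
    return event

log_interaction(
    "What telecare options suit someone with early-stage dementia?",
    "Options include door sensors, GPS pendants and medication prompts...",
    "practitioner-042",
)
```

Once every interaction is captured as a structured event like this, monitoring, auditing and improvement become routine queries rather than forensic exercises.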
The focus is not on replacing professional judgement, but on creating more room for it: surfacing options faster so practitioners can spend more time with the people they serve.
Targeted prevention: governance at board level
The National Frailty Index (NFIX) and related prevention work show what happens when AI is tied to real cases, outcomes and board‑level accountability.
By analysing millions of data points to flag people at risk of falls and other adverse outcomes, NFIX shifts conversations from abstract analytics to specific residents and interventions, with governance conversations happening alongside clinical and operational ones.
Because results are presented in accessible, visual formats, leaders can interrogate not only performance but also fairness, coverage and unintended effects, strengthening both assurance and impact.
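To illustrate the shift from abstract analytics to specific residents, the sketch below shows a deliberately crude, rule-based risk flag in Python. It is purely illustrative: the fields, weights and threshold are assumptions for this example, not the NFIX methodology, which draws on far richer data.

```python
from dataclasses import dataclass

@dataclass
class Resident:
    resident_id: str
    age: int
    prior_falls: int
    mobility_score: float  # 0.0 (poor) to 1.0 (good)

def frailty_flag(r: Resident, threshold: float = 0.6) -> bool:
    """Illustrative rule-based risk flag: NOT the NFIX model."""
    # A crude composite: older age, previous falls and low mobility
    # all push the score towards the flagging threshold.
    score = (
        0.4 * min(r.age / 90, 1.0)
        + 0.3 * min(r.prior_falls / 3, 1.0)
        + 0.3 * (1.0 - r.mobility_score)
    )
    return score >= threshold

residents = [
    Resident("A1", 84, 2, 0.3),
    Resident("B2", 67, 0, 0.8),
]
flagged = [r.resident_id for r in residents if frailty_flag(r)]
print(flagged)  # ['A1']: specific residents, not abstract analytics
```

Even a toy rule like this makes the governance questions tangible: who reviews the threshold, who checks coverage across groups, and who acts on each flag.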
A discipline forged in real services
Datnexa’s approach has been shaped by complex, real-world delivery, where the priority has always been “what changed for residents and staff?”, not the novelty of the model used.
At events such as the Local Government AI Summit and LocalGovCamp North, Datnexa has shared open case studies, from drafting EHCP reports to practical adult social care assistants, to help other councils learn quickly, avoid common pitfalls and focus on safe adoption at pace.
Those experiences reinforce a quiet lesson: robust AI in local government depends less on dramatic technology and more on humility, repeatable patterns, and shared learning between councils, suppliers and residents.
What leaders can do now
For local government leaders reading global AI safety warnings, the response does not have to be paralysis; it can be a shift to governed, impact‑led delivery.
Steps that Datnexa already supports include:
- Start with a tightly scoped, human-centred use case in social care, housing or prevention, with clear success measures for residents and staff.
- Embed governance by design: define safeguards, escalation routes and data protections before build, and use an auditable stack to keep control (a minimal sketch follows this list).
- Invest in workforce confidence: use live tools like Hey Geraldine and prevention dashboards as prompts for training, reflection and continuous improvement, not as black boxes.
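As a concrete illustration of governance by design, the sketch below shows how guardrails might be declared as configuration before anything is built, so they can be checked in code rather than remembered in meetings. Every name and field here is a hypothetical example, not a Datnexa artefact.

```python
# Hypothetical governance-by-design manifest, agreed before any build starts.
GOVERNANCE = {
    "use_case": "adult-social-care-assistant",
    "data_protection": {
        "resident_identifiers_in_prompts": False,  # no personal data sent to the model
        "retention_days": 90,
        "encryption_at_rest": True,
    },
    "safeguards": {
        "human_in_the_loop": True,  # the assistant advises; the practitioner decides
        "blocked_topics": ["medication dosing", "legal determinations"],
    },
    "escalation": {
        "safeguarding_concern": "duty-manager",
        "suspected_model_error": "information-governance-team",
    },
}

def is_permitted(topic: str) -> bool:
    """Check a request against the agreed guardrails before answering."""
    return topic not in GOVERNANCE["safeguards"]["blocked_topics"]

print(is_permitted("telecare options"))   # True
print(is_permitted("medication dosing"))  # False
```

Writing the guardrails down in this form makes them testable, versionable and visible to everyone from developers to the governance board.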
The world may or may not have time to perfect AI safety frameworks at the frontier, but local government does have the time, and the obligation, to insist that every AI system in its services is safe, governed and genuinely useful from day one. Datnexa’s role is to help make that a practical reality in production, not just a promise on paper.