When AI Invents Reality: Why Public Services Need Governance, Not Hype
In January 2026, tourists descended on a remote Tasmanian town searching for hot springs that never existed. An AI-generated blog post had fabricated "Weldborough Hot Springs" complete with descriptions of therapeutic minerals and secluded forest retreats. The local pub owner received five calls daily from confused travellers. The tour operator's response was telling: "Our AI has messed up completely."
The company had outsourced content generation to a third party that used AI with no verification process, no human oversight and no accountability, and nobody noticed until tourists arrived at a non-existent destination.
Public services face precisely this challenge, only with higher stakes. When AI hallucinations affect social care assessments, housing decisions or safeguarding interventions, the consequences are not wasted holidays but broken lives. Yet the governance structures that prevent these failures remain largely absent from public sector AI deployment.

Safety Before Speed
While a UK Government report from January 2025 found that archaic technology was costing the public sector £45 billion in missed annual savings, less than half of UK adults trust the public sector to use AI responsibly. Meanwhile, 63% say AI makes them value human work more, revealing a fundamental anxiety: we are deploying technology faster than we are learning to govern it.
If these issues are not addressed, the human cost will be real. The transformational potential of these tools is clearest in contexts such as social workers making safeguarding decisions, housing officers assessing vulnerability, and commissioners allocating scarce resources. But the tools only work in these environments if AI errors do not compound existing inequalities: systems trained on biased data perpetuate discrimination, black-box algorithms undermine due process, and fully automated decisions become impossible to challenge.
Tasmania Tours had no system to verify AI outputs. Public services cannot afford the same negligence.
Three Non-Negotiables for Responsible AI
First: Transparency with accountability. Citizens have a democratic right to understand how government decisions affecting their lives are made. This means documenting what AI systems do and how they work, and naming who is responsible when outcomes are wrong. The UK government's Algorithmic Transparency Recording Standard exists, but implementation remains inconsistent. Organisations must publish algorithmic impact assessments and establish accessible channels for challenging decisions.
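To make this concrete, here is a minimal sketch of what a published, machine-readable transparency record could look like. The field names and the example system are our own illustrative assumptions, not the schema of the Algorithmic Transparency Recording Standard itself:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AlgorithmicTransparencyRecord:
    """Illustrative transparency record for a deployed AI system.

    Field names are hypothetical, loosely inspired by the UK
    Algorithmic Transparency Recording Standard; consult the
    standard itself for the real schema.
    """
    system_name: str
    purpose: str                      # what the system is for
    decision_role: str                # decision support vs automation
    responsible_owner: str            # the named accountable person
    challenge_channel: str            # how citizens contest outcomes
    training_data_summary: str
    known_limitations: list[str] = field(default_factory=list)

record = AlgorithmicTransparencyRecord(
    system_name="Housing Vulnerability Triage Assistant",
    purpose="Prioritise housing casework for human review",
    decision_role="decision support only; no automated decisions",
    responsible_owner="Head of Housing Services",
    challenge_channel="housing-appeals@example.gov.uk",
    training_data_summary="Anonymised case records, 2019-2024",
    known_limitations=["May under-represent newly arrived residents"],
)

# Publishing the record as JSON makes it auditable and comparable.
print(json.dumps(asdict(record), indent=2))
```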
Second: Human oversight at critical decision points. The EU's GDPR and AI Act already require meaningful human involvement in high-risk decisions. This is not about slowing innovation; it is about preventing catastrophic failures. AI systems optimise for programmed goals without understanding context, ethics or long-term impact. Social care requires professional judgment that balances competing needs and respects dignity. AI can surface information and suggest options; it cannot replace empathy and accountability.
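As a sketch of what that oversight gate might look like in software (the names and risk categories below are hypothetical assumptions, not a reference implementation), the key property is that no high-risk recommendation is ever actioned without a named professional:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    case_id: str
    suggested_action: str
    rationale: str          # the system must always show its reasoning
    risk_level: str         # "low" or "high"

def apply_decision(rec: Recommendation, reviewer: str | None) -> str:
    """Hypothetical gate: high-risk recommendations require a named
    human reviewer before anything is actioned."""
    if rec.risk_level == "high" and reviewer is None:
        # No silent automation: the case is queued for a professional.
        return f"QUEUED for human review: {rec.case_id}"
    approved_by = reviewer or "policy: low-risk auto-approval"
    return (f"ACTIONED {rec.case_id} ({rec.suggested_action}), "
            f"approved by {approved_by}")

rec = Recommendation(
    case_id="SC-1042",
    suggested_action="escalate safeguarding referral",
    rationale="Three missed welfare visits in 14 days",
    risk_level="high",
)

print(apply_decision(rec, reviewer=None))             # queued, never auto-actioned
print(apply_decision(rec, reviewer="J. Okafor, SW"))  # actioned with accountability
```

The design point is accountability, not friction: the gate records who approved what, so every outcome can be traced back to a person.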
Third: Ethical frameworks embedded in architecture, not retrofitted. Rigorous information governance must treat data as a safeguarding concern. Social workers should never share confidential information with public AI chatbots because such systems store and analyse inputs to improve their models. Systems must be designed with privacy by default: encryption, access controls, and architectural boundaries that prevent data leakage.
This discipline protects both citizens and organisations. Data Protection Impact Assessments and Service Agreements must be completed before systems enter testing. When organisations commit to this approach, they build public trust by demonstrating that AI serves people, not the reverse.
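As one small illustration of privacy by default, the sketch below redacts obvious personal identifiers before a prompt ever leaves the organisation. The patterns are assumptions for illustration only; real deployments need architectural controls such as private endpoints, encryption and access controls, not regex alone:

```python
import re

# Illustrative patterns only; a production system would pair this with
# encryption, access controls and a private model endpoint.
REDACTIONS = [
    (re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"), "[NI_NUMBER]"),   # UK NI number shape
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE_OF_BIRTH]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def redact(prompt: str) -> str:
    """Strip obvious personal identifiers before a prompt leaves
    the organisation's boundary."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

raw = "Summarise the case for AB123456C, DOB 04/11/1987, contact jo@example.org"
print(redact(raw))
# -> "Summarise the case for [NI_NUMBER], DOB [DATE_OF_BIRTH], contact [EMAIL]"
```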
Real AI for Real People: Hey Geraldine
Peterborough City Council had a clear problem: Geraldine Jinks, an occupational therapist with 30 years of experience, was overwhelmed with questions from colleagues. Rather than purchasing an off-the-shelf product, Peterborough partnered with Datnexa to build a bespoke AI assistant in six weeks.
Governance came first. Data Protection Impact Assessments and data processing agreements were completed before development began. The system is GDPR-compliant, uses bank-grade encryption and, critically, never uses data to train external models. Its AWS-native architecture keeps information within secure boundaries.
Results: 15 minutes saved per conversation, the referral backlog cleared entirely, more than £250k in cost avoidance in the first year, and a 900% ROI.
Staff adoption was high because the tool was built with them, not imposed from above. Hey Geraldine is now featured on ai.gov.uk as a best practice case study and shared at LocalGovCamp to help other councils learn.
The contrast is stark: Tasmania Tours released AI-generated content with no verification. Peterborough embedded expertise, built governance into architecture, and measured success by what changed for real people.
The Choice: Hallucinations or Accountability
Responsible AI is not slower AI; it is AI that works because it has been designed with governance, tested with users, and deployed with accountability.
Organisations that succeed will:
- Start with problems, not technology
- Involve end users from day one
- Prefer AWS-native services for unified governance
- Measure success by outcomes for residents, not adoption metrics
The Tasmania hot springs will fade from the headlines, but the underlying challenge remains. As public services face intensifying demand and shrinking budgets, AI will become ubiquitous. The question is whether these systems will be governed responsibly or whether they will generate new categories of harm.
At Datnexa, we believe in big ideas and bold technology. But a better tomorrow is not inevitable; it requires deliberate choices about how AI is designed, governed and deployed. It requires prioritising impact over hype and measuring success by what changes for real people in real services.
The hot springs of Weldborough will remain a myth. Public services have the opportunity to ensure AI becomes a trusted tool for better outcomes, not another failed technology initiative that erodes public confidence.