
From Polarisation to Possibility


A recent Financial Times analysis of Cooperative Election Study data showed that social media over‑represents extreme political content, while AI chatbot conversations, when modelled across the US electorate, tend to pull views away from the fringes. In other words, the online spaces that dominated politics for the past decade have amplified anger and outrage, whereas today’s AI assistants often steer people toward more moderate, qualified positions.


For those of us who lived through the rise of hyper‑targeted digital campaigning, this is a striking reversal. Working for the Labour Party in the late 2000s and early 2010s, first as a Regional Organiser and then as Head of Digital, I saw first‑hand how social platforms rewarded the sharpest edges of political messaging. The incentives were simple: stronger emotions meant more clicks, more shares and, ultimately, more influence.



A political organiser’s view of AI


My own political grounding came not in a think‑tank or a lab, but on the doorstep. As an organiser for a Labour MP, I spent years listening to people who did not fit neat ideological boxes: a trade‑unionist worried about immigration, a small business owner committed to public services, a lifelong Conservative voter quietly impressed by a Labour candidate’s local record. That experience taught me that most citizens are more nuanced - and more open to persuasion - than their social media feeds suggest.


Later, as Head of Digital for the Labour Party, my job was to translate those real‑world conversations into national‑scale digital strategies. We were early adopters of social media campaigning, using data to target messages, build communities and respond in real time to events. We also saw the dark side: the way outrage could drown out deliberation, and how easy it was for debate to collapse into slogans.


The FT’s chart captures this distortion clearly, contrasting the bell‑curve of public opinion with the spikier profile of political content online. It chimes with what many campaigners instinctively know: digital channels have been brilliant at mobilising the already‑engaged, but far less effective at supporting thoughtful, good‑faith discussion.


Why AI might be different


The same analysis suggests that AI chatbots behave differently. When researchers used simulated voter personas to ask models about policy issues, the answers tended to moderate extremes rather than amplify them, with the “peaks” of left and right gently pulled back towards the centre. That does not mean AI is neutral or infallible - the models still reflect the data and design choices behind them - but it does hint at a technology whose default mode is to explain, qualify and contextualise rather than to inflame.


From a political strategist’s perspective, this matters. In my Labour roles, a huge amount of effort went into equipping activists and candidates with accurate, consistent information: briefing packs, Q&As, rebuttal lines, policy explainers. Generative AI, used well, can be an always‑on briefing room for every citizen, not just for party insiders. It can help people explore trade‑offs, test arguments and understand consequences in a way that social media threads rarely allow.


At Datnexa, we see this every day in the public sector work we do - from helping organisations make sense of complex data, to designing AI‑powered tools that support better decisions. The same disciplines that keep a political campaign honest - grounding claims in evidence, rigorously testing messages, understanding how different groups will interpret information - are essential when deploying AI into any democratic context.


Guardrails, not gatekeepers


However, we should resist the temptation to treat AI as a technocratic fix for political polarisation. The FT analysis reminds us that models can nudge people towards moderation, but it also shows that they are shaped by the objectives and guardrails set by their creators. If those guardrails are opaque, partisan or driven purely by commercial incentives, we risk replacing one set of distortions with another.


My time in the Labour movement has left me with three principles that should guide how we use AI in politics and public life:

  • Transparency about how systems work, what data they use and where their limits lie.

  • Pluralism in who designs, governs and audits them - including voices from civil society, academia and across the political spectrum.

  • A relentless focus on the lived experience of citizens, not just abstract metrics.

These are political choices, not technical ones. They echo the debates Labour and other progressive parties have been having for years about media concentration, platform regulation and democratic accountability.


A constructive role for AI in democracy


If we get those choices right, AI can become part of a healthier information ecosystem rather than a new source of division. Imagine civic chatbots embedded in local authority websites, helping residents navigate services while also explaining how decisions are made and how to influence them. Picture election‑time tools that allow voters to explore party programmes in depth, compare trade‑offs and see how policies affect people like them - without the emotional manipulation baked into much social media advertising.


The FT graphic should not lull us into complacency, but it does offer grounds for cautious optimism. Having spent a decade in Labour politics and now leading Datnexa’s work at the intersection of data, AI and public service, I believe we have a window to build digital tools that serve democratic conversation rather than undermine it. That will only happen if we bring political experience, ethical instincts and technical expertise into the same room - and if we treat citizens not as targets to be nudged, but as partners in shaping the future of AI.



© 2025 by Datnexa Ltd 
