Navigating AI's Ethical Frontier: Reflections from My Digital Catapult Mentoring Experience
By Adam Dustagheer
Aug 11
As Managing Director of Datnexa, I recently had the privilege of participating in Digital Catapult's responsible AI mentoring sessions as part of the Innovate UK BridgeAI programme. These sessions, designed to support startups in the construction sector, provided a fascinating glimpse into how we can collectively build more ethical and resilient AI solutions whilst addressing some of the most pressing challenges facing UK industry.

The BridgeAI Mission: Bridging Innovation and Implementation
The BridgeAI programme represents a significant £100 million investment in the UK's AI future, targeting sectors with high growth potential but currently low AI maturity: agriculture, the creative industries, transport and construction. Having spent over two decades working with complex data challenges across government regulation, healthcare and professional services, I was particularly drawn to how this programme seeks to bridge the gap between AI developers and industry adopters.
What struck me most about the initiative was its emphasis on responsible AI adoption. At Datnexa, we've seen first-hand how transformative AI can be when implemented thoughtfully - our National Frailty Index (NFIX), for example, analyses millions of data points to predict falls among the elderly, and exemplifies how AI can genuinely improve lives whilst maintaining ethical standards. The BridgeAI programme's commitment to fostering ethical AI practices aligns perfectly with our philosophy of using technology for genuine social good.
The Construction Sector Challenge
The construction industry accelerator is a 14-week programme supporting UK-based startups, scaleups and SMEs in developing ethical AI and machine learning solutions. The sector faces numerous challenges - from design and compliance issues to the integration of diverse stakeholders across the architecture, engineering and construction (AEC) industry. Collaborating with industry leaders such as Foster + Partners, Versarien and Buro Happold, the programme tackles real-world problems including the 'Golden Thread' challenge of streamlining data entry and improving collaboration, as well as 3D construction printing innovations.
Having worked extensively with data integration challenges in previous roles, including my time at the Tony Blair Institute for Global Change and various healthcare organisations, I recognised many parallels between construction's data silos and the fragmented information systems I've encountered in other sectors. The construction industry's digital transformation mirrors the broader challenge of making AI work effectively in complex, multi-stakeholder environments.
The Art of Responsible Mentoring
The mentoring sessions themselves were structured around two key activities that I found particularly valuable. The first, aptly named 'What could go wrong?', involves a 60-minute deep dive where mentors provide startups with a concrete understanding of the specific risks associated with their technology and proposed business plans. This session begins with a 10-minute company overview, followed by targeted questions about their AI systems, and culminates in 40 minutes of theoretical 'red-teaming' where mentors explore potential misuse and adversarial activities.
As someone who regularly navigates the complexities of implementing AI in sensitive sectors like healthcare and public services, I found this structured approach to risk identification incredibly valuable. During these sessions, I drew upon my experience with data privacy challenges from my time at the Royal College of Obstetricians and Gynaecologists, as well as insights from developing Datnexa's own AI solutions for local government applications. The startup founders were remarkably receptive to discussing potential pitfalls, demonstrating a maturity that bodes well for the responsible development of AI in construction.
From Risk to Resilience
The second component, 'From risk to resilience', involved hour-long informal discussions focused on transforming identified risks into actionable management strategies. These sessions proved equally enlightening, as they moved beyond theoretical concerns to practical implementation strategies. The collaborative nature of these discussions reminded me of the iterative approach we take at Datnexa when developing solutions like the NFIX, constantly refining our models based on real-world feedback and emerging ethical considerations.
What particularly impressed me was how these sessions created space for startups to discuss their concerns in a non-judgemental environment whilst receiving feedback on existing mitigation strategies. The mentors' diverse perspectives encouraged innovative thinking about both product development and business planning, extending the impact well beyond the accelerator programme itself. This aligns with my belief that responsible AI development requires ongoing dialogue between technologists, industry experts, and ethicists.
The Broader Implications
These mentoring sessions highlighted several crucial aspects of responsible AI development that extend far beyond the construction sector. First, the importance of creating safe spaces for discussing AI risks cannot be overstated. Too often, organisations rush to implement AI solutions without adequately considering potential negative consequences. The structured approach taken by Digital Catapult provides a valuable framework for any organisation looking to adopt AI responsibly.
Second, the collaborative model demonstrated in these sessions - bringing together experienced mentors with innovative startups - creates a powerful knowledge transfer mechanism. My own background spanning political campaigns, healthcare digitalisation and now public health AI, gives me a unique perspective on how technology adoption patterns vary across sectors. Sharing these insights with construction-focused startups helps them anticipate challenges they might not otherwise consider.
Looking Forward
The BridgeAI programme's emphasis on three core areas - product and market readiness; responsible and ethical AI; and technology advancement - provides a comprehensive framework for sustainable AI adoption. As I reflected on my mentoring experience, I was struck by how this approach mirrors our own methodology at Datnexa, where we balance technical innovation with ethical considerations and practical implementation challenges.
The programme's focus on fostering strong ethical foundations whilst building effective risk mitigation strategies resonates deeply with our work in local government AI applications. Whether we're helping councils implement AI for social care interventions or supporting public health initiatives, the principles explored in these mentoring sessions - transparency, accountability and continuous risk assessment - remain consistently relevant.
The Human Element in AI Development
Perhaps most importantly, these sessions reinforced my conviction that successful AI adoption requires more than just technical excellence. My past and current work has demonstrated that the most promising AI solutions emerge when technical capability is combined with deep sector knowledge, ethical awareness, and genuine commitment to addressing real-world problems. This human-centred approach to AI development will be crucial as we continue to navigate the evolving landscape of artificial intelligence across all sectors of the UK economy.
The mentoring experience and our continuing work with Digital Catapult have left me optimistic about the future of responsible AI in the UK. Programmes like BridgeAI, with their emphasis on ethical development and cross-sector collaboration, are laying the groundwork for an AI-enabled economy that genuinely serves society's interests whilst driving innovation and growth. As we continue to develop solutions at Datnexa, I look forward to applying the insights gained from these sessions to our own work in transforming public services through responsible AI implementation.