Getting AI-ready: Reflections from our Manchester community event

Apr 24, 2026
A look back at our regional community event, which brought investigation and intelligence professionals together to share practical perspectives on AI readiness, trust and responsible adoption.

This week, we hosted a focused community event in Manchester exploring what it takes to be genuinely AI-ready in intelligence and investigation work.

The day brought together 45 investigation and intelligence professionals from 20 organisations, joining from across the region and beyond. Rather than a large conference setting, the event was intentionally small and interactive, creating space for practitioners working in complex, high-risk environments, including fraud, regulation, enforcement and safeguarding, to share openly.

Despite differences in remit and maturity, the challenges discussed were consistent: increasing volumes of information, heightened expectations around accountability and transparency, and growing pressure to adopt AI in ways that are effective, ethical and defensible. 

From AI capability to AI readiness

Across the day, the conversation stayed grounded in practice. Rather than treating AI as a standalone capability or a shortcut to efficiency, sessions focused on the foundations required for responsible adoption. These included data quality and integrity, governance and oversight, and the organisational changes needed to embed AI safely into investigative work. 

A recurring theme was the importance of keeping humans at the centre, with AI designed to augment professional judgement rather than replace it, particularly in environments where decisions can carry significant real-world consequences.

A problem-first perspective from government

A guest session from Rob Malcomson MBE, Deputy Director of Data Analytics and AI at the Public Sector Fraud Authority, reinforced the importance of starting with the problem, not the technology. 

Rob spoke to the wider government strategy for tackling fraud through connected, federated data rather than siloed systems, highlighting how fraud often exploits the gaps between organisations. He also explored the unique realities of applying AI in counter-fraud settings, where deception must be assumed, evidential standards are high, and opaque or “black box” models are incompatible with due process. His reflections on human oversight, and on the legal and ethical necessity of keeping people accountable for decisions, resonated strongly with attendees.

How Clue is approaching AI

We also shared how Clue is developing AI in this context. Our focus is on applied, assistive AI embedded within established intelligence and investigation workflows, supporting tasks such as triage, linking, monitoring and summarisation to reduce cognitive load and help insight surface earlier. 

AI in Clue operates within existing case structures, evidential standards and audit trails, ensuring that decisions, judgement and accountability remain firmly with people. Capabilities are transparent, optional and configurable, giving organisations control over how and when AI is applied, and allowing adoption at a pace aligned to their own governance and risk appetite. 

Learning from real-world experience

Customer contributions played a central role in grounding the discussions in operational reality. Hearing directly from teams already navigating assurance, policy alignment and organisational change helped surface both progress and friction points. Participants were open about what has delivered value, what has required iteration, and where new risks need to be actively managed as AI capabilities mature. 

The roundtable sessions created space for candid, peer-to-peer discussion on AI readiness and guardrails. Organisations compared approaches to governance, shared lessons on data preparedness, and discussed how to balance innovation with scrutiny. A consistent takeaway was that AI readiness is not a one-off milestone, but an ongoing organisational discipline that extends beyond technology alone.

Reflecting on the discussions, our Chief Customer Officer, Antonia James, highlighted the emphasis on trust, culture and shared learning:

“Trust emerged as a defining theme throughout the discussion – something that takes time to build and can be lost very quickly, particularly when AI is introduced into high-stakes decision-making.

“There was a clear consensus that trusted AI adoption is a shared responsibility: led from the top, grounded in strong governance and data foundations, and built around people rather than technology alone. What stood out most was the openness in the room, with organisations candidly acknowledging common challenges, learning from those further along their AI journey, and choosing collaboration over reinventing the wheel.” 

Looking beyond AI

Later sessions broadened the focus beyond AI, touching on how teams are measuring impact, improving coordination and getting more value from existing investigative workflows. These conversations reinforced that AI sits alongside wider efforts to strengthen consistency, resilience and decisionmaking, not as a replacement for them. 

The event also underlined the value of bringing the Clue community together in smaller, regional settings. These forums enable deeper discussion, more open exchange and shared learning across organisations facing similar pressures, particularly for teams earlier in their journey who benefit from learning directly from peers. 

“Following feedback from our annual conference, it was clear there was strong appetite for more regional events that allow for detailed, interactive discussion,” said our Chief Strategy Officer, Thomas Drohan.

“Manchester showed the value of that approach – spending time with customers talking candidly about AI capabilities, organisational readiness and the realities of adoption. It’s always a privilege to be in the room with people doing such important work, and to better understand how we can support them.” 

Thank you to everyone who contributed so openly and thoughtfully. The insights shared in Manchester will continue to shape our work as we support the safe, effective and accountable use of AI in intelligence and investigations.

Book a demo

Find out how Clue can help your organisation.