We’re pleased to share the outcomes of our inaugural FAIR Roundtable, held in London this April, with the publication of the FAIR London Report — now available to read and download.
The event brought together a diverse group of professionals from financial services, blockchain, law, and regulatory backgrounds for a focused, closed-door conversation on a topic many are grappling with right now:
How can we responsibly integrate AI into crypto compliance?
As a company deeply involved in blockchain intelligence, we hear this question often. There’s strong interest in what AI can do—from automating onboarding to detecting fraud more effectively—but also real uncertainty about how far it should go, what regulators expect, and how to balance innovation with oversight.
That’s exactly why we created FAIR — short for Finance, AI, and Regulation. The goal is simple: create space for open, practical discussions on AI’s real impact in compliance, free from hype and with input from across the industry.
Introducing FAIR: A Space for Practical Dialogue
The FAIR Roundtable is designed as a collaborative forum—not a stage. It brings together a mix of perspectives to examine real challenges in using AI within regulatory and compliance frameworks. Rather than offering product pitches or keynote speeches, it focuses on shared experience and pragmatic problem-solving.
Key Insights from the London Roundtable
The FAIR London Report captures some of the key challenges and opportunities raised during the session. Not speculation—practical experience from teams actively navigating this space.
- AI is already in use — just not always openly.
Firms described how they’re using AI to parse bank statements, generate draft suspicious activity reports (SARs), and identify transaction anomalies. But for many, these tools remain in the background, largely due to regulatory uncertainty.
- Regulatory uncertainty is a bottleneck.
Participants spoke candidly about the risks of disclosing AI usage to regulators. One participant summed it up: “The regulator wants evidence, not insight.”
- Governance remains essential.
AI is largely being used in support roles, with humans retaining decision-making authority. As one attendee put it: “AI should raise the hand, not press the button.”
- Compliance teams are evolving.
We heard examples of firms embedding data scientists, prompt engineers, and model validators into compliance structures—expanding the function rather than reducing it.
The report also includes a “Compliance Team 2030” skills map and a checklist of five “sanity checks” to consider when implementing AI in compliance environments.
Read the FAIR London Report.
Join the next FAIR Roundtable in Amsterdam
The next roundtable in the series will take place in Amsterdam on 2 June 2025. Like the London session, this will be a closed-door, peer-led discussion designed to dig deeper into pressing topics, including:
- Model validation and explainability
- Cross-border regulatory expectations
- Practical approaches to KYC/AML automation
- Risk governance in the age of AI
Request an invite to FAIR Amsterdam.
If you’re working at the intersection of compliance, regulation, and AI — or advising those who are — we’d love to hear from you.
Read the Report, Share Your Voice
We’re grateful to everyone who contributed in London, including professionals from HSBC, EY, Deutsche Bank, Bitpanda, Grant Thornton, Sumsub, and others.
We hope the FAIR London Report offers useful perspective and helps move the conversation forward in a way that’s grounded, open, and collaborative.