Thought Leadership | August 12, 2025

Preparing for AI and compliance in the EU: key takeaways from the FAIR Roundtable in Riga

by Marina Khaustova

Crypto firms across Europe are facing a new regulatory reality. With the final EU AI Act now published and the countdown to enforcement underway, compliance teams must move fast.

Earlier this summer, Crystal Intelligence hosted a FAIR (Finance, AI & Regulation) Roundtable in Riga, bringing together crypto exchanges, compliance officers, and EU regulators for an off-the-record discussion. The goal: to clarify how the new rules will affect AI adoption in compliance, and what firms must do before the first enforcement letters go out.

Here’s what we learned from the room, and why the next 18 months will be critical.

The EU AI Act changes everything for crypto compliance

The EU AI Act, finalized in 2024, classifies many machine-learning (ML) systems used in compliance as “high-risk”. This includes tools that support customer onboarding, transaction monitoring, and sanctions screening.

That label isn’t just symbolic. It comes with hard obligations for explainability, human oversight, model governance, and detailed documentation.

Crypto firms now have until mid-2026 to ensure they comply, though many in the industry are still figuring out what that means in practice.

In Riga, six of eight exchanges said they plan to fully deploy live ML systems before December 2025. Their reason? Manual reviews can’t scale to meet the 48% year-on-year growth in alerts.

But this progress comes with risk: unless the right safeguards are in place, firms could find themselves out of step with regulators just as inspections begin.

Human override isn’t optional—it’s required

One of the clearest outcomes of the roundtable was the need for human override mechanisms in any system using AI for compliance decision-making.

Every firm agreed: no machine should freeze funds or close an account without a human in the loop. As one regulator put it, “If there’s no clear stop condition, it’s not compliant.”

The key takeaway: every ML model in use should be mapped, documented, and fitted with a manual override that can be triggered without rewriting code. That override must be tested, recorded, and auditable.
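For illustration, here is a minimal sketch of what such a gate might look like, assuming a hypothetical alert-handling service. The flag file, model names, and routing logic are placeholders, not details from any specific vendor or from the AI Act itself.

```python
# A minimal sketch of a manual-override gate. The flag file is edited by compliance
# staff (e.g. via an admin screen), so switching a model to manual review never
# requires a code change or redeployment. All names here are illustrative.
import json
import logging
from pathlib import Path

logging.basicConfig(level=logging.INFO)

OVERRIDE_FLAGS = Path("override_flags.json")  # maintained by compliance, not engineering

def automation_enabled(model_name: str) -> bool:
    """Return False when an analyst has switched this model to manual-review mode."""
    if not OVERRIDE_FLAGS.exists():
        return True
    flags = json.loads(OVERRIDE_FLAGS.read_text())
    return flags.get(model_name, {}).get("automation_enabled", True)

def handle_alert(model_name: str, alert_id: str) -> str:
    """Route an alert, logging the decision so the override trail stays auditable."""
    if not automation_enabled(model_name):
        logging.info("override active for %s; alert %s sent to an analyst", model_name, alert_id)
        return "manual_review"
    logging.info("automated triage for %s; alert %s", model_name, alert_id)
    return "auto_triage"

# Example: handle_alert("tx-monitoring-v3", "ALRT-1042")
```

The point is simply that the switch lives in configuration an analyst can flip, and every routing decision leaves a log entry.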

Data-sharing blind spots are enabling crime

A standout case discussed in Riga involved a ransomware payout that was laundered across three Baltic exchanges within 90 minutes. Each firm only saw part of the flow. Individually, the activity looked low risk. Combined, it would have triggered red flags immediately.

The problem: current privacy rules and commercial concerns prevent real-time data sharing. Firms are flying blind.

The proposed solution is promising: pilot a federated learning system in which encrypted pattern data is shared without ever exposing customer information. But before that happens, firms must prove (and document) that these systems can’t leak sensitive data.
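To make the idea concrete, here is a minimal sketch of one common approach, federated averaging (FedAvg): each exchange trains a shared risk model on its own data, and only parameter updates ever leave the firm. The logistic-regression model, shapes, and names are illustrative assumptions, not details of the Riga proposal.

```python
# A minimal federated-averaging sketch: each exchange refines a shared logistic-regression
# risk model locally, and the coordinator only ever sees parameter vectors, never raw
# transaction or customer records.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One exchange refines the shared weights on its own labelled alerts."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid risk scores
        grad = X.T @ (preds - y) / len(y)      # gradient of the log-loss
        w -= lr * grad
    return w

def federated_round(global_weights, local_datasets):
    """Average locally trained weights; raw data never leaves any single firm."""
    updates = [local_update(global_weights, X, y) for X, y in local_datasets]
    return np.mean(updates, axis=0)

# Example: three exchanges, each holding its own (features, labels) sample.
rng = np.random.default_rng(0)
datasets = [(rng.normal(size=(200, 4)), rng.integers(0, 2, size=200)) for _ in range(3)]
weights = np.zeros(4)
for _ in range(10):
    weights = federated_round(weights, datasets)
```

In practice such a scheme would also need secure aggregation or encryption of the updates themselves, which is exactly the “prove it can’t leak” work the roundtable called for.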

Most firms lack model governance—regulators won’t accept that

Only two firms at the roundtable had formal, documented model governance policies aligned with Articles 9–15 of the AI Act.

That’s a problem. Regulators expect firms to build Model Governance Committees, perform regular validations, and maintain internal registries tracking version history, changes, and test results. Vendor-led updates aren’t enough.
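A registry does not need heavy tooling to start. Here is a minimal sketch of an append-only registry entry; the field names are illustrative assumptions, not terminology from the AI Act.

```python
# A minimal sketch of an internal model registry, assuming an append-only JSON-lines
# file. Fields are illustrative: what changed, who validated it, and with what result.
import datetime
import json
from dataclasses import asdict, dataclass, field

@dataclass
class ModelRecord:
    name: str                # e.g. "sanctions-screening-scorer"
    version: str             # version of the deployed model
    change_summary: str      # what changed in this release and why
    validated_by: str        # analyst or committee who signed off
    validation_result: str   # e.g. "passed back-testing on the Q2 alert sample"
    recorded_at: str = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc).isoformat()
    )

def register(record: ModelRecord, path: str = "model_registry.jsonl") -> None:
    """Append the record so version history and test results stay auditable."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")
```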

With enforcement beginning in 2026, firms that don’t act now risk penalties—or worse, delays in licensing or renewals.

Compliance agents show promise, but must be controlled

Some firms are now testing “agent” systems – automated tools that coordinate multiple ML models to handle compliance workflows.

The results are promising: analysts reported workload reductions of up to 65%. But the trials also exposed risk: in one, an agent recommended account closures on its own, without human approval.

To stay compliant, agent systems must be bound by strict rules, logs, and human checkpoints. No AI should ever act beyond its scope.
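As an illustration, here is a minimal sketch of scope limits and a human checkpoint for such an agent; the action names and whitelist are hypothetical.

```python
# A minimal sketch of scope enforcement and a human checkpoint for a compliance "agent".
# Out-of-scope actions are blocked; irreversible actions wait for explicit approval.
import logging
from typing import Optional

logging.basicConfig(level=logging.INFO)

ALLOWED_AUTONOMOUS = {"request_documents", "escalate_to_analyst", "add_to_watchlist"}
REQUIRES_HUMAN = {"close_account", "freeze_funds"}

def execute_agent_action(action: str, case_id: str, approved_by: Optional[str] = None) -> bool:
    """Run an agent-proposed action only if it is in scope and, where required, approved."""
    if action not in ALLOWED_AUTONOMOUS | REQUIRES_HUMAN:
        logging.warning("case %s: %s is out of scope and was blocked", case_id, action)
        return False
    if action in REQUIRES_HUMAN and approved_by is None:
        logging.info("case %s: %s queued for human approval", case_id, action)
        return False
    logging.info("case %s: executing %s (approved_by=%s)", case_id, action, approved_by)
    return True

# Example: the agent may request documents on its own, but closing an account waits.
execute_agent_action("request_documents", "CASE-001")
execute_agent_action("close_account", "CASE-001")                            # queued
execute_agent_action("close_account", "CASE-001", approved_by="analyst_7")   # runs
```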

Talent is the make-or-break issue

While AI tools are advancing quickly, the biggest constraint isn’t technology—it’s people.

Most exchanges at the roundtable employ at most one ML specialist focused on compliance. That’s not enough to build, test, and govern these systems at scale.

Hiring is hard, and costs are rising—up 27% year-on-year for qualified AI compliance specialists. Some firms are responding creatively: pairing contractors with in-house experts, rotating teams, and sponsoring short university courses to build in-house capabilities.

The message is clear: technology won’t save you unless you have the team to run it.

What crypto firms must do now

If your firm is operating in the EU or serves EU customers, the AI Act applies to you. Here’s what to prioritize in the next six months:

  • Map and document all AI/ML systems, and add a manual override
  • Set up a Model Governance Committee and begin internal validations
  • Test stop conditions and explainability in every model workflow
  • Explore federated learning to improve risk visibility without breaching privacy rules
  • Invest in upskilling your team; AI literacy is no longer optional

Final thoughts

As AI becomes more embedded in the compliance stack, trust from regulators, customers, and internal teams becomes the defining metric.

The Riga roundtable showed that crypto firms are not just aware of the challenge—they’re acting. But the clock is ticking.

Crystal’s FAIR Series will continue in Dubai later this year. We invite all firms navigating this shift to join the conversation—and prepare together for what’s next.

Want the full insights from Riga? Download the FAIR Roundtable Report
