Solution Atlas
Everyday · User story · Consultative playbook

EU AI Act compliance with multiple AI workloads and no central register

The compliance team has been told to produce an AI governance baseline within 12 months. AI workloads are scattered across product teams with no central register, risk classification, or model attestation. The EU AI Act is landing, and the regulator will ask for the register first.

Trigger
EU AI Act timeline; compliance baseline required.
Good outcome
AI workload register live, risk classifications applied, Purview lineage for AI training data, model attestation cadence operational.
Diagnostic discovery

Signals this story fits

Observable cues that confirm the conversation belongs here.

  • Compliance team asked to produce an AI governance baseline within a fixed timeframe
  • EU AI Act timeline approaching with no defensible posture
  • AI workloads scattered across product teams with no central register
  • No risk classification applied to AI use cases
  • Model attestation not produced; Legal cannot answer regulator questions

Questions to ask

Open-ended, SPIN-style — each one has a reason it matters.

  1. How many AI workloads exist across product teams today, and how do you know?

    Why: Sizes the register. Most customers underestimate by a factor of two.

    Listen for: “I think half a dozen” · “every product team has one” · “we have not counted”

  2. Who in Compliance owns AI governance today?

    Why: Often “nobody yet” — the engagement starts there.

  3. What does the regulator expect specifically — register, risk classification, model cards, attestation, or all of them?

    Why: Sharpens the deliverable. Different regulators emphasise different artefacts.

  4. Where is the boundary between AI governance (Compliance) and MLOps (AI engineering)?

    Why: Critical distinction. AI governance is org-level; MLOps is per-model lifecycle.

  5. What integration exists with current risk management — operational risk, vendor risk, model risk?

    Why: AI risk fits inside existing risk frameworks; this question surfaces the integration path.

  6. Has Legal weighed in on the EU AI Act risk classification per AI workload?

    Why: Legal classification drives the controls tier. The cost of getting this wrong is the implication to anchor on.

Baseline → target architecture

TOGAF-style gap framing — what we typically see today, and what the proposed end state looks like. The gap between them is the engagement.

Baseline architecture

AI workloads scattered across product teams with no central register. No risk classification per workload. Model attestation absent. Compliance reactive — produces governance artefacts in response to regulator questions rather than continuously. AI workload posture managed by AI engineering, not by Compliance.

Typical concerns

  • No defensible answer to "what AI workloads do we have?"
  • EU AI Act risk classification not applied
  • Model cards and attestation absent
  • Lineage from training data to deployed model invisible
  • Compliance has no continuous evidence of AI governance posture

Capability gaps

  • Central AI workload register
  • Risk classification per workload (EU AI Act tiers)
  • Purview lineage on AI training and grounding data
  • Model attestation cadence
  • Cross-functional cadence (Compliance + AI engineering + Legal)

Target architecture

Central AI workload register maintained by Compliance with input from AI engineering. Risk classification applied per workload against the EU AI Act tiers (prohibited / high-risk / limited-risk / minimal-risk). Purview classifies AI training and grounding data with lineage end-to-end. Model attestation cadence operational with quarterly review. Defender for Cloud AI workload coverage provides continuous posture evidence. Cross-functional cadence with Compliance + AI engineering + Legal.
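
As a concrete sketch of what one register row might need to hold, the illustrative Python below ties the EU AI Act tier, the Purview lineage reference, and the attestation cadence to a single workload. Field names are assumptions for illustration, not a Compliance Manager or Purview schema.

```python
# Illustrative register entry — field names are assumptions,
# not a Compliance Manager or Purview schema.
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

class EUAIActTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"

@dataclass
class AIWorkloadEntry:
    """One row in the central AI workload register."""
    workload_id: str                    # stable identifier, e.g. "wl-0042"
    name: str                           # human-readable workload name
    owning_team: str                    # product team accountable for the model
    eu_ai_act_tier: EUAIActTier         # applied with Legal; drives the controls tier
    purview_asset_guid: Optional[str]   # link into Purview lineage, once classified
    last_attestation: Optional[date]    # most recent model attestation
    attestation_cadence_days: int = 90  # quarterly review per the target state
```

Whichever register location wins Decision 1 below, these are the fields the regulator conversation keeps returning to: identity, owner, tier, lineage, attestation.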

Key capabilities

  • AI workload register
  • Risk classification per EU AI Act tiers
  • Lineage on AI training and grounding data
  • Model attestation cadence
  • Continuous AI posture evidence

Enabling SKUs

Resolved in the ‘Recommended cards’ section below.

Architecture decisions

Each decision is offered as explicit options with trade-offs — Hohpe's “selling options” principle. A safe default is noted where one exists.

  1. Decision 1: Register location — Compliance Manager vs Purview vs custom register

    Compliance Manager

    When it fits: Existing Compliance Manager investment; control mapping needed alongside the register.

    Trade-offs: Less granular per-model metadata than purpose-built tools.

    Purview Insights

    When it fits: Data governance estate already on Purview; a lineage-led register makes sense.

    Trade-offs: The Insights tier carries consumption cost.

    Custom register (spreadsheet, ServiceNow, custom app)

    When it fits: Specific workflow requirements; mature GRC tooling already in place.

    Trade-offs: Integration work; doesn't inherit Purview lineage automatically.

    Default recommendation: Compliance Manager for the first 12 months; integrate with Purview lineage. Custom only where GRC tooling is mature and central.

  2. Decision 2: Risk classification framework — EU AI Act direct vs NIST AI RMF vs ISO 42001

    EU AI Act direct

    When it fits: EU exposure is the driver; EU AI Act tiers must be applied.

    Trade-offs: EU-centric; less useful for non-EU operations.

    NIST AI RMF

    When it fits: US-centric or multi-jurisdictional; a comprehensive risk framework.

    Trade-offs: Less directly mapped to EU regulator expectations.

    ISO 42001

    When it fits: Certification-oriented org; ISO-based GRC posture.

    Trade-offs: Audit cost; a newer framework with less tooling support.

    Default recommendation: EU AI Act direct if EU exposure exists; NIST AI RMF as the underlying framework; ISO 42001 for certification on top.

  3. Decision 3: Ownership model — Compliance-led vs AI-engineering-led vs joint

    Compliance-led

    When it fits: Strong Compliance function; AI engineering operates within governance guardrails.

    Trade-offs: Risk of Compliance becoming a bottleneck.

    AI-engineering-led with Compliance sign-off

    When it fits: Mature AI engineering org; Compliance takes a lighter touch.

    Trade-offs: Risk of governance posture lagging the AI estate.

    Joint with explicit hand-off

    When it fits: Both functions are mature; explicit RACI defined.

    Trade-offs: Coordination overhead.

    Default recommendation: Joint with explicit hand-off — Compliance owns the register and risk classification; AI engineering owns the lifecycle and attestation artefacts. A sketch of that split follows below.
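
One way to make the hand-off concrete is a small RACI kept next to the register. The sketch below mirrors the joint-ownership default above; the assignments are illustrative and should come out of the workshop, not this page.

```python
# Illustrative RACI for the joint-ownership default. Assignments mirror the
# recommendation above (Compliance owns register + classification; AI
# engineering owns lifecycle + attestation artefacts). "A" = accountable,
# "R" = responsible, "C" = consulted, "I" = informed. Adjust per workshop.
RACI = {
    "ai_workload_register":  {"Compliance": "A/R", "AI engineering": "C", "Legal": "I"},
    "risk_classification":   {"Compliance": "A/R", "Legal": "C", "AI engineering": "C"},
    "model_lifecycle":       {"AI engineering": "A/R", "Compliance": "I"},
    "attestation_artefacts": {"AI engineering": "A/R", "Compliance": "C", "Legal": "I"},
}
```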

Low-risk trial — proof of value

60-day AI governance baseline + register first 10 workloads

8 weeks

AI workload discovery across product teams. First 10 workloads registered with risk classification per EU AI Act tier. Purview lineage live for AI training and grounding data on the registered workloads. Model attestation produced for one workload as the template. Defender for Cloud AI workload coverage enabled. Cross-functional cadence established with Compliance + AI engineering + Legal.
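
Where Purview carries the lineage, a spot check along these lines can verify "lineage live" for each registered workload during the trial. This is a sketch against Purview's Atlas-compatible lineage endpoint, not the full trial harness; the account name and asset GUID are placeholders, and data-reader rights on the Purview account are assumed.

```python
# Lineage spot check against Purview's Atlas-compatible REST API — a sketch.
# Assumes `azure-identity` and `requests` are installed. Account name and
# asset GUID below are placeholders taken from the register entry.
import requests
from azure.identity import DefaultAzureCredential

PURVIEW_ACCOUNT = "contoso-purview"                   # placeholder
ASSET_GUID = "00000000-0000-0000-0000-000000000000"   # from the register entry

token = DefaultAzureCredential().get_token("https://purview.azure.net/.default")
resp = requests.get(
    f"https://{PURVIEW_ACCOUNT}.purview.azure.com/catalog/api/atlas/v2/lineage/{ASSET_GUID}",
    params={"direction": "BOTH", "depth": 3},
    headers={"Authorization": f"Bearer {token.token}"},
    timeout=30,
)
resp.raise_for_status()
relations = resp.json().get("relations", [])
# "Lineage live" for the trial means upstream and downstream relations exist.
print(f"{ASSET_GUID}: {len(relations)} lineage relations")
```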

Success criteria

  • 10 AI workloads in the register with risk classification applied
  • Purview lineage live for at least three workloads end-to-end
  • One model attestation produced as the reusable template
  • Cross-functional cadence operating with named participants

Investment: Purview consumption + Foundry posture + Defender CSPM. Estimated ~€4–8k/month for the trial scope. Existing AI workloads continue operating while the register is built.

Proof metrics

  • AI workload register coverage above 80% of known workloads (see the sketch below)
  • Risk classification applied per EU AI Act tier
  • Lineage end-to-end demonstrable for top-tier workloads
  • Audit-defensible AI governance posture for regulator engagement
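
Two of these metrics fall straight out of the register itself. A minimal sketch, assuming the illustrative AIWorkloadEntry shape from the target-architecture section above:

```python
# Proof-metric helpers — assumes the illustrative AIWorkloadEntry
# sketch from the target-architecture section.
from datetime import date

def register_coverage(registered: set[str], known: set[str]) -> float:
    """Proof metric: share of known AI workloads present in the register."""
    return len(registered & known) / len(known) if known else 0.0

def attestation_overdue(entry, today: date | None = None) -> bool:
    """True if a workload's last attestation is outside its cadence window."""
    today = today or date.today()
    if entry.last_attestation is None:
        return True  # never attested counts as overdue
    return (today - entry.last_attestation).days > entry.attestation_cadence_days

# Trial target: coverage above 0.8 and no overdue attestations
# among the first 10 registered workloads.
```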

Recommended cards

The SKUs and capabilities most likely to be part of the solution, with the editorial rationale for each in the context of this story. Add the ones that fit your situation.
