Solution Atlas
Everyday · User story · Consultative playbook

EU AI Act lands in 18 months and we have 30 AI projects with no governance

The compliance team has flagged that the EU AI Act will require model attestation, risk classification, and lineage for any AI workload touching EU citizens. Today the AI projects are scattered across product teams with no central register and no governance baseline.

Trigger
EU AI Act timeline; regulator interest in AI workload posture.
Good outcome
AI workload register live, risk classifications applied, Purview lineage for AI training and grounding data, model attestation cadence operational.
Diagnostic discovery

Signals this story fits

Observable cues that confirm the conversation belongs here.

  • Compliance team has named EU AI Act as a board-level risk
  • AI projects scattered across product teams with no central register
  • No risk classification applied to any AI workload
  • EU AI Act high-risk classifications likely apply to at least one workload
  • Legal cannot answer regulator questions about model attestation or lineage

Questions to ask

Open-ended, SPIN-style — each one has a reason it matters.

  1. How many AI workloads do you have across product teams, and which involve EU citizens or EU customers?

    Why: Sizes the register and the EU AI Act scope.

    Listen for: “we think 20+ workloads” · “most touch EU customers” · “we have not counted”

  2. Which workloads might fall into the EU AI Act high-risk categories — credit, employment, education, essential services?

    Why: Identifies the workloads needing the most rigorous controls. The cost of getting classification wrong is the implication to anchor on.

  3. What model attestation do you produce today — model cards, evaluation reports, bias testing, none?

    Why: Surfaces the artefact gap. The EU AI Act expects systematic attestation, not ad-hoc artefacts.

  4. How is training data classified and lineage tracked from source to deployed model?

    Why: Drives the Purview integration. Without lineage, model attestation is unanchored.

  5. Who owns AI risk today — Compliance, Legal, AI engineering, or no one yet?

    Why: Critical ownership question. EU AI Act compliance needs a named accountable owner.

  6. What is the regulator engagement timeline — known dates, expected enforcement, sector specifics?

    Why: Shapes the urgency and the artefacts. Some sectors (financial services, healthcare) will see earlier enforcement.

Baseline → target architecture

TOGAF-style gap framing — what we typically see today, and what the proposed end state looks like. The gap between them is the engagement.

Baseline architecture

AI projects scattered across product teams with no central register. No risk classification per workload. Training data classification absent. Model attestation produced ad-hoc when teams remember. Lineage from training data to deployed model invisible. Compliance is reactive — produces artefacts when regulators ask. No mapping from workload to EU AI Act tier.

Typical concerns

  • No defensible answer to "what AI workloads do we have?"
  • EU AI Act risk classification not applied
  • Model attestation absent or ad-hoc
  • Lineage from training data to model invisible
  • No named owner of AI compliance

Capability gaps

  • Central AI workload register
  • EU AI Act tier classification per workload
  • Training data classification and lineage (Purview)
  • Model attestation cadence with reusable templates
  • AI workload posture in Defender for Cloud

Target architecture

Central AI workload register maintained by Compliance with input from AI engineering. Every workload classified against EU AI Act tiers (prohibited / high-risk / limited-risk / minimal-risk). Purview classifies AI training and grounding data with lineage end-to-end. Model attestation produced systematically — model card, evaluation report, bias testing — with reusable templates. Defender for Cloud AI workload coverage provides continuous posture evidence. Quarterly compliance review with Compliance + AI engineering + Legal.

Key capabilities

  • AI workload register with EU AI Act tier
  • Purview classification and lineage on training and grounding data
  • Systematic model attestation
  • Continuous AI workload posture (Defender for Cloud)
  • Compliance + engineering + legal cadence
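
The register and capabilities above can be sketched as a minimal entry schema. This is an illustrative sketch: the field names are assumptions, not a Compliance Manager, Purview, or Foundry data model, and the shared `workload_id` stands in for the link between the compliance-facing and engineering-facing views discussed under the architecture decisions.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative register entry -- field names are assumptions, not a
# Compliance Manager, Purview, or Foundry schema.
@dataclass
class WorkloadEntry:
    workload_id: str               # shared ID linking compliance and engineering views
    name: str
    owning_team: str
    tier: str                      # "prohibited" | "high-risk" | "limited-risk" | "minimal-risk"
    lineage_tracked: bool = False  # Purview lineage live end-to-end for this workload?
    last_attestation: Optional[str] = None  # ISO date of the most recent attestation

entry = WorkloadEntry(
    workload_id="wl-001",
    name="Credit scoring assistant",
    owning_team="Lending",
    tier="high-risk",
    lineage_tracked=True,
)
```

A register built this way makes the quarterly review mechanical: filter for entries where `lineage_tracked` is false or `last_attestation` is stale for the tier.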

Enabling SKUs

Resolved in the ‘Recommended cards’ section below.

Architecture decisions

Each decision is offered as explicit options with trade-offs — Hohpe's “selling options” principle. A safe default is noted where one exists.

  1. Decision 1: Risk classification framework — EU AI Act direct vs NIST AI RMF as underlying + EU AI Act overlay vs ISO 42001 for certification

    EU AI Act direct

    When it fits: EU exposure is the driver; regulator wants EU AI Act mapping; sector-specific guidance available.

    Trade-offs: EU-centric; less portable to other jurisdictions.

    NIST AI RMF + EU AI Act overlay

    When it fits: Multi-jurisdictional; NIST provides the comprehensive risk substrate, the EU AI Act adds the regulatory specifics.

    Trade-offs: Two-framework operational overhead.

    ISO 42001 for certification

    When it fits: Certification posture matters commercially; ISO 42001 audit appetite.

    Trade-offs: Audit cost; newer framework with less tooling support.

    Default recommendation: NIST AI RMF as the underlying framework with an EU AI Act overlay. ISO 42001 added once the baseline is stable.

  2. Decision 2: Workload register location — Compliance Manager vs Purview Insights vs Foundry hub vs custom

    Compliance Manager

    When it fits: Existing Compliance Manager investment; control mapping needed alongside the register.

    Trade-offs: Less native AI metadata.

    Purview Insights

    When it fits: Data governance already on Purview; lineage-driven register.

    Trade-offs: Insights tier carries consumption cost.

    Azure AI Foundry hub

    When it fits: AI engineering builds in Foundry; the register can live where the workloads do.

    Trade-offs: The compliance team may need a separate read-only view.

    Custom register (ServiceNow, GRC tool)

    When it fits: Enterprise GRC tooling already established.

    Trade-offs: Integration to Purview lineage and Defender for Cloud needed.

    Default recommendation: Compliance Manager for the compliance-facing register; the Foundry hub for the engineering-facing register. Linked via a shared workload ID.

  3. Decision 3: Attestation cadence — per release vs quarterly vs annual

    Per release

    When it fits: High-risk workloads with rapid iteration; attestation tied to deployment.

    Trade-offs: Operational overhead; release-velocity impact.

    Quarterly

    When it fits: Most workloads; balances rigour with engineering throughput.

    Trade-offs: May not catch issues introduced between cadences.

    Annual

    When it fits: Low-risk workloads; minimal change rate.

    Trade-offs: Likely insufficient for high-risk EU AI Act workloads.

    Default recommendation: Per release for high-risk workloads; quarterly for limited-risk; annual for minimal-risk. Cadence driven by tier.
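
The defaults in Decisions 1 and 3 can be sketched as a small classification-and-cadence helper. This is an illustrative sketch, not legal guidance: the high-risk domain set below is an abbreviated stand-in for the Act's Annex III categories, and all names are assumptions.

```python
# Abbreviated stand-in for the EU AI Act high-risk categories named in
# the discovery questions (credit, employment, education, essential
# services); the authoritative list is the Act's Annex III.
HIGH_RISK_DOMAINS = {"credit", "employment", "education", "essential-services"}

# Decision 3's default: attestation cadence driven by tier.
CADENCE_BY_TIER = {
    "high-risk": "per-release",
    "limited-risk": "quarterly",
    "minimal-risk": "annual",
}

def classify(domain: str, interacts_with_humans: bool) -> str:
    """Provisional tier only -- real classification needs Legal review."""
    if domain in HIGH_RISK_DOMAINS:
        return "high-risk"
    if interacts_with_humans:
        return "limited-risk"  # transparency obligations, e.g. chatbots
    return "minimal-risk"

def attestation_cadence(tier: str) -> str:
    if tier == "prohibited":
        raise ValueError("prohibited workloads are decommissioned, not attested")
    return CADENCE_BY_TIER[tier]

tier = classify("credit", interacts_with_humans=True)
cadence = attestation_cadence(tier)  # "per-release" for a high-risk workload
```

Encoding the cadence rule next to the tier rule keeps the two decisions coupled the way the default recommendation intends: change a workload's tier and its attestation obligation changes with it.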

Low-risk trial — proof of value

90-day EU AI Act baseline + first 15 workloads

12 weeks

AI workload discovery across product teams. First 15 workloads registered with EU AI Act tier classification. Purview lineage live for AI training and grounding data on at least 5 registered workloads. Model attestation produced for two workloads (one high-risk, one limited-risk) as reusable templates. Defender for Cloud AI workload coverage enabled. Quarterly cadence designed with named owners across Compliance + AI engineering + Legal. Regulator-engagement artefact pack drafted.

Success criteria

  • 15 AI workloads registered with EU AI Act tier classification
  • Purview lineage live for 5+ workloads end-to-end
  • Two model attestations produced as reusable templates
  • Quarterly compliance cadence operational with named owners

Investment: Purview consumption + Foundry hub + Defender CSPM. Estimated ~€6–12k/month for the trial scope. Existing AI workloads continue operating during the register and classification work.

Proof metrics

  • AI workload register coverage above 80% of known workloads
  • EU AI Act tier classification applied for 100% of registered workloads
  • Model attestation templates accepted by Legal as the standard
  • Audit-defensible AI governance posture for regulator engagement
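
The first proof metric reduces to a simple ratio. A sketch, using the story's 30 known AI projects as the denominator (the registered count is an illustrative assumption):

```python
def register_coverage(registered: int, known: int) -> float:
    """Share of known AI workloads that appear in the register."""
    if known == 0:
        raise ValueError("no known workloads to measure against")
    return registered / known

coverage = register_coverage(26, 30)  # 26 of the 30 known projects registered
meets_target = coverage > 0.80       # proof metric: coverage above 80%
```

The denominator is the honest part of this metric: it should come from discovery across product teams, not from the register itself, or coverage is 100% by construction.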

Recommended cards

The SKUs and capabilities most likely to be part of the solution, with the editorial rationale for each in the context of this story. Add the ones that fit your situation.

Microsoft Purview — why for this story

Sensitivity labels classify AI training and grounding data — the foundation auditors expect. The EU AI Act high-risk tier requires lineage from training data to deployed model; Purview is what produces it.

Azure AI Foundry — why for this story

The engineering-facing hub for AI workloads with native evaluation, content filtering, and model card generation. Produces the attestation artefacts the EU AI Act requires — at scale, not ad-hoc.

Microsoft Defender for Cloud — why for this story

AI workload coverage in Defender for Cloud produces the continuous posture evidence the regulator engages with. Shifts the compliance narrative from "annual artefact" to "live signal".
