Solution Atlas
SpecialisedUser storyConsultative playbook

Our developers are building with Azure OpenAI faster than we can secure it

A CISO is fielding a wave of new GenAI projects. Developers are spinning up Azure OpenAI endpoints with default settings, prompts are being logged without classification, and no one has reviewed how sensitive data is reaching the models. The CISO wants a defensible baseline before the first customer-facing workload ships.

Trigger
Wave of new AI projects; no security baseline.
Good outcome
Defender for Cloud AI workload coverage, Purview classification, identity-based access, content filtering enforced.
Diagnostic discovery

Signals this story fits

Observable cues that confirm the conversation belongs here.

  • ·Multiple AI workloads spinning up without security baseline
  • ·Developers using Azure OpenAI with default content filtering
  • ·No content classification for grounding or prompt data
  • ·CISO concerned about prompt injection / data leakage
  • ·Compliance team flagging AI workloads

Questions to ask

Open-ended, SPIN-style — each one has a reason it matters.

  1. 1.How many AI workloads are currently in development across the org?

    WhySizes the scope. Most customers underestimate by a factor of 2.

    Listen for: “half a dozen” · “I think” · “every product team has one”

  2. 2.Who owns the AI security baseline today?

    Why"Nobody yet" is the common answer and the entry to the engagement.

  3. 3.What's your content classification story for training and grounding data?

    WhyAI workloads inherit content sensitivity. Classification gap == leakage risk.

  4. 4.What model evaluation framework is in place?

    WhyTests responsible-AI maturity.

  5. 5.Has Defender for Cloud been extended to AI workloads?

    WhySurfaces whether CSPM is in scope for AI surfaces.

  6. 6.Where do prompt logs go today?

    WhySensitive content in prompts is the silent leakage path.

Baseline → target architecture

TOGAF-style gap framing — what we typically see today, and what the proposed end state looks like. The gap between them is the engagement.

Baseline architecture

AI workloads scattered across product teams. Azure OpenAI deployed with default content filtering. No classification on grounding data or prompt logs. Defender for Cloud Azure-only, not extended to AI workloads. No central evaluation framework.

Typical concerns

  • ·AI workloads spinning up faster than security can govern
  • ·Default content filters miss scenario-specific risks
  • ·Prompt logs storing sensitive content
  • ·No central registry of AI workloads
  • ·Compliance team blocked from approval

Capability gaps

  • ·AI workload secure-score baseline
  • ·Content classification for grounding + prompts
  • ·Custom content filtering per scenario
  • ·Identity-bound endpoint access
  • ·Prompt-injection evaluation tooling
Target architecture

Defender for Cloud extended to AI workload coverage (Azure OpenAI, Foundry hubs, AI Search). Foundry-deployed workloads governed with custom content filters and evaluation harnesses. Purview classifies grounding data so AI workloads only see content appropriate to their tier. Entra ID P2 for managed identities and human access. Prompt logs classified and retained per policy.

Key capabilities

  • AI workload secure-score
  • Custom content filtering per scenario
  • Classified grounding content
  • Identity-bound AI endpoints
  • Prompt-injection evaluation in the build pipeline

Enabling SKUs

Resolved in the ‘Recommended cards’ section below.

Architecture decisions

Each decision is offered as explicit options with trade-offs — Hohpe's “selling options” principle. A safe default is noted where one exists.

  1. Decision 1.Foundry tenancy — central hub vs per-team hubs

    Central hub

    When it fitsStrong central AI security discipline; consistent baseline.

    Trade-offsCentral team becomes bottleneck for new workloads.

    Per-team hubs

    When it fitsDistributed AI engineering; teams own their security.

    Trade-offsBaseline drift; harder to enforce posture.

    Default recommendationCentral hub for baseline + per-team workspaces beneath it.

  2. Decision 2.Content filter strictness — default vs custom per scenario

    Default

    When it fitsInternal-only AI; risk surface low.

    Trade-offsDefault may block legitimate domain queries; not customer-facing safe.

    Custom per scenario

    When it fitsCustomer-facing or regulated workloads.

    Trade-offsTuning effort per workload.

    Default recommendationDefault for internal beta; custom before customer-facing or regulated launch.

  3. Decision 3.Defender CSPM scope — AI-only vs full estate

    AI-only

    When it fitsAlready paying for full Defender CSPM elsewhere; just adding AI coverage.

    Trade-offsDiscrete posture, not unified.

    Full estate including AI

    When it fitsNo existing paid CSPM; opportunity to consolidate.

    Trade-offsBigger CSPM commitment.

    Default recommendationFull estate including AI workload coverage.

Low-risk trial — proof of value

45-day AI security baseline for the next AI workload

6 weeks

Defender for Cloud AI workload coverage enabled. Foundry hub provisioned with custom content filters and evaluation harness. Purview classifies grounding content. Entra ID P2 enforced for managed identities. First posture assessment for one AI workload.

Success criteria

  • AI workload secure-score baseline established
  • Custom content filters operational for the trial workload
  • Grounding content classified end-to-end
  • Prompt-injection test suite passing at >95%

InvestmentDefender CSPM per-resource + Foundry consumption + Purview capacity. Estimated ~€3–6k/month for trial scope. No customer-facing decisions during trial.

Proof metrics

  • ·AI workload secure-score above 80%
  • ·Classification coverage on grounding data above 90%
  • ·Prompt-injection test pass rate above 95%
  • ·CISO approval for the next AI workload deployment

Recommended cards

The SKUs and capabilities most likely to be part of the solution, with the editorial rationale for each in the context of this story. Add the ones that fit your situation.

Back to AI security & responsible AI