Pillar 4: MONITOR & CONTROL

What is actually happening?

Runtime behavioral monitoring, anomaly detection, audit trails, kill switches, and incident response for AI agent systems. Static assessments do not work for systems whose behavior is emergent and context-dependent. The Microsoft Copilot lesson: configuration is not enforcement.

13 controls · Executive Sponsor: COO

Assessment Controls (13)

Every AI initiative that passes through this pillar must satisfy these controls. The maturity model measures how consistently the organization enforces them.

1. Agent Action Logging

Are agent actions being logged beyond standard API calls?
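As a sketch of what action-level logging could look like, assuming a Python tool-calling runtime; the function name `log_agent_action` and the record fields are illustrative, not part of the framework:

```python
import time
import uuid

def log_agent_action(log, agent_id, action, arguments, data_accessed):
    """Record the agent's action itself -- tool name, arguments, and data
    touched -- not just the underlying API call it happens to make."""
    record = {
        "event_id": str(uuid.uuid4()),   # unique id for later forensics
        "ts": time.time(),
        "agent_id": agent_id,
        "action": action,                # e.g. "query_crm", "send_email"
        "arguments": arguments,          # tool arguments as the agent supplied them
        "data_accessed": data_accessed,  # data scopes the action read or wrote
    }
    log.append(record)
    return record

log = []
log_agent_action(log, "billing-agent-01", "query_crm",
                 {"customer_id": "C-1042"}, ["crm.customers"])
```

In production the sink would be an append-only store rather than an in-memory list.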

2. Trust Boundary Violation Detection

Can the organization detect when an agent violates its trust boundaries?
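One minimal way to make boundaries detectable is to declare them as data and check every action against the declaration; the `BOUNDARIES` table and `check_boundary` helper below are hypothetical names, assuming a Python runtime:

```python
# Hypothetical declared trust boundaries: which tools and data scopes
# each agent is permitted to touch. Any violation should raise an alert.
BOUNDARIES = {
    "billing-agent-01": {
        "tools": {"query_crm", "create_invoice"},
        "data_scopes": {"crm.customers", "billing.invoices"},
    },
}

def check_boundary(agent_id, tool, data_scope):
    """Return a violation description, or None if the action is in-bounds."""
    boundary = BOUNDARIES.get(agent_id)
    if boundary is None:
        return f"{agent_id}: no declared trust boundary"
    if tool not in boundary["tools"]:
        return f"{agent_id}: tool '{tool}' outside trust boundary"
    if data_scope not in boundary["data_scopes"]:
        return f"{agent_id}: data scope '{data_scope}' outside trust boundary"
    return None
```

The design point is that an undeclared agent is itself a violation: detection only works if every agent has an explicit boundary on file.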

3. Agent Kill Switch Capability

Are there kill switches that can halt agent actions immediately?
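A kill switch is only immediate if every agent step checks it. A minimal sketch, assuming a Python runtime; `KillSwitch` is an illustrative name, not a framework component:

```python
import threading

class KillSwitch:
    """Halt flag checked before every agent step. A production version
    would be backed by a shared store (a feature-flag service or similar)
    so an operator can trip it fleet-wide in one action."""

    def __init__(self):
        self._halted = threading.Event()
        self.reason = None

    def trip(self, reason):
        self.reason = reason
        self._halted.set()

    def guard(self):
        """Call at the top of every agent step; raises once tripped."""
        if self._halted.is_set():
            raise RuntimeError(f"agent halted: {self.reason}")

switch = KillSwitch()
switch.guard()  # no-op while the switch is open
switch.trip("suspected data exfiltration")
```

After `trip()`, the next `guard()` call raises, so an in-flight agent stops at its next step rather than finishing its plan.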

4. AI Policy Compliance Testing

Has the organization tested its AI tools against its own policies?

5. AI-Specific Incident Response

Does the incident response playbook include AI-specific incident categories?

6. AI Incident Ownership

Who owns agent incidents — IT, security, operations, or the business unit?

7. Behavioral Drift Detection

Is there anomaly detection for behavioral drift (agents doing things they did not previously do)?
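One simple drift signal is to compare an agent's recent action mix against its baseline; this sketch uses total variation distance, which is my illustrative choice, not a method prescribed by the framework:

```python
from collections import Counter

def drift_score(baseline_actions, recent_actions):
    """Total variation distance between the agent's baseline action mix
    and its recent one: 0.0 means identical behavior, 1.0 fully disjoint."""
    base, recent = Counter(baseline_actions), Counter(recent_actions)
    n_base, n_recent = sum(base.values()), sum(recent.values())
    return 0.5 * sum(abs(base[a] / n_base - recent[a] / n_recent)
                     for a in set(base) | set(recent))

baseline = ["query_crm"] * 90 + ["create_invoice"] * 10
recent   = ["query_crm"] * 50 + ["export_data"] * 50  # a new action appears
```

An alert would fire when the score crosses a tuned threshold; a real deployment would also window by time and weight rare, high-risk actions more heavily than frequency alone does.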

8. Audit Trail Completeness

Are audit trails sufficient for compliance and forensics?
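Completeness is testable if each entry is chained to the previous one, so gaps or edits are detectable. A minimal hash-chain sketch in Python; the helper names are illustrative:

```python
import hashlib
import json

def append_audit(chain, record):
    """Append an entry whose hash covers the previous entry, so any
    tampering, reordering, or deletion breaks verification."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    chain.append({
        "record": record,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256((prev_hash + payload).encode()).hexdigest(),
    })

def verify_audit(chain):
    """Recompute the chain; False means the trail is incomplete or altered."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```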

9. Agent Action Reconstruction

Can the organization reconstruct what an agent did, why, and what data it accessed after an incident?
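If action logs carry a rationale and data-access fields, reconstruction is a query over them. A sketch assuming a hypothetical log schema (`ts`, `agent_id`, `action`, `rationale`, `data_accessed`):

```python
def reconstruct(log, agent_id, start_ts, end_ts):
    """Rebuild an incident window for one agent: the ordered timeline of
    actions, each with its recorded rationale, plus every data scope touched."""
    events = sorted((e for e in log
                     if e["agent_id"] == agent_id and start_ts <= e["ts"] <= end_ts),
                    key=lambda e: e["ts"])
    return {
        "timeline": [(e["ts"], e["action"], e["rationale"]) for e in events],
        "data_accessed": sorted({s for e in events for s in e["data_accessed"]}),
    }

log = [
    {"ts": 2, "agent_id": "a1", "action": "export_data",
     "rationale": "user requested report", "data_accessed": ["crm.customers"]},
    {"ts": 1, "agent_id": "a1", "action": "query_crm",
     "rationale": "lookup before export", "data_accessed": ["billing.invoices"]},
    {"ts": 5, "agent_id": "a2", "action": "send_email",
     "rationale": None, "data_accessed": []},
]
```

Note the dependency: this only answers "why" if the logging control (control 1) captured a rationale at the time of the action.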

10. Vendor AI Pipeline Monitoring

Is there monitoring for vendor-hosted AI inference failures?
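A sliding-window error-rate check is the simplest client-side monitor for a vendor endpoint; `InferenceMonitor` and its thresholds are illustrative assumptions:

```python
from collections import deque

class InferenceMonitor:
    """Sliding-window error-rate monitor for a vendor inference endpoint:
    record each call's outcome and alert when failures exceed a threshold."""

    def __init__(self, window=100, max_error_rate=0.05):
        self.results = deque(maxlen=window)  # True = call succeeded
        self.max_error_rate = max_error_rate

    def record(self, ok):
        self.results.append(ok)

    def alert(self):
        if len(self.results) < self.results.maxlen:
            return False  # not enough samples yet
        return self.results.count(False) / len(self.results) > self.max_error_rate
```

A fuller version would also track latency percentiles and silent quality degradation, which vendor status pages rarely surface.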

11. Enablement Delivery Tracking

Is there verification that required training, communication, and change management were delivered before and during rollout?

12. AI Red Team Cadence

When was the last time production AI systems were red-teamed? Is there a quarterly cadence?

13. Independent AI Validation Function

Is there an independent validation function for high-impact AI systems (separate from the build team)?

Governance Tracks

Employee Use [EU]: Can you detect what employees are putting into AI tools? Are DLP policies tested against AI interfaces?
Internal Build [IB]: Are internally-built agents logging actions? Is there anomaly detection for behavioral drift?
Vendor Platform [VP]: Does the vendor provide sufficient logging? Can you detect when vendor AI violates trust boundaries?
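For the Employee Use track, detection can be tested by running prompt text through the same patterns the DLP policy claims to enforce. A toy illustration; the two patterns are stand-ins for a centrally managed policy set, not real DLP rules:

```python
import re

# Illustrative patterns only; a real DLP policy set would be broader
# and centrally managed, and should be tested against AI interfaces.
SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def scan_prompt(text):
    """Return the sensitive-data categories found in an AI prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]
```

The assessment question is whether such checks actually run on the paths employees use (browser extensions, chat sidebars, copy-paste), not just on email and file shares.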