MONITOR & CONTROL
What is actually happening?
Runtime behavioral monitoring, anomaly detection, audit trails, kill switches, and incident response for AI agent systems. Static assessments do not work for systems whose behavior is emergent and context-dependent. The Microsoft Copilot lesson: configuration is not enforcement.
Assessment Controls (13)
Every AI initiative that passes through this pillar must satisfy these controls. The maturity model measures how consistently the organization enforces them.
Agent Action Logging
Are agent actions being logged beyond standard API calls?
Trust Boundary Violation Detection
Can the organization detect when an agent violates its trust boundaries?
Agent Kill Switch Capability
Are there kill switches that can halt agent actions immediately?
AI Policy Compliance Testing
Has the organization tested its AI tools against its own policies?
AI-Specific Incident Response
Does the incident response playbook include AI-specific incident categories?
AI Incident Ownership
Who owns agent incidents — IT, security, operations, or the business unit?
Behavioral Drift Detection
Is there anomaly detection for behavioral drift (agents doing things they didn't used to do)?
Audit Trail Completeness
Are audit trails sufficient for compliance and forensics?
Agent Action Reconstruction
Can the organization reconstruct what an agent did, why, and what data it accessed after an incident?
Vendor AI Pipeline Monitoring
Is there monitoring for vendor-hosted AI inference failures?
Enablement Delivery Tracking
Is there verification that required training, communication, and change management were delivered before and during rollout?
AI Red Team Cadence
When was the last time production AI systems were red-teamed? Is there a quarterly cadence?
Independent AI Validation Function
Is there an independent validation function for high-impact AI systems (separate from the build team)?