AISECA Controls Catalog
Detailed catalogue of AI security controls across all three AISECA maturity tiers. Each control includes a description, implementation guidance, and mapping to NIST GenAI Risk Domains.
Status: Draft | Framework Version: 0.1
Tier 01: Define & Constrain (Foundational Controls)
AISECA-01-001: AI Asset Inventory
- Description: Catalogue all AI tools, models, and integrations across the organisation
- Implementation: Maintain a living register of all AI systems including vendor, data flows, users, and risk classification
- NIST Mapping: GOVERN 1.1, MAP 1.1
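The living register above can be sketched as a simple structured record. Field names here are illustrative, not mandated by AISECA; most organisations will extend them with owner, deployment date, and review cadence:

```python
from dataclasses import dataclass

@dataclass
class AIAsset:
    """One entry in the AI asset register (illustrative fields only)."""
    name: str
    vendor: str
    data_flows: list   # e.g. ["tickets -> model", "model -> email drafts"]
    users: list        # teams or roles with access
    risk_class: str    # e.g. "low" / "medium" / "high"

# The register itself is just a list of assets that can be
# serialised, reviewed, and diffed over time.
register = [
    AIAsset("support-chatbot", "ExampleVendor",
            ["tickets -> model"], ["support"], "medium"),
]

def assets_by_risk(register, risk_class):
    """Filter the register for periodic review of a given risk tier."""
    return [a for a in register if a.risk_class == risk_class]
```

Keeping the register as plain data makes it easy to export for the risk assessment in AISECA-01-004.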
AISECA-01-002: Acceptable Use Policy
- Description: Establish clear guidelines for how AI tools may be used
- Implementation: Define permitted and prohibited uses, data handling requirements, and escalation procedures
- NIST Mapping: GOVERN 1.3, GOVERN 2.1
AISECA-01-003: Access Control Baseline
- Description: Role-based access to AI systems and their training data
- Implementation: Implement RBAC with least-privilege principles for all AI system access
- NIST Mapping: GOVERN 1.4, MANAGE 2.1
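A minimal sketch of the least-privilege principle this control requires: permissions are explicitly granted per role, and anything not granted is denied by default. The role and permission names are hypothetical:

```python
# Each role grants an explicit set of permissions; absence means denial,
# which is least privilege by default.
ROLE_PERMISSIONS = {
    "ml-engineer": {"model:deploy", "data:read"},
    "analyst": {"model:query"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny unless the role explicitly grants the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

In production this mapping would live in an identity provider or policy engine rather than application code, but the deny-by-default shape is the same.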
AISECA-01-004: Initial Risk Assessment
- Description: Map AI deployments to NIST GenAI Risk Domains
- Implementation: Assess each AI deployment against risk domains, document findings, assign risk owners
- NIST Mapping: MAP 1.1, MAP 2.1
AISECA-01-005: Data Classification
- Description: Classify data flowing into and out of AI systems
- Implementation: Apply organisational data classification standards to all AI data flows
- NIST Mapping: MAP 3.1, MANAGE 1.1
AISECA-01-006: Vendor Evaluation Criteria
- Description: Security evaluation standards for AI vendor selection
- Implementation: Define minimum security requirements for AI vendors including data handling, model transparency, and incident response
- NIST Mapping: GOVERN 5.1, MAP 5.1
Tier 02: Enforce & Monitor (Operational Controls)
AISECA-02-001: Prompt Injection Defence
- Description: Input validation and sanitisation for all AI-facing interfaces
- Implementation: Deploy input filtering, context isolation, and injection detection mechanisms
- NIST Mapping: MANAGE 2.2, MANAGE 4.1
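One of the mechanisms above, injection detection, can be sketched as a deny-list screen. The patterns here are illustrative only; real defence needs far more than keyword matching (context isolation, structured prompts, dedicated classifiers):

```python
import re

# Illustrative deny-list of known injection phrasings (assumption:
# this is a first-pass filter, not a complete defence).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
]

def flag_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    return any(p.search(user_input) for p in INJECTION_PATTERNS)
```

Flagged inputs would typically be blocked or routed for review rather than silently dropped, so the audit trail (AISECA-02-006) captures the attempt.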
AISECA-02-002: Data Leakage Prevention
- Description: Monitor and prevent sensitive data exfiltration through AI
- Implementation: Implement DLP controls on AI inputs and outputs, including PII detection and blocking
- NIST Mapping: MANAGE 2.2, MANAGE 3.1
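The PII detection and blocking step can be sketched as redaction before text reaches the model. The two patterns below (email, US-style SSN) are illustrative; production DLP relies on vendor rule sets and validation, not bare regexes:

```python
import re

# Illustrative PII patterns (assumption: email and US-style SSN only).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace detected PII with a typed placeholder before it
    reaches the AI system, and symmetrically on outputs."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text
```

Applying the same redaction on outputs covers the exfiltration direction this control names.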
AISECA-02-003: Output Monitoring
- Description: Real-time scanning of AI outputs for policy violations
- Implementation: Automated scanning of AI-generated content against organisational policies
- NIST Mapping: MEASURE 2.1, MANAGE 4.1
AISECA-02-004: Automated Compliance Checks
- Description: Continuous verification of AI systems against defined policies
- Implementation: Automated policy enforcement with alerting and reporting
- NIST Mapping: GOVERN 1.5, MEASURE 3.1
AISECA-02-005: Incident Response Playbooks
- Description: AI-specific incident response procedures and escalation
- Implementation: Documented playbooks for AI-specific incidents including model compromise, data leakage, and adversarial attacks
- NIST Mapping: MANAGE 4.1, MANAGE 4.2
AISECA-02-006: Audit Logging
- Description: Comprehensive logging of all AI interactions for forensics
- Implementation: Immutable audit logs capturing all AI system interactions, access events, and configuration changes
- NIST Mapping: MEASURE 2.1, MANAGE 3.2
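The immutability requirement above is often met with append-only storage; a minimal sketch of the idea is a hash chain, where each entry commits to the hash of its predecessor so after-the-fact edits are detectable:

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    """Append an event, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(log: list) -> bool:
    """Recompute the chain; any edited entry breaks every later hash."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps({"event": entry["event"], "prev": prev_hash},
                             sort_keys=True)
        if (entry["prev"] != prev_hash or
                entry["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True
```

In practice the log would be shipped to write-once storage; the chain adds tamper evidence on top of that, it does not replace it.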
Tier 03: Validate & Adapt (Continuous Validation)
AISECA-03-001: Red Team Exercises
- Description: Adversarial testing of AI systems by internal or external teams
- Implementation: Regular red team engagements targeting AI-specific attack vectors
- NIST Mapping: MEASURE 2.2, MEASURE 3.2
AISECA-03-002: Model Behaviour Monitoring
- Description: Drift detection and behavioural anomaly identification
- Implementation: Continuous monitoring for model drift, output degradation, and unexpected behavioural changes
- NIST Mapping: MEASURE 1.1, MEASURE 2.1
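Drift detection as described above can be sketched with a crude mean-shift test over a monitored metric (e.g. an output quality score). The 3-sigma threshold is an assumption; real monitoring uses proper drift statistics (PSI, KS tests) over many features:

```python
from statistics import mean, stdev

def drift_score(baseline: list, recent: list) -> float:
    """Standardised shift of the recent window's mean against the
    baseline distribution (a crude stand-in for proper drift tests)."""
    sd = stdev(baseline)
    if sd == 0:
        return 0.0 if mean(recent) == mean(baseline) else float("inf")
    return abs(mean(recent) - mean(baseline)) / sd

def is_drifting(baseline, recent, threshold=3.0):
    """Alert when the shift exceeds the threshold (assumption: 3 sigma)."""
    return drift_score(baseline, recent) > threshold
```

An alert here would feed the feedback loops of AISECA-03-004 rather than trigger automatic rollback on its own.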
AISECA-03-003: Threat Intelligence Integration
- Description: Feed AI-specific threat intelligence into the organisation's defence posture
- Implementation: Subscribe to and operationalise AI-specific threat intelligence feeds
- NIST Mapping: MAP 5.2, MANAGE 4.1
AISECA-03-004: Feedback Loops
- Description: Continuous improvement cycles from security events and testing
- Implementation: Structured feedback mechanisms from incidents, testing, and monitoring into control refinement
- NIST Mapping: MEASURE 3.3, MANAGE 4.3
AISECA-03-005: Adaptive Control Refinement
- Description: Evolve controls based on new attack vectors and research
- Implementation: Quarterly control review cycles incorporating new threats, research, and operational experience
- NIST Mapping: GOVERN 1.5, MANAGE 4.3
AISECA-03-006: Cross-Org Benchmarking
- Description: Compare maturity and controls against peer organisations
- Implementation: Participate in AISECA benchmarking programme to measure relative maturity
- NIST Mapping: GOVERN 6.1, MEASURE 3.3
Contributing
To suggest new controls or refinements, open a pull request or issue. See CONTRIBUTING.md.
License
Released under CC BY 4.0.
AISECA -- AI Security Alliance | aiseca.org | GitHub