AI Compliance Checklist: GDPR, HIPAA, SOC 2, EU AI Act
Auditors are asking tough questions about your AI. Here's your compliance checklist.
Why AI Compliance is Different
Traditional software compliance isn't enough. AI introduces new risks:
- Non-determinism: Different outputs for the same input
- Bias: Unintentional discrimination
- Data exposure: Training data memorization, RAG leaks
- Explainability: "Black box" decision-making
- Drift: Performance degrades over time
Universal Requirements (All Regulations)
1. Documentation
✓ You must document:
- □ AI use cases and risk levels
- □ Model architecture and training data sources
- □ Testing methodology and results
- □ Monitoring and alerting processes
- □ Incident response procedures
- □ Data retention and deletion policies
2. Testing & Validation
✓ Required tests:
- □ Accuracy testing (200+ test cases minimum)
- □ Bias and fairness testing
- □ Security testing (prompt injection, data leaks)
- □ Performance testing (latency, uptime)
- □ Edge case testing
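The accuracy item above can be enforced as a plain regression harness that gates deployment on a threshold. A minimal sketch; `model_predict`, the two sample cases, and the 0.95 threshold are hypothetical stand-ins for your real inference call and a versioned suite of 200+ cases:

```python
def model_predict(prompt: str) -> str:
    # Placeholder: a real implementation would call your model here.
    return "refund approved" if "refund" in prompt.lower() else "escalate"

TEST_CASES = [
    # (input, expected output). In practice: hundreds of cases covering
    # normal, edge, and adversarial inputs, version-controlled with the model.
    ("Customer requests a refund for a duplicate charge", "refund approved"),
    ("Customer reports unrecognized account activity", "escalate"),
]

def run_accuracy_suite(cases, threshold=0.95):
    """Return (accuracy, passed) so CI can block deployments on the result."""
    correct = sum(1 for prompt, expected in cases
                  if model_predict(prompt) == expected)
    accuracy = correct / len(cases)
    return accuracy, accuracy >= threshold
```

The returned tuple doubles as audit evidence: log it per run, and an auditor can trace every deployment to a passing test report.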
3. Monitoring
✓ Continuous monitoring:
- □ Accuracy metrics tracked over time
- □ Bias metrics for protected groups
- □ Error rates and incident logs
- □ User feedback collection
- □ Automated alerts for anomalies
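The "accuracy over time" and "automated alerts" items can be combined in a sliding-window tracker. This is an illustrative sketch; the window size, alert floor, and minimum-sample guard are assumptions to tune for your traffic:

```python
from collections import deque

class AccuracyMonitor:
    """Sliding-window accuracy tracker that flags drift below a floor."""

    def __init__(self, window=100, alert_below=0.90):
        self.outcomes = deque(maxlen=window)  # True = correct prediction
        self.alert_below = alert_below

    def record(self, correct: bool) -> bool:
        """Record one labeled outcome; return True if an alert should fire."""
        self.outcomes.append(correct)
        accuracy = sum(self.outcomes) / len(self.outcomes)
        # Require a minimally full window before alerting, to avoid noise
        # from the first few observations.
        return len(self.outcomes) >= 20 and accuracy < self.alert_below
```

In production the `record` signal would feed your alerting system (PagerDuty, Slack webhook, etc.) and the window contents would come from human-labeled or user-feedback samples.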
GDPR Compliance (EU)
Key Requirements
- Article 22: Right not to be subject to decisions based solely on automated processing that have legal or similarly significant effects
- Article 13/14: Right to be informed (disclose AI usage when collecting data)
- Article 15: Right of access (provide meaningful information about the logic of automated decisions)
- Article 17: Right to erasure (delete a user's personal data, including from training sets and indexes)
GDPR Checklist for AI
- □ Data minimization: Only use necessary PII
- □ Consent tracking: Record user consent for AI processing
- □ Data deletion: Can you delete a user's data from training/RAG?
- □ Explainability: Can you explain AI decisions to users?
- □ Human review: High-impact decisions reviewed by humans
- □ Breach notification: Report personal data breaches to the supervisory authority (DPA) within 72 hours
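The data-deletion item is the one teams most often fail: embeddings are only erasable if every record is tagged with its data subject at ingest time. A toy in-memory sketch of that design; real vector databases expose delete-by-metadata filters, but the pattern is the same:

```python
class UserScopedVectorStore:
    """Toy store illustrating erasure by user ID (GDPR Article 17).
    The key design point: tag every embedding with its data subject
    at ingest time, or erasure later becomes impossible."""

    def __init__(self):
        self._records = []  # each: {"user_id", "text", "embedding"}

    def add(self, user_id: str, text: str, embedding: list):
        self._records.append(
            {"user_id": user_id, "text": text, "embedding": embedding}
        )

    def erase_user(self, user_id: str) -> int:
        """Delete all of a user's records; return the count for audit logs."""
        before = len(self._records)
        self._records = [r for r in self._records if r["user_id"] != user_id]
        return before - len(self._records)
```

Returning the deletion count lets you log evidence that an erasure request was actually fulfilled.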
Common GDPR Pitfalls
- ❌ Storing PII in RAG without consent
- ❌ No process to delete user data from embeddings
- ❌ Can't explain why AI made a decision
- ❌ No human review for loan/hiring decisions
HIPAA Compliance (US Healthcare)
Key Requirements
- Privacy Rule: Protect PHI (Protected Health Information)
- Security Rule: Safeguards for ePHI
- Breach Notification: Report breaches
HIPAA Checklist for AI
- □ BAA signed: Business Associate Agreement with LLM provider
- □ PHI detection: Scan outputs for accidental PHI exposure
- □ Access controls: Role-based access to AI systems
- □ Audit logs: Track who accessed what data
- □ Encryption: PHI encrypted in transit and at rest
- □ De-identification: Remove PHI before training/analysis
- □ Incident response: Plan for PHI breaches
Common HIPAA Pitfalls
- ❌ Using OpenAI without BAA (regular API isn't HIPAA-compliant)
- ❌ RAG system exposing other patients' data
- ❌ No PHI detection in outputs
- ❌ Training models on un-de-identified data
SOC 2 (Type II)
Trust Service Criteria
- Security: Protection against unauthorized access
- Availability: System uptime
- Processing Integrity: Accurate, complete processing
- Confidentiality: Protect confidential info
- Privacy: Handle PII appropriately
SOC 2 Checklist for AI
- □ Change management: Track model updates
- □ Testing documentation: Pre-deployment test results
- □ Monitoring: Accuracy, latency, error rates
- □ Incident response: Documented procedures
- □ Vendor management: LLM provider SOC 2 reports
- □ Access controls: Who can deploy models?
- □ Data retention: Training data lifecycle
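The change-management item boils down to keeping one structured record per model release. A sketch of what that record might contain; the field names are illustrative, and in practice this lives in a model registry or deployment pipeline, not an in-memory list:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelDeployment:
    """One change-management record per model release, i.e. the kind
    of evidence a SOC 2 auditor asks to see."""
    model_version: str
    approved_by: str
    test_report_uri: str
    deployed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class DeploymentLog:
    """Append-only release history, so 'can you roll back?' has an answer."""

    def __init__(self):
        self.history = []

    def deploy(self, record: ModelDeployment):
        self.history.append(record)

    def rollback_target(self):
        """Version to restore if the current deployment fails."""
        if len(self.history) >= 2:
            return self.history[-2].model_version
        return None
```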
What Auditors Will Ask
- "Show me your AI testing documentation."
- "How do you monitor AI accuracy in production?"
- "What happens if your AI fails?"
- "How do you prevent data leakage?"
- "Can you roll back a bad model deployment?"
EU AI Act (in force since August 2024; obligations phase in through 2027)
Risk Categories
- Unacceptable: Social scoring, subliminal manipulation (banned)
- High-risk: HR, credit scoring, healthcare (strict requirements)
- Limited risk: Chatbots (transparency required)
- Minimal risk: Most other AI (few requirements)
High-Risk AI Requirements
- □ Risk management: Identify and mitigate risks
- □ Data governance: Quality training data
- □ Documentation: Technical documentation + user manual
- □ Transparency: Disclose AI usage to users
- □ Human oversight: Ability to override AI
- □ Accuracy: Appropriate performance levels
- □ Cybersecurity: Resilience against attacks
- □ Conformity assessment: Self-assessment or notified-body certification, depending on the system
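The human-oversight requirement usually takes the form of routing low-confidence or high-impact decisions to a reviewer. A minimal sketch of that routing; the 0.8 threshold and queue mechanism are assumptions, not anything the Act prescribes:

```python
def decide_with_oversight(ai_decision: str, confidence: float,
                          reviewer_queue: list, threshold: float = 0.8):
    """Return the AI decision directly only when confidence is high;
    otherwise enqueue it for human review and report a pending status."""
    if confidence < threshold:
        reviewer_queue.append(ai_decision)
        return "pending_human_review"
    return ai_decision
```

The essential property for auditors is that the override path exists, is exercised, and is logged, not the specific threshold you choose.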
Industry-Specific Requirements
Financial Services (US: FCRA, ECOA)
- □ Adverse action notices (explain AI credit decisions)
- □ Fair lending testing (no discrimination)
- □ Model validation (SR 11-7 guidance)
Government (US: Executive Order 14110)
- □ Safety testing for high-risk AI
- □ Red team exercises
- □ Sharing safety test results with NIST
Your Compliance Roadmap
Month 1: Documentation
- Document all AI use cases and risk levels
- Create technical documentation
- Draft incident response plan
Month 2: Testing
- Build test suite (200+ tests)
- Conduct bias and fairness testing
- Red team for security vulnerabilities
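Bias and fairness testing needs a concrete metric to pass or fail against. One common starting point is the demographic parity gap, the spread in positive-outcome rates across groups; a sketch, where the 0.1 screening threshold mentioned in the comment is an illustrative convention, not a legal standard:

```python
def demographic_parity_gap(predictions: dict) -> float:
    """Difference in positive-outcome rates between groups.
    `predictions` maps group name -> list of 0/1 outcomes.
    A common screening convention flags gaps above ~0.1 for review."""
    rates = {group: sum(outcomes) / len(outcomes)
             for group, outcomes in predictions.items()}
    return max(rates.values()) - min(rates.values())
```

Demographic parity is only one of several fairness definitions (equalized odds and calibration are others, and they can conflict); document which one you chose and why.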
Month 3: Monitoring
- Set up production monitoring
- Configure alerts
- Implement feedback loops
Month 4: Audit Prep
- Compile evidence package
- Review with legal/compliance team
- Schedule internal audit
Quick Compliance Assessment
Answer these 5 questions:
- Can you document how your AI was tested?
- Yes → ✅
- No → 🚩 Major gap
- Can you explain an AI decision to a user/auditor?
- Yes → ✅
- No → 🚩 Major gap
- Do you monitor AI accuracy in production?
- Yes → ✅
- No → 🚩 Major gap
- Have you tested for bias and fairness?
- Yes → ✅
- No → 🚩 Major gap
- Do you have an AI incident response plan?
- Yes → ✅
- No → 🚩 Major gap
Results:
- 5/5: You're in good shape
- 3-4/5: Some gaps to address
- 0-2/5: High audit risk—prioritize compliance
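The scoring rule above is simple enough to automate in an internal readiness tool; a literal sketch, with return labels shortened for code use:

```python
def assess(answers: list) -> str:
    """Score the 5-question self-assessment; answers is a list of booleans,
    one per question, True meaning 'yes'."""
    score = sum(answers)
    if score == 5:
        return "good shape"
    if score >= 3:
        return "some gaps"
    return "high audit risk"
```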
Conclusion
AI compliance isn't optional—it's a business requirement. Start with documentation and testing, then build monitoring.
Need Help with AI Compliance?
We'll audit your AI systems and prepare you for regulatory audits.
Book Compliance Audit