
How to Get Authority to Operate (ATO) for Defense AI Systems

February 18, 2026 · 18 min read · Defense AI Safety

Getting AI systems approved for defense use requires more than cybersecurity compliance. Here's the complete roadmap for Authority to Operate (ATO) under the DoD AI Ethical Principles and NIST AI RMF.

Defense AI Safety Framework

  • 18 months — average time to ATO for AI systems (source: DoD CIO, 2025)
  • $500K+ — average cost of ATO preparation (industry estimate)
  • 67% — first-time rejection rate for AI ATOs (internal DoD data)

What is Authority to Operate (ATO) for AI Systems?

Authority to Operate (ATO) is formal approval from a designated official (called an Authorizing Official or AO) stating that an AI system is safe to deploy in a DoD or federal environment.

Traditional ATO focuses on cybersecurity controls (NIST 800-53, FedRAMP, etc.).

AI-augmented ATO adds new requirements:

  • AI safety validation: Proof that the AI won't cause unintended harm
  • Bias testing: Verification that the AI doesn't discriminate
  • Adversarial robustness: Testing against attacks
  • Explainability: Ability to explain AI decisions to humans
  • NIST AI RMF compliance: Documentation of AI risk management practices
  • DoD AI Ethical Principles: Responsible, equitable, traceable, reliable, governable

🛡️ Key Difference

Cybersecurity ATO: "This system won't get hacked."
AI-augmented ATO: "This system won't get hacked and the AI won't make catastrophically wrong decisions."

DoD AI Ethical Principles (You Must Address These)

In February 2020, the Department of Defense adopted five AI Ethical Principles. Your ATO submission must demonstrate compliance with each:

1. Responsible

AI development and use is exercised with appropriate levels of judgment and care, while remaining in compliance with applicable laws and regulations.

What this means for ATO:

  • Clear lines of accountability for AI decisions
  • Human oversight mechanisms (human-in-the-loop or human-on-the-loop)
  • Incident response procedures for AI failures
  • Documentation of training data sources and quality

2. Equitable

Steps are taken to avoid unintended bias and promote equitable use of AI.

What this means for ATO:

  • Demographic fairness testing (if applicable)
  • Validation that training data is representative
  • Documentation of known limitations and edge cases
  • Monitoring for emergent bias post-deployment

3. Traceable

Data sources, design procedure, test protocols, and results are documented to allow for audits.

What this means for ATO:

  • Complete model lineage (data → training → validation → deployment)
  • Version control for models, data, and code
  • Audit logs for AI decisions
  • Explainability tools (SHAP, LIME, saliency maps, etc.)
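The audit-log item above can be sketched as a tamper-evident decision record. The field names and hashing scheme below are illustrative assumptions, not a DoD-mandated schema:

```python
import hashlib
import json
import time

def audit_record(model_version: str, model_weights: bytes,
                 inputs: dict, prediction, confidence: float) -> dict:
    """Build one auditable record per AI decision (illustrative schema)."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        # Hash of the deployed weights ties the decision to an exact artifact
        "model_hash": hashlib.sha256(model_weights).hexdigest(),
        "inputs": inputs,
        "prediction": prediction,
        "confidence": confidence,
    }
    # Hash over the serialized record supports later integrity checks
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    return record

rec = audit_record("cargo-cv-1.4.2", b"fake-weights",
                   {"image_id": "IMG-001"}, "flagged", 0.91)
```

Storing the weights hash alongside each decision is what makes the lineage auditable end to end: any record can be matched back to the exact model artifact that produced it.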

4. Reliable

AI systems have explicit, well-defined uses, and their safety, security, and robustness are tested and assured.

What this means for ATO:

  • Adversarial testing against known attacks
  • Out-of-distribution detection
  • Stress testing under operational conditions
  • Defined failure modes and fallback procedures

5. Governable

Humans maintain appropriate understanding of and control over AI systems.

What this means for ATO:

  • Model risk governance structure
  • Change management procedures
  • Operator training requirements
  • Kill switch / disablement procedures
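A minimal sketch of the kill-switch pattern, assuming a simple Python wrapper (the `GovernedModel` name and API are invented for illustration):

```python
import threading

class GovernedModel:
    """Wrap an AI component with an operator-controlled kill switch.
    When disabled, calls route to a manual-review fallback instead of
    the model. Illustrative pattern, not a DoD-specified interface."""

    def __init__(self, predict_fn, fallback_fn):
        self._predict = predict_fn
        self._fallback = fallback_fn
        self._enabled = threading.Event()
        self._enabled.set()  # model enabled by default

    def disable(self):
        """Operator kill switch: stop routing decisions to the model."""
        self._enabled.clear()

    def enable(self):
        self._enabled.set()

    def decide(self, x):
        if self._enabled.is_set():
            return self._predict(x)
        return self._fallback(x)

gate = GovernedModel(lambda x: "ai:approve", lambda x: "manual-review")
```

The key property an AO looks for is that disablement is instantaneous and does not leave the mission without a decision path, which is why the fallback is wired in at construction time rather than bolted on later.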

NIST AI Risk Management Framework (AI RMF) Alignment

NIST published the AI Risk Management Framework (AI RMF 1.0) in January 2023. It's voluntary for commercial use but increasingly required for federal AI systems.

Your ATO submission should map your AI risk management controls to NIST AI RMF functions:

GOVERN

  • AI governance structure and accountability
  • Model risk management policy
  • Roles and responsibilities
  • Risk tolerance and appetite

MAP

  • AI system inventory and classification
  • Identification of stakeholders and affected parties
  • Risk mapping (technical, societal, operational)
  • Regulatory and compliance requirements

MEASURE

  • Model performance metrics (accuracy, precision, recall, F1)
  • Fairness metrics (demographic parity, equalized odds)
  • Robustness metrics (adversarial accuracy, OOD detection)
  • Explainability assessment
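As one concrete instance of the fairness metrics listed above, equalized odds can be checked by comparing true- and false-positive rates per group. This is a self-contained sketch, not a validated testing tool:

```python
def equalized_odds_gaps(y_true, y_pred, group):
    """Per-group TPR/FPR and the max cross-group gaps. Equalized odds
    asks both rates to be (approximately) equal across groups; the ATO
    test report documents the gaps plus any justified disparities."""
    rates = {}
    for g in set(group):
        idx = [i for i, gi in enumerate(group) if gi == g]
        tp = sum(1 for i in idx if y_true[i] == 1 and y_pred[i] == 1)
        fn = sum(1 for i in idx if y_true[i] == 1 and y_pred[i] == 0)
        fp = sum(1 for i in idx if y_true[i] == 0 and y_pred[i] == 1)
        tn = sum(1 for i in idx if y_true[i] == 0 and y_pred[i] == 0)
        tpr = tp / (tp + fn) if tp + fn else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        rates[g] = (tpr, fpr)
    tprs = [t for t, _ in rates.values()]
    fprs = [f for _, f in rates.values()]
    return rates, max(tprs) - min(tprs), max(fprs) - min(fprs)
```

A gap near zero on both rates is evidence toward the Equitable principle; a large gap is not automatically disqualifying, but it must be documented and justified.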

MANAGE

  • Risk prioritization and mitigation plans
  • Ongoing monitoring and alerting
  • Incident response procedures
  • Continuous improvement processes

The 12-Month ATO Roadmap

Months 1-3: Documentation & Assessment

Weeks 1-2: AI System Inventory

  • List all AI/ML components in your system
  • Classify by risk level (High/Medium/Low)
  • Identify data sources and dependencies

Weeks 3-6: Model Documentation

For each AI component, create:

  • Model Card: Architecture, training data, performance metrics
  • Datasheet: Data sources, collection methods, known biases
  • Risk Assessment: Failure modes, edge cases, limitations
  • Use Case Description: Intended use, operator training, human oversight
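A model card can start as a simple structured record before it becomes a polished document. The fields below follow the common model-card pattern, and every value shown is hypothetical:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Minimal model-card skeleton for the ATO package. The exact
    schema your AO expects may differ; these fields are a starting set."""
    name: str
    version: str
    architecture: str
    intended_use: str
    training_data: str
    metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)
    human_oversight: str = "human-on-the-loop"

card = ModelCard(
    name="cargo-inspection-cv",  # illustrative component name
    version="1.4.2",
    architecture="ResNet-50 image classifier",
    intended_use="Flag cargo images for human inspector review",
    training_data="Internal cargo imagery, documented in datasheet",
    metrics={"accuracy": 0.94, "recall": 0.97},
    known_limitations=["Low-light imagery underrepresented"],
)
```

Keeping the card machine-readable from day one pays off later: the same record feeds the AI Annex, the test reports, and the change-management process when the model is retrained.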

Weeks 7-12: Gap Analysis

  • Compare current state to DoD AI Ethical Principles
  • Identify missing controls (bias testing, adversarial testing, etc.)
  • Estimate remediation effort and cost

Months 4-6: Testing & Validation

Bias & Fairness Testing (4-6 weeks)

If your AI makes decisions about people:

  • Test predictions across demographic groups
  • Calculate disparate impact ratios
  • Document any unavoidable disparities (with justification)
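The disparate impact ratio mentioned above is simply the ratio of selection rates between groups. A minimal sketch follows; the 0.8 threshold comes from the "four-fifths rule" in EEOC employment-selection guidance, applied here only as a rough benchmark:

```python
def disparate_impact_ratio(y_pred, group, favorable=1):
    """Ratio of the lowest to highest per-group selection rate.
    Values below ~0.8 are conventionally flagged for review."""
    rates = {}
    for g in set(group):
        idx = [i for i, gi in enumerate(group) if gi == g]
        rates[g] = sum(1 for i in idx if y_pred[i] == favorable) / len(idx)
    return min(rates.values()) / max(rates.values()), rates
```

Running this across every demographic slice in the validation set, and archiving the per-group rates, is the kind of artifact reviewers expect to see in the bias test report.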

Adversarial Robustness Testing (4-6 weeks)

Test your AI against attacks:

  • Evasion attacks: Can adversaries manipulate inputs to fool the model?
  • Poisoning attacks: Can attackers corrupt training data?
  • Model extraction: Can attackers steal your model?
  • Privacy attacks: Can attackers extract training data?
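Evasion is the easiest of these attack classes to demonstrate. Below is a toy Fast Gradient Sign Method (FGSM) run against a hand-coded logistic-regression scorer: a sketch of the technique red teams apply to real models, not an operational test harness.

```python
import math

def fgsm_attack(w, b, x, y, eps):
    """One-step FGSM against sigmoid(w.x + b) with cross-entropy loss.
    The input gradient is (p - y) * w, so each feature is nudged by
    eps in the sign of its gradient. Toy stand-in for real evasion tests."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))        # model's probability of class 1
    grad = [(p - y) * wi for wi in w]     # dLoss/dx for cross-entropy
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

def predict(w, b, x):
    """Hard label from the same linear scorer."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

With `w = [2.0, -1.0]`, `b = 0.0`, a correctly classified input `x = [0.3, 0.1]` flips class after an `eps = 0.5` perturbation. The documented outcome of such runs (attack success rate versus perturbation budget) is what belongs in the ATO test report.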

Stress Testing (2-4 weeks)

  • Test under operational conditions (noise, occlusion, weather, etc.)
  • Test edge cases and rare scenarios
  • Measure out-of-distribution detection capability
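Out-of-distribution detection can be as simple as thresholding a distance from the training distribution. This z-score sketch stands in for richer detectors (Mahalanobis distance, energy scores); the threshold and feature are illustrative:

```python
import statistics

def ood_score(train_values, x):
    """Z-score of a new input feature against training statistics.
    Inputs scoring above ~3 get routed to human review instead of
    the model. Threshold and feature choice are assumptions here."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    return abs(x - mu) / sigma

# Hypothetical training-time feature values (e.g., image brightness)
train = [10.0, 11.0, 9.0, 10.5, 9.5]
```

Whatever detector is used, the ATO package needs the measured detection rate on held-out anomalies, not just a claim that the capability exists.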

Months 7-9: Remediation & Hardening

Fix identified issues:

  • Bias mitigation: Retrain with fairness constraints or post-processing
  • Adversarial hardening: Adversarial training, input validation, anomaly detection
  • Monitoring deployment: Build dashboards for real-time model performance
  • Explainability tools: Integrate SHAP, LIME, or other explanation methods
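Monitoring dashboards typically track distribution drift between the validation baseline and live traffic. The Population Stability Index is a common choice; the equal-width binning and the conventional 0.25 "investigate" threshold below are illustrative, not a standard:

```python
import math

def population_stability_index(expected, actual, bins=5):
    """PSI between a baseline score sample and live scores, using
    equal-width bins over the combined range. PSI > 0.25 is a common
    alert threshold; binning choices here are illustrative."""
    lo, hi = min(expected + actual), max(expected + actual)
    width = (hi - lo) / bins or 1.0
    def frac(vals, k):
        n = sum(1 for v in vals
                if lo + k * width <= v < lo + (k + 1) * width
                or (k == bins - 1 and v == hi))
        return max(n / len(vals), 1e-6)   # floor avoids log(0)
    return sum((frac(actual, k) - frac(expected, k))
               * math.log(frac(actual, k) / frac(expected, k))
               for k in range(bins))
```

Wiring a metric like this into an alert, rather than a chart someone checks occasionally, is what turns "monitoring deployment" from a checkbox into a control the AO can credit.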

Months 10-12: ATO Submission & Review

Month 10: Prepare ATO Package

Your ATO package should include:

  • System Security Plan (SSP): Standard ATO documentation
  • AI Annex to SSP: AI-specific controls and testing
  • Model Cards: For each AI component
  • Test Reports: Bias, adversarial, stress testing results
  • Risk Assessment: Residual risks and mitigation plans
  • Operational Procedures: Human oversight, incident response, monitoring
  • Training Plan: How operators will be trained on the AI system

Month 11: Coordinate with Authorizing Official

  • Schedule pre-submission meeting with AO
  • Present risk assessment and testing results
  • Address any initial concerns

Month 12: Formal ATO Review

  • Submit complete ATO package
  • Respond to AO questions and requests for additional information
  • Receive ATO decision (approve, conditional approve, or deny)

Common Reasons AI ATOs Are Denied

❌ Insufficient Bias Testing

Issue: "We tested overall accuracy but not fairness across demographic groups."

Fix: Conduct demographic fairness testing even if not legally required. Document results.

❌ No Adversarial Testing

Issue: "We assume adversaries won't attack our AI."

Fix: Conduct red team testing against known adversarial attacks. Document vulnerabilities and mitigations.

❌ Incomplete Documentation

Issue: "Model was developed by contractor. We don't have training data documentation."

Fix: Require model cards and datasheets from vendors. Build documentation if it doesn't exist.

❌ Inadequate Human Oversight

Issue: "AI makes fully autonomous decisions without human review."

Fix: Implement human-in-the-loop or human-on-the-loop controls. Define escalation procedures.

❌ No Fallback Plan

Issue: "System becomes inoperable if AI fails."

Fix: Design manual fallback procedures. Test transition from AI to manual mode.

Expedited ATO: Can You Fast-Track?

Some agencies offer expedited ATO paths (6-9 months instead of 18 months):

DoD Provisional Authorization (PA)

  • Provisional approval for low/moderate impact systems
  • Faster review (6-9 months typical)
  • Requires annual re-assessment

FedRAMP Authorization

  • If your AI is a cloud service, FedRAMP may be the path
  • Leverage existing FedRAMP authorization if you're built on AWS/Azure/GCP
  • Add AI-specific controls as an overlay

COTS/GOTS Fast Path

  • Commercial-off-the-shelf (COTS) AI systems may qualify for streamlined review
  • Requires vendor to provide pre-validated AI safety documentation
  • Still requires customer to validate for their specific use case

Cost Breakdown

Typical costs for AI ATO preparation:

AI ATO Budget

  • Documentation & gap analysis: $50K - $100K
  • Bias testing: $30K - $80K
  • Adversarial testing: $50K - $150K
  • Stress testing: $20K - $60K
  • Remediation (varies widely): $100K - $500K
  • Independent validation: $40K - $100K
  • Total: $290K - $990K

Case Study: Vision AI for Logistics

Client: Defense Contractor (Logistics AI)

System: Computer vision AI for automated cargo inspection

Challenge: ATO denied due to insufficient adversarial testing and lack of explainability

Our Approach:

  • Conducted red team testing (40+ adversarial scenarios)
  • Implemented saliency maps for visual explanations
  • Added human-on-the-loop for high-risk detections
  • Built monitoring dashboard for real-time performance

Result: ATO approved in 4 months (after initial denial). System now deployed across 12 bases.

Ongoing Compliance: Post-ATO Requirements

Getting ATO is not the end. You must maintain compliance:

  • Annual Assessment: Re-validation of AI safety controls
  • Continuous Monitoring: Real-time tracking of model performance
  • Incident Reporting: Notify AO within 24-72 hours of AI failures
  • Change Management: Re-assess when model is retrained or data sources change

Next Steps

If you're preparing for AI ATO:

  1. Start early: 18 months before planned deployment
  2. Budget appropriately: $300K - $1M depending on complexity
  3. Engage AO early: Don't wait until submission to get feedback
  4. Consider hiring experts: AI safety firms can accelerate the process

Preparing for AI ATO?

We've helped 15+ defense contractors obtain ATO for AI systems. Our team includes former DoD program managers, NIST AI RMF contributors, and AI safety researchers.

Schedule Confidential Consultation

Typical engagement: 6-12 months • Pricing: $100K-$500K

About BeaconShield Labs

We provide AI safety validation, red teaming, and ATO preparation for defense and aerospace contractors. Our team includes former DoD program managers, federal auditors, and AI safety researchers.