At a White House meeting on Tuesday, Pentagon technology chief Emil Michael reportedly told tech executives that the U.S. military wants AI models accessible across all security levels, including classified networks used for highly sensitive operations such as mission planning and weapons targeting.

OpenAI this week finalized an agreement covering genai.mil, an open network serving more than three million Department of Defense personnel. The arrangement lifted many of OpenAI's standard usage restrictions, though certain safeguards remain in place. Google and xAI have reached similar agreements. Expanding access to classified networks, however, would require a separate agreement, OpenAI said.

Tensions Over Safeguards

Negotiations with Anthropic appear more complex. Although its AI assistant Claude is already available on classified systems via third-party providers, Anthropic has declined to allow its technology to be used for autonomous weapons control or domestic surveillance. At the same time, the company has stated that it aims to help the U.S. maintain its AI leadership.

AI researchers continue to warn about unresolved risks. Large language models can still hallucinate, producing plausible but false output, and such errors in sensitive military environments could have severe consequences. AI companies attempt to mitigate these risks through embedded safeguards and usage policies.

The Pentagon reportedly views such restrictions as unnecessary. Military officials argue that commercial AI tools should be usable without additional manufacturer-imposed constraints, provided their use complies with U.S. law.

The dispute highlights a growing divide between national security priorities and private-sector AI governance frameworks.