The Pentagon has acknowledged that the US military actively deployed Anthropic’s Claude AI model during the January 3 operation to capture Venezuelan President Nicolás Maduro.
According to defence officials, Claude was used not only in preparatory planning but also in real-time decision support within the combat domain, potentially marking the first documented instance of a large language model being employed in an active military operation.
The revelation contradicts Anthropic’s publicly stated usage policies, which explicitly prohibit the use of Claude to “facilitate violence, develop weapons or conduct surveillance.”
An Anthropic spokesperson responded, “We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise. Any use of Claude — whether in the private sector or across government — is required to comply with our Usage Policies.”
While no US personnel were killed in the raid, dozens of Venezuelan civilians and Cuban security personnel reportedly died during the operation.
The US military accessed Claude through a classified partnership with Palantir Technologies, which provides secure platforms for the Pentagon’s most sensitive operations. The arrangement gives the military access to capabilities unavailable to ordinary users.
The disclosure comes at an embarrassing time for Anthropic, which has positioned itself as the most safety-conscious major AI company.
CEO Dario Amodei has repeatedly warned of existential risks from unconstrained AI development.
On Monday, Mrinank Sharma, head of Anthropic’s Safeguards Research Team, resigned abruptly, issuing a cryptic statement that “the world is in peril.”
Days later, the company invested $20 million in a political advocacy group pushing for stronger AI regulation.
The incident has reignited debate over the military use of frontier AI systems and the effectiveness of voluntary corporate safeguards in classified government environments.