Claude on the US blacklist. Anthropic goes to court against the Pentagon

By suing the Pentagon, Anthropic is challenging the foundations of Silicon Valley's collaboration with the military, placing the ethical "guardrails" of its artificial intelligence above the demands of the Department of Defense. The legal escalation will determine whether the creators of groundbreaking models retain control over their ultimate purpose or are forced to bow to national security priorities.


Anthropic, the artificial intelligence lab positioning itself as a "safe" alternative to the giants, has launched an unprecedented legal offensive against the federal government. Lawsuits filed in California and the District of Columbia seek to block the Pentagon's decision to place the company on a national security blacklist. The clash is not just a dispute over arms contracts; it is a fundamental test of who ultimately controls the "brains" of artificial intelligence: Silicon Valley or Washington.

The conflict escalated when Defense Secretary Pete Hegseth imposed a supply chain risk designation on Anthropic. The trigger was the refusal of Dario Amodei, Anthropic's CEO, to remove the ethical "guardrails" restricting the use of Claude in autonomous weapons systems and for domestic surveillance. The Pentagon's position is that the law, not private corporations, decides how the country is defended, and that military operations demand full flexibility. Anthropic counters that current technology is too unreliable to be entrusted with life-and-death decisions, and that compelling its use violates free speech and due process.

The battle has immediate consequences. Although Amodei insists the restrictions are narrow in scope, market uncertainty is apparent. The presidential order halting Claude's use across government undercuts the company's image as a stable partner. As Wedbush's Dan Ives notes, the corporate sector may temporarily "put its pencils down" on new Anthropic deployments while waiting for legal clarity.

While Anthropic fights in the courts, the competition is wasting no time. OpenAI was quick to declare its principles aligned with the needs of the Department of Defense, taking the lead in government business. At the same time, researchers from Google and OpenAI expressed solidarity with Anthropic, warning in an amicus curiae brief that punishing companies for prioritizing safety would stifle innovation and silence critical debate in the industry. The outcome of this case will shape the new architecture of the relationship between the state and AI developers, defining whether a model's ethics can be negotiated with the government at all.
