When US Secretary of Defense Pete Hegseth called the development of artificial intelligence a military arms race in January, relations between the government and Silicon Valley entered a new, turbulent phase. The US administration is now exerting unprecedented pressure on key players in the AI sector, and meeting growing resistance from the developers of these technologies.
The growing conflict was sparked by an ultimatum issued to Anthropic. The Pentagon is reportedly threatening to invoke the Defense Production Act to force the company to adapt its language models to the needs of the US military; a refusal would see the company designated a supply chain risk. In response, Anthropic has made clear that it will not make its models available for mass surveillance of citizens, or to power weapons capable of killing autonomously without close human oversight.
The situation has triggered a wave of solidarity among competing companies. A group of Google and OpenAI employees has signed a joint petition entitled ‘We will not be divided’. Its signatories warn that the Department of Defense is attempting classic divide-and-conquer tactics, hoping to extract concessions from the tech giants that the industry’s AI-safety leaders have refused to make. The initiative aims to forge a united industry front: employees are calling on their companies’ boards to uphold existing standards and not to hand technology over to the military without proper ethical safeguards.
From a business perspective, threatening to use extraordinary national security powers against private technology companies takes the administration into uncharted territory. As Dean Ball, a former White House technology policy advisor, notes, Anthropic faces the spectre of quasi-nationalisation or exclusion from the market. The move also sends a clear and worrying message to the wider innovation ecosystem: doing business with the government carries a serious risk of losing operational independence.
How this plays out will define not only the future of weapons contracts in Silicon Valley, but above all the limits on the commercialisation and control of the most powerful artificial-intelligence models.

