There has been a sharp cooling in the relationship between Silicon Valley and the Pentagon that could redefine the business model of leading artificial intelligence labs. The Trump administration, seeking full operational freedom in the use of new technologies, is putting in place strict guidelines that call into question the autonomy of companies such as Anthropic.
The General Services Administration’s decision to terminate contracts with Anthropic and label the company a “supply chain risk” signals that the government will not tolerate compromises on control of AI tools. The flashpoint appears to be the safety mechanisms built into the models, which the Department of Defense believes cripple their military and civilian utility.
A key element of the new strategy is the requirement to grant the US government an irrevocable license to use AI systems for “any lawful purpose”. The new GSA guidelines go further, reaching into the design of the models themselves: companies seeking federal contracts may not “intentionally encode ideological judgments” into the outputs their systems generate. This is a direct blow to content-filtering mechanisms, which the administration views as a form of censorship or bias.
The current situation creates a clear division in the market. Companies that offer the full flexibility and “neutrality” Washington demands will gain privileged access to the public sector. Those that keep restrictive safety guardrails may be pushed out of the world’s most important procurement market, drastically altering their valuations and growth prospects.
