Competition for dominance in the AI security sector is gaining momentum as OpenAI introduces its GPT-5.4-Cyber model in direct response to the successes of rival Anthropic. The new variant of the flagship model prioritises greater operational freedom for researchers, which is crucial in the race to patch vulnerabilities in critical infrastructure.
Tuesday’s release of GPT-5.4-Cyber is more than just another iteration of a flagship model; it marks a strategic shift in the boundaries of what AI developers allow their users to do. While Anthropic is betting on a rigorously controlled initiative for a select few, OpenAI is opting for a ‘more permissive’ model. In practice, this means loosening the safety guardrails that have so far often prevented researchers from fully analysing malicious code or simulating attacks, for fear of violating the platform’s own security policies.
The key to OpenAI’s strategy, however, is not just the technology but the ecosystem. The company is dramatically scaling the Trusted Access for Cyber (TAC) programme, opening it up to thousands of individual experts and hundreds of teams responsible for critical infrastructure. The introduction of multi-level verification is a pragmatic answer to the ‘dual-use’ problem of artificial intelligence: higher levels of trust unlock the more powerful features of GPT-5.4-Cyber, giving defenders a tool as effective as the attackers’ own, but within a legal and ethical framework.
In this clash, OpenAI is betting on scale and fewer restrictions for vetted partners, hoping that the broad ‘white hat’ community will become its strongest asset. The decision carries risks, but in the face of increasingly sophisticated threats, a strategy of ‘controlled openness’ may prove the only effective way to secure the digital future.

