Controversial Claude Mythos model leaked; Anthropic investigates security incident

Unauthorised access to the Claude Mythos model strikes at the foundation of Anthropic's strategy, which rests on the strictly controlled deployment of powerful AI systems. The leak, which occurred via an external vendor on the day of the 'Project Glasswing' launch, calls the startup's operational security into question and could complicate its relationship with regulators.


Anthropic, one of the leading forces in the artificial intelligence sector, is facing a serious reputational and operational challenge. As reported by Bloomberg News, the company's most advanced model, Claude Mythos Preview, was leaked to a small group of unauthorised users. The incident comes at a crucial moment for the startup, which is currently positioning its technology as the foundation of a new era in cyber security.

The leak occurred on 7 April, the very day Anthropic announced 'Project Glasswing'. The initiative was intended to let selected organisations test the Mythos model under controlled conditions, chiefly to strengthen their defences against digital attacks. Meanwhile, a group of users on a private online forum gained access to the tool almost immediately after the official announcement. Although reports indicate the model has not been used for criminal purposes to date, the fact that it continues to operate outside the manufacturer's control raises legitimate concerns.

A spokesperson for Anthropic confirmed that the company is investigating the matter, pointing to a third-party vendor environment as the likely source of the leak. The incident could complicate Anthropic's relationship with regulators. Mythos is a model with an unprecedented ability to identify software vulnerabilities. It is a 'dual-use' tool: in the hands of defenders it patches systems, but in the hands of hackers it can become a precision weapon. The loss of control over such a powerful resource, even a temporary one, reinforces the arguments of those who advocate strict oversight of models critical to national security. Anthropic must now prove that it can effectively protect the technology that is supposed to protect the world.
