Fujitsu is launching the Frontria consortium, uniting more than 50 organisations to put artificial intelligence security into practice. The initiative is a direct response to mounting regulatory pressure, including the EU AI Act, and to the growing problem of disinformation in business.
The rapid adoption of generative artificial intelligence in the corporate environment has brought not only efficiency gains but also unprecedented risks to data reliability and legal certainty. Faced with these challenges, Fujitsu has decided to go a step beyond standard implementations by establishing the international Frontria consortium. The project, launching in the 2025 financial year, aims to create a standardised front line against disinformation and AI model hallucinations.
Frontria is not intended to be just a discussion forum, but a platform for technology exchange. At launch, the initiative brings together more than 50 players from key markets, including Japan, Europe, North America and India. Fujitsu plans to expand this ecosystem aggressively, doubling membership to more than 100 organisations before the end of the 2026 financial year. The main target is high-risk, highly regulated sectors such as finance, insurance, media and law.
The consortium’s operating model is based on a ‘technology pool’ concept. Instead of pursuing isolated research, Frontria members will share intellectual property, data and ready-made implementation scenarios (use cases). Fujitsu, in its leadership role, will give partners trial access to its key AI Trust technologies. These tools focus on detecting forgeries (deepfakes), ensuring the impartiality of algorithms and verifying information sources.
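To make the ‘technology pool’ idea concrete, the sketch below shows how a member organisation might chain the three kinds of checks the article names before publishing or acting on AI-generated content. Fujitsu has not published an interface for its AI Trust tools, so every name here (vet_content, detect_forgery, audit_fairness, verify_source, TrustReport) is a hypothetical placeholder, not the consortium’s actual API.

```python
# Illustrative sketch only: all classes and functions below are hypothetical
# stand-ins for pooled trust checks of the kind the consortium describes:
# deepfake detection, algorithmic fairness auditing and source verification.

from dataclasses import dataclass


@dataclass
class TrustReport:
    deepfake_score: float   # 0.0 = likely authentic, 1.0 = likely forged
    bias_flags: list[str]   # fairness issues found in the generating model
    sources_verified: bool  # whether all cited sources could be confirmed


def vet_content(media: bytes, model_id: str, citations: list[str]) -> TrustReport:
    """Run the pooled trust checks and aggregate the results into one report."""
    deepfake_score = detect_forgery(media)            # hypothetical detector
    bias_flags = audit_fairness(model_id)             # hypothetical bias audit
    sources_verified = all(verify_source(u) for u in citations)
    return TrustReport(deepfake_score, bias_flags, sources_verified)


# Placeholder implementations so the sketch runs end to end.
def detect_forgery(media: bytes) -> float:
    return 0.0  # a real detector would analyse the media itself


def audit_fairness(model_id: str) -> list[str]:
    return []   # a real audit would probe the model for skewed outputs


def verify_source(url: str) -> bool:
    return url.startswith("https://")  # a real check would resolve and cross-reference


if __name__ == "__main__":
    report = vet_content(b"...", "gen-model-v1", ["https://example.com/report"])
    print(report)
```

The design point of pooling is visible even in this toy version: each check can come from a different member organisation, while the shared report format lets any partner consume the results.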
The move should be read as a strategic attempt to get ahead of upcoming regulation. Compliance with rules such as the EU AI Act is becoming a business imperative for companies, not merely a matter of image. Through Frontria, organisations are set to gain access to proven risk-mitigation mechanisms, allowing them to commercialise AI-based solutions more securely. The developer community around the project is expected to accelerate this process by building applications that translate the theoretical framework of secure AI into tangible, marketable products.
