Google has unveiled SynthID Detector, a new portal for detecting AI-generated content. It extends the company's earlier watermarking technology for AI-generated images to text, audio and video. The tool analyses whether content produced by models such as Gemini, Imagen, Lyria or Veo contains an invisible SynthID watermark.
Google boasts that more than 10 billion items have already been watermarked with this system. Together with an open-source release of the text-watermarking mechanism and integration with NVIDIA Cosmos, this makes clear that the company is moving to establish its own standard for labelling AI content. The partnership with GetReal Security is expected to further increase SynthID's market recognition and interoperability.
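To make the open-source text mechanism concrete, here is a minimal sketch of watermarked generation using the SynthID Text integration that shipped in Hugging Face Transformers (version 4.46 and later). The model name, prompt and key values below are illustrative placeholders, not Google's production configuration, whose keys are private:

```python
# A minimal sketch of SynthID Text watermarking via Hugging Face Transformers.
# Assumes transformers >= 4.46; model and keys are placeholders for illustration.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

model_name = "google/gemma-2-2b"  # assumption: any causal LM with sampling works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The watermark is seeded by a private key sequence; only a party holding the
# same keys can later score text for the watermark's statistical signature.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57],  # placeholder keys
    ngram_len=5,  # the signal is spread across 5-token windows
)

inputs = tokenizer(["Write a short note about watermarking."], return_tensors="pt")
output_ids = model.generate(
    **inputs,
    watermarking_config=watermarking_config,  # biases sampling to embed the mark
    do_sample=True,  # the watermark requires sampling, not greedy decoding
    max_new_tokens=100,
)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```

Detection then works by re-scoring a passage against the same keys; the open-source repository also ships a Bayesian detector that has to be calibrated per model. That asymmetry is worth noting: anyone can embed a watermark with their own keys, but reliably detecting Google's watermarks requires Google's keys, which is precisely what makes a hosted portal like SynthID Detector the natural distribution model.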
Also notable: for now the portal is available only to a select group of users, namely journalists, academics and media professionals, which suggests Google wants to win over opinion leaders and industry ethics watchdogs before the tool reaches the general public.
The conclusions? Google is not so much reacting to the problem of misinformation as trying to define it on its own terms. SynthID is not just a technology – it is also an attempt to influence the shape of future AI regulations. Through open-source components and ecosystem partnerships, Google is positioning itself as a provider of trust infrastructure for AI-generated content.
The question is whether the market – and the competition – will accept this model or look for more independent solutions.