
Google SynthID system will watermark AI-generated images


AI-generated images have long been a major cause of concern for experts and governments, as threat actors can exploit them to spread misinformation and hatred. Google may have found a solution: the company has announced a new system called SynthID, which automatically adds a digital watermark to AI-generated images.

Developed by Google’s subsidiary DeepMind, the SynthID tool embeds an invisible watermark into images generated by Imagen, Google’s text-to-image model. DeepMind explains that the system relies on two AI models, one for watermarking and another for identification, both trained on a diverse dataset of images. For identification, the tool uses a tiered classification based on three levels of digital watermark confidence: detected, possibly detected, and not detected.
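DeepMind has not published SynthID's internals, but the three-tier output described above can be illustrated with a minimal sketch. Here, a hypothetical detector score in [0, 1] is mapped to one of the three confidence levels; the function name, score range, and threshold values are all assumptions, not Google's actual implementation.

```python
from enum import Enum


class WatermarkConfidence(Enum):
    """The three confidence tiers SynthID reports, per DeepMind's description."""
    DETECTED = "detected"
    POSSIBLY_DETECTED = "possibly detected"
    NOT_DETECTED = "not detected"


def classify_watermark(score: float, upper: float = 0.9, lower: float = 0.3) -> WatermarkConfidence:
    """Map a hypothetical detector score to a confidence tier.

    The thresholds here are illustrative placeholders; SynthID's real
    decision logic and score scale are not public.
    """
    if score >= upper:
        return WatermarkConfidence.DETECTED
    if score >= lower:
        return WatermarkConfidence.POSSIBLY_DETECTED
    return WatermarkConfidence.NOT_DETECTED
```

A tiered output like this lets the tool communicate uncertainty rather than forcing a binary yes/no verdict, which matches DeepMind's caveat that detection is not 100% accurate.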


It is important to note that the system is not 100% accurate, but it can effectively differentiate between images that likely contain a watermark and those that do not.

“While generative AI can unlock huge creative potential, it also presents new risks, like enabling creators to spread false information — both intentionally or unintentionally. Being able to identify AI-generated content is critical to empowering people with knowledge of when they’re interacting with generated media and for helping prevent the spread of misinformation,” reads DeepMind’s blog post.

A step in the right direction

The announcement of this tool follows a White House meeting in July, during which the US government underscored the need for “watermarking audio and visual content to help make it clear that content is AI-generated.” Governments around the globe have started implementing similar measures, including China, which recently issued regulations requiring generative AI vendors to label their output, covering both textual and visual content.

Similar to Google, several other influential companies in the AI industry have committed to watermarking AI-generated content. Microsoft, for instance, has pledged to watermark AI-generated images and videos using cryptographic techniques, and Midjourney has established guidelines that include indicators marking content produced with generative AI tools.