Google Expands Gemini AI Verification Tool to Detect AI-Generated Videos

Google is pushing deeper into AI transparency by extending Gemini’s verification tools to video content created or edited with its own AI models. The move aims to help users quickly check whether a video was generated using Google AI, as concerns over deepfakes continue to grow.

With the update, users can upload a video to Gemini and ask a direct question: “Was this generated using Google AI?” Gemini then analyzes both the visuals and the audio for Google’s proprietary watermark, known as SynthID. Unlike simpler detection tools that return a yes-or-no verdict, Gemini also highlights the exact moments where the watermark appears in the video or its audio track.
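For developers, the same check could in principle be run programmatically. Below is a minimal sketch using Google’s google-genai Python SDK; the article describes the consumer Gemini app, so assuming the API-served model answers the verification question the same way is a guess on our part, and the model and file names are illustrative.

```python
# Sketch: posing the verification question to Gemini via the google-genai SDK.
# Assumption: the SynthID check described for the Gemini app also works
# through the API - the article only covers the consumer app.
import time

from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

# Upload the clip; the Files API processes video uploads asynchronously,
# so poll until the file leaves the PROCESSING state.
video = client.files.upload(file="clip.mp4")
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = client.files.get(name=video.name)

response = client.models.generate_content(
    model="gemini-2.5-flash",  # illustrative model choice
    contents=[video, "Was this generated using Google AI?"],
)
print(response.text)  # per the article, the app's answer cites SynthID matches
```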

Google first introduced this verification feature for images in November. That rollout was also limited to content created or edited with Google AI tools. By expanding it to video, the company is addressing a format that has become central to AI misuse and misinformation.

Watermarking, however, remains an imperfect solution. Some watermarks are easy to remove, as OpenAI discovered after launching Sora, its app for fully AI-generated videos. Google describes SynthID as “imperceptible,” suggesting it is harder to scrub. Still, it remains unclear how resistant the watermark is to removal, or whether other platforms will reliably detect and label SynthID-tagged content.

The problem is broader than watermark strength. While Google’s Nano Banana image generation model inside Gemini embeds C2PA metadata, there is still no unified labeling system across social platforms. As a result, AI-generated content can circulate without clear labels, letting deepfakes slip through moderation systems.
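Unlike an invisible watermark, C2PA metadata travels with the file and can be inspected directly. As a hedged illustration, the sketch below shells out to the Content Authenticity Initiative’s c2patool, one openly available reader for C2PA manifests; it assumes the tool is installed on PATH, and the file name is hypothetical.

```python
# Sketch: checking a media file for C2PA provenance metadata with c2patool
# (https://github.com/contentauth/c2patool), which prints the manifest
# store as JSON. Assumes c2patool is installed and on PATH.
import json
import subprocess

def read_c2pa_manifest(path: str) -> dict | None:
    """Return the C2PA manifest store as a dict, or None if absent."""
    result = subprocess.run(
        ["c2patool", path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None  # no manifest found, or the tool rejected the file
    return json.loads(result.stdout)

manifest = read_c2pa_manifest("downloaded_image.jpg")  # hypothetical file
if manifest is None:
    print("No C2PA metadata - provenance unknown.")
else:
    print("C2PA manifest present:", list(manifest.keys()))
```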

For now, Gemini’s video verification comes with clear limits. The tool supports videos up to 100 MB in size and no longer than 90 seconds. Google says the feature is available in every language and region where the Gemini app already operates.
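For anyone scripting around the feature, those limits are easy to pre-check locally before uploading. A small sketch, assuming ffprobe from FFmpeg is available on PATH; the 100 MB and 90-second caps come straight from Google’s stated limits.

```python
# Sketch: validating a clip against Gemini's stated video-verification
# limits (100 MB, 90 seconds) before uploading. Duration is read with
# ffprobe from FFmpeg, which must be installed and on PATH.
import os
import subprocess

MAX_BYTES = 100 * 1024 * 1024  # 100 MB cap from Google's stated limits
MAX_SECONDS = 90               # 90-second cap from Google's stated limits

def within_limits(path: str) -> bool:
    size_ok = os.path.getsize(path) <= MAX_BYTES
    duration = float(subprocess.check_output([
        "ffprobe", "-v", "error",
        "-show_entries", "format=duration",
        "-of", "default=noprint_wrappers=1:nokey=1",
        path,
    ]).strip())
    return size_ok and duration <= MAX_SECONDS

print(within_limits("clip.mp4"))  # illustrative file name
```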