YouTube has introduced a new AI-powered likeness detection tool to help creators safeguard their image and voice from deepfakes.
The tool detects videos that may have been synthetically generated to imitate a creator.
Initially, it will be available to members of the YouTube Partner Program, with plans to expand access over time.
Once enabled, the feature appears in the Content ID menu, the same place creators already track copyright issues, and flagged AI-generated videos can be reviewed through a dedicated dashboard.
How the Tool Works: Verification and Reporting
To use the tool, creators must complete a verification process to prevent misuse by imposters.
This includes:
Submitting a government-issued ID
Uploading a video selfie
Consenting to the processing of their data, which is stored securely on Google's servers
After verification, YouTube categorizes videos flagged as potential deepfakes by priority, allowing creators to focus on the most concerning cases first.
Creators can then request removal or archive the video for their records, while YouTube reviews and acts on the removal requests.
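YouTube has not published how this pipeline is implemented, but the workflow described above (flags sorted by priority, then a removal-or-archive decision per video) can be sketched roughly as follows. This is a minimal illustrative model, not YouTube's code: the FlaggedVideo fields, the confidence threshold, and the prioritize/review helpers are all assumptions.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional


class Action(Enum):
    REQUEST_REMOVAL = "request_removal"
    ARCHIVE = "archive"


@dataclass
class FlaggedVideo:
    video_id: str
    title: str
    confidence: float            # hypothetical likeness-match score, 0.0 to 1.0
    reach: int                   # hypothetical proxy for impact, e.g. view count
    action: Optional[Action] = None


def prioritize(flags: List[FlaggedVideo]) -> List[FlaggedVideo]:
    """Order flagged videos so the most concerning cases come first."""
    return sorted(flags, key=lambda v: (v.confidence, v.reach), reverse=True)


def review(flags: List[FlaggedVideo]) -> None:
    """Walk the prioritized queue and record a decision for each flag."""
    for video in prioritize(flags):
        if video.confidence >= 0.8:          # assumed threshold for illustration
            video.action = Action.REQUEST_REMOVAL
        else:
            video.action = Action.ARCHIVE    # keep lower-confidence flags on record


if __name__ == "__main__":
    queue = [
        FlaggedVideo("abc123", "Fake endorsement clip", confidence=0.93, reach=120_000),
        FlaggedVideo("def456", "Reaction compilation", confidence=0.55, reach=4_000),
    ]
    review(queue)
    for v in queue:
        print(v.video_id, v.action.value)
```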
Creator Control and Privacy
Creators have full control over participation.
Through the dashboard, they can deactivate the tool at any time, and YouTube will stop scanning for their likeness within 24 hours.
YouTube is still refining the tool, acknowledging that it may occasionally flag a creator’s own content.
Piloted since December 2024, this initiative reflects YouTube’s effort to protect authenticity and combat AI-driven impersonation.
In Short
YouTube’s AI likeness detection tool helps creators identify and manage deepfake content.
It requires ID verification, flags suspicious AI-generated videos, allows removal requests, and gives creators full control to enable or disable the feature anytime.