YouTube Now Requires Creators to Disclose AI Content
YouTube now requires content creators to label AI-generated content. The label applies only to AI-generated content that appears realistic and might be mistaken for a real person, event, or place.
Creators can see the new setting in Creator Studio while uploading or posting content. A checklist will appear asking them to disclose “altered or synthetic” content that looks realistic. That includes things like altering footage of real events and places, showing a real person saying or doing something they didn’t, or depicting a realistic-looking scene that never happened.
On the viewers’ side, the AI label will appear in the expanded description for most videos. But for sensitive content like health, elections, or finance, YouTube might show a label on the video itself. Along with the AI label in the description, viewers will see additional information stressing that “sound or visuals were significantly edited or digitally generated.”
YouTube won’t require creators to disclose AI usage unless the content could mislead viewers into thinking it’s real. For use cases such as script generation, automatic captioning, and content ideation, creators don’t need to disclose AI use.
Additionally, creators won’t have to disclose AI use when the content looks “clearly unrealistic, animated,” or involves inconsequential alterations such as lighting or color adjustments, beauty filters, or other effects.
The AI disclosure requirement comes at a time when generative AI tools are becoming more sophisticated and people have a hard time distinguishing real content from fake content.
The number of malicious actors creating AI content to mislead the public is on the rise. Recently, fake explicit images of Taylor Swift circulated on X. Deepfake robocalls of President Joe Biden discouraging people from voting in the primary elections are another example of the negative impact of realistic AI content, especially in a year in which four billion people across 40 countries, including the US, are set to vote in elections.
YouTube promised to establish enforcement measures for creators who “consistently choose not to disclose” realistic AI content. The video-sharing platform didn’t specify what consequences these creators could face, only noting that it “may add a label even when a creator hasn’t disclosed it.”
Viewers will start seeing AI labels across YouTube videos in the coming weeks – first on the mobile app, followed by desktop and TV.