YouTube Will Require Disclosure of AI Content


YouTube is set to implement new policy changes next year, requiring creators to disclose the use of generative AI in videos, especially for content depicting sensitive topics such as politics and health issues.

These changes are a response to the rapidly advancing capabilities of generative AI in creating realistic-looking videos.

Under the new policies, YouTube will:
– Require creators to disclose if generative AI has been used to create scenes that depict fictional events or show real people saying things they did not actually say.
– Allow individuals to request the removal of content that simulates an identifiable person, including their face or voice. This removal request, however, will not be automatically granted, with a higher threshold for moderation applied to satire, parody, or content involving public figures.
– Establish a separate process for music industry partners to seek the removal of content that imitates an artist’s unique singing or rapping voice.
– Ensure full disclosure of any generative AI tools used in YouTube’s own content production.

The disclosure requirement is mandatory for creators, and failure to comply could lead to content removal or other penalties. YouTube emphasizes that while AI can enable powerful storytelling, it also has the potential to mislead viewers, particularly if they are not aware that the content has been altered or synthetically created.

The manner in which AI usage is disclosed to viewers will depend on the sensitivity of the content. For most videos, the disclosure will appear on the video’s description screen. However, for videos addressing sensitive topics like politics, military conflicts, and health issues, YouTube plans to make these labels more prominent.

YouTube also noted that all its standard content guidelines, including those governing violence and hate speech, will apply to AI-generated videos. This move by YouTube reflects a growing awareness of the ethical implications and potential risks associated with AI-generated content, particularly in the context of misinformation and the integrity of online information.

Sources include: Axios
