SAN FRANCISCO ― YouTube said it will soon allow users to request the removal of AI-created impostors from the platform, and will require labels on videos featuring realistic-looking "synthetic" content.
New rules aimed at AI-generated video material will go into force in the coming months as fears mount over the technology being abused to promote scams and misinformation, or even to falsely depict people appearing in pornography.
“We’ll make it possible to request the removal of AI-generated or other synthetic or manipulated content that simulates an identifiable individual, including their face or voice,” YouTube product management vice presidents Emily Moxley and Jennifer Flannery O’Connor said in a blog post.
In evaluating removal requests, the Alphabet-owned site will consider whether videos are parodies and whether the real people depicted can be identified.
YouTube also plans to start requiring creators to disclose when realistic video content was made using AI so viewers can be informed with labels.
“This could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do,” Moxley and O’Connor said in the post.
“This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials.”
Video makers violating the disclosure rule may have content removed from YouTube or be suspended from its partner program that shares ad revenue, according to the platform.
"We're also introducing the ability for our music partners to request the removal of AI-generated music content that mimics an artist's unique singing or rapping voice," Moxley and O'Connor added, according to AFP.