YouTube now allows you to request the removal of content generated by AI that mimics your face or voice

Meta isn’t the only company grappling with the growing volume of AI-generated content on its platform. YouTube also quietly made a policy change in June that allows people to request the removal of AI-generated or other synthetic content that mimics their face or voice. Such requests are handled under YouTube’s privacy request process, and the change expands on the responsible AI agenda the company first announced in November.

Rather than asking that content be taken down for being misleading, like a deepfake, YouTube wants affected parties to request removal directly as a privacy violation. According to YouTube’s recently updated help documentation, the company requires first-party claims, with a few exceptions, such as when the affected person is a minor, does not have access to a computer, or has died.

However, submitting a removal request does not guarantee that the content will be taken down. YouTube warns that it will evaluate each complaint based on a variety of factors.

For example, it may consider whether the content is synthetic or AI-generated, whether it uniquely identifies an individual, and whether it could be considered parody, satire, or otherwise valuable and in the public interest. The company also noted that it may consider whether the AI content features a public figure or other well-known individual, and whether it shows them engaging in “sensitive behavior” such as criminal activity, violence, or endorsing a product or political candidate. The latter is particularly worrisome in an election year, when AI-generated endorsements could potentially sway votes.

YouTube says it will give the uploader 48 hours to act on the complaint. If the content is removed within that window, the complaint is closed; otherwise, YouTube begins a review. The company also warns that removal means taking the video off the site entirely and, if applicable, removing the person’s name and personal information from the video’s title, description, and tags. Uploaders can also blur the faces of people appearing in their videos, but they cannot simply make a video private to satisfy a removal request, since a private video can be set back to public at any time.

The company did not widely advertise the policy change, though in March it introduced a tool in Creator Studio that lets creators disclose when realistic-looking content was made with altered or synthetic media, including generative AI. It also recently began testing a feature that lets viewers add crowdsourced notes providing additional context on a video, such as whether it is intended as parody or is misleading in some way.

YouTube is not against the use of AI, having already experimented with generative AI itself, including comment summaries and conversational tools for asking questions about a video or getting recommendations. However, the company has previously warned that simply labeling AI content will not protect it from removal, as it must still comply with YouTube’s Community Guidelines.

In the case of privacy complaints about AI content, YouTube will not be quick to penalize the original creator.

As a company representative explained last month on the YouTube Community site, where the company updates creators directly about new policies and features: “For creators, if you receive a notice of a privacy complaint, keep in mind that privacy violations are different from Community Guidelines violations and receiving a privacy complaint will not automatically result in a violation.”


