WASHINGTON ― Facebook and Instagram will require political ads running on their platforms to disclose if they were created using artificial intelligence, their parent company announced.
Under Meta’s new policy, labels acknowledging the use of AI will appear on users’ screens when they click on ads. The rule takes effect Jan. 1 and will apply worldwide.
Microsoft recently unveiled its own election year initiatives, including a tool that will allow campaigns to insert a digital watermark into their ads, AP reported.
These watermarks are intended to help voters understand who created the ads, while also ensuring the ads can’t be digitally altered by others without leaving evidence.
The development of new AI programs has made it easier than ever to quickly generate lifelike audio, images and video.
In the wrong hands, the technology could be used to create fake videos of a candidate or frightening images of election fraud or polling place violence.
Amplified by the powerful algorithms of social media, these fakes could mislead and confuse voters on an unprecedented scale.
Meta Platforms Inc. and other tech companies have been criticized for not doing more to address this risk.
The announcement by Meta – which comes on the day House lawmakers hold a hearing on deepfakes – isn’t likely to assuage those concerns.
While officials in Europe are working on comprehensive regulations for the use of AI, time is running out for lawmakers in the United States to pass regulations ahead of the 2024 election.