Political adverts must disclose use of AI

In a statement, a Google spokesperson said the rules were created in response to “the proliferation of tools that create synthetic content”.
The change is scheduled for November, a year before the next US presidential election. There are fears that AI could amplify disinformation around the campaigns.
Google’s existing ad policies already prohibit manipulating digital media to deceive or mislead people about politics, social issues, or matters of public concern.
This update will require election-related ads to “prominently disclose” if they depict real or realistic-looking people or events through “synthetic content.”
Labels such as “this image does not depict real events” or “this video content was generated synthetically” will help flag such material.
According to Google’s ad policy, false claims that undermine trust in the election process are also prohibited.
Google makes information about political ads, including who paid for them, available in an online ads library.
Election ads must disclose digitally altered content in a clear and conspicuous manner.
This would include synthetic imagery or audio showing a person saying or doing something they did not say or do, or depicting an event that never took place.
In March, a fake image of former US President Donald Trump, created with AI tools, was shared on social media.
In the same month, a deepfake video circulated showing Ukrainian President Volodymyr Zelensky appearing to talk about surrendering to Russia.