Google is tightening its rules on AI-generated content in political ads ahead of next year’s presidential election in the US.
Political ads will be required to disclose when they feature “synthetic” content, such as AI-generated visuals or sounds, per an update to the company’s political content policy.
According to the policy, which will roll out in November, advertising that “inauthentically depicts real or realistic-looking people or events” will need to disclose that it’s been synthetically altered in a “clear and conspicuous” fashion “where it is likely to be noticed” by viewers. The update applies to audio, image, and video advertising across Google’s platforms, ad display network, and YouTube.
Synthetic content that has been edited in ways “inconsequential to the claims made in the ad,” such as image cropping or color correction, will be exempt from the disclosures.
Some AI-generated content has already appeared in political ads ahead of next year’s presidential election. Florida Gov. Ron DeSantis’s campaign posted a video on X that appeared to include AI-generated deepfakes earlier this year, while the Republican National Committee released an ad that was “built entirely with AI imagery,” per an on-screen disclaimer.
The update to Google’s policy comes after the Federal Election Commission began exploring regulation of AI-generated deepfakes in political ads. In July, Google was one of a handful of tech companies—including Amazon, Microsoft, and OpenAI—to agree to AI safeguards proposed by the Biden administration.