Google plans to give its advertising clients more control over where their ads appear on YouTube and the Google Display Network, which posts advertising to third-party websites.
It announced the move in a blog post from its European business after major brands pulled ads from the platform because they appeared against offensive content, such as videos promoting terrorism or anti-Semitism.
The U.K. government, the Guardian newspaper and France’s Havas (the world’s sixth-largest advertising and marketing company) pulled ads from Google and YouTube on Friday after failing to get assurances from Google that the ads wouldn’t appear next to offensive material. Havas’ clients include mobile network O2, Royal Mail Plc, the BBC, Domino’s Pizza and Hyundai Kia.
The action does not, so far, affect any clients outside the UK and has been called “a temporary move” by Havas.
The moves follow a Sunday Times investigation that revealed ads from many large companies were appearing alongside content from extremists such as white nationalist David Duke and similar sources.
There’s a growing backlash against automated, programmatic advertising, which seemingly cannot stop mainstream brands from appearing against extremist and offensive content. The main culprit is AdX, Google’s DoubleClick Ad Exchange service, which uses programmatic trading.
Martin Sorrell, the founder and chief executive officer of WPP, the global advertising firm, said in a statement that Google and Facebook have “the same responsibilities as any media company” and can’t “masquerade” as simple technology platforms. Google, with YouTube and its DoubleClick ad service, together with Facebook accounts for close to 85% of digital ad spend in the UK.
He confirmed WPP’s GroupM, which buys advertising, is in talks with Google “at the highest levels to encourage them to find answers to these brand safety issues.”
Ronan Harris, Google’s UK managing director, said in the blog post that Google removed nearly 2 billion offensive ads from its platforms last year and also blacklisted 100,000 publishers from the company’s AdSense program, but admitted “we don’t always get it right.”
Ads for the Guardian’s membership scheme have appeared alongside a range of extremist material after an agency acting on the media group’s behalf used Google’s AdX ad exchange. David Pemsel, the Guardian’s chief executive, wrote to Google to say that it was “completely unacceptable” for its advertising to be misused in this way.
As specialist marketing site Marketing Land recently pointed out, Google has been addressing fake publishers that impersonate well-known news outlets or make up clickbait headlines — but it has not been looking at misinformation, hoaxes and conspiracy theories.
Last fall, Google updated its AdSense “Misrepresentative content” policy to address the problem of “fake news”. It said it had taken 200 sites permanently off its network and blacklisted 340 sites for violations including misrepresentation. But there are 2 million AdSense publishers, and many indulge in clickbait headlines simply because users are, well, clicking on them. Google therefore profits from ads served on thousands of sites that promote propaganda, conspiracy theories, hoaxes and basic lies.
When that announcement was made, it was widely assumed Google would stop allowing ads to be served against misinformation, since it stated that sites “deceptively presenting fake news articles as real” would be in violation. But Google quietly removed its reference to “fake news” at some point between December and January.
But Marketing Land confirmed with Google that the policy was not intended to address fake news because it doesn’t look at whether an individual article is true or not; it looks at whether the publisher is misrepresenting itself.
This means the sites built by Macedonian teenagers to capitalise on crazy stories associated with Trump, employing AdX adverts, would be in violation, because they were concealing who they really were. But the “Pizzagate” stories about Hillary Clinton, which could well have affected the outcome of the US election, wouldn’t be flagged, even though they were made up.
Google’s advertising policy is designed to address publishers, not the content itself, which is why so many extremist websites — ones that are quite open and public about who they are, and therefore not misrepresenting themselves as publishers — are profiting from fake news.