Meta has exempted some major advertisers from its usual content moderation process, protecting its multibillion-dollar business amid internal concerns that the company's systems are wrongly penalizing big brands.
According to internal documents from 2023 seen by the Financial Times, the owner of Facebook and Instagram introduced a series of “guardrails” that “protect high spenders.”
The previously unreported memos said the guardrails would “prevent detections” based on how much an advertiser spends on the platform, and that some large advertisers would instead be reviewed by humans.
One document noted that a group called “P95 spenders” — those who spend more than $1,500 a day — are “exempt from advertising restrictions” but still “will eventually be sent to manual human review.”
The memos predate this week's announcement by CEO Mark Zuckerberg that Meta is ending its third-party fact-checking program and loosening automated content moderation as it prepares for Donald Trump's return as president.
The 2023 documents show that Meta found its automated systems had incorrectly flagged some higher-spending accounts for violating company rules.
The company told the Financial Times that high-spending accounts were disproportionately vulnerable to being incorrectly flagged for potential violations. It did not respond to questions about whether the measures described in the documents were temporary or ongoing.
Ryan Daniels, a spokesman for Meta, said the FT's reporting is “simply inaccurate” and “is based on a cherry-picked reading of documents that clearly state that this effort was intended to address something we have been very public about: preventing mistakes in enforcement.”
Advertising makes up the majority of Meta's annual revenue, which reached about $135 billion in 2023.
The tech giant typically screens ads using a combination of artificial intelligence and human moderators to stop violations of its standards, in an attempt to remove material such as scams or harmful content.
In a document titled “Preventing High Spending Mistakes,” Meta said it has seven guardrails for business accounts that generate more than $1,200 in revenue over a 56-day period, as well as for individual users who spend more than $960 on ads over the same period.
The guardrails help the company “decide whether a detection should move to enforcement” and are designed to “suppress detections … based on characteristics, such as the level of advertising spending,” the document said.
It gave as an example a company that is “in the top 5 percent of revenue.”
Meta told the Financial Times that it uses “higher spend” as a guardrail because such advertisers' ads typically have greater reach, so the consequences can be more serious if a company or its ads are removed in error.
The company also acknowledged that, when it was concerned about the accuracy of its automated systems, it prevented those systems from disabling some high-spending accounts and sent the accounts for human review instead.
However, it said all companies were still subject to the same advertising standards and no advertisers were exempt from its rules.
In its “Preventing High Spending Mistakes” memo, the company classified different categories of guardrails as “low,” “medium,” or “high” in terms of whether they are “defensible.”
Meta staff described the practice of having spending-related guardrails as having “low defensibility.”
Other guardrails, such as using knowledge of a company's credibility to help decide whether to act automatically on a detected policy violation, were rated as having “high” defensibility.
Meta said the term “defensible” refers to how difficult the guardrails would be to explain to stakeholders if misinterpreted.
The 2023 documents don't name the high spenders who fell within the company's guardrails, but the spending limits suggest thousands of advertisers may have been deemed exempt from the typical moderation process.
Market intelligence firm Sensor Tower estimates that the biggest US spenders on Facebook and Instagram include Amazon, Procter & Gamble, Temu, Shein, Walmart, NBCUniversal and Google.
Meta has generated record revenues over recent quarters and its shares are trading at all-time highs, following the company's recovery from the post-pandemic slump in the global advertising market.
But Zuckerberg has warned of threats to its business, from the rise of artificial intelligence to its ByteDance-owned rival TikTok, which has grown in popularity among younger users.
A person familiar with the documents said the company “prioritizes revenue and profits over user safety and health,” adding that concerns had been raised internally about circumventing the standard moderation process.
Zuckerberg said on Tuesday that the complexity of Meta's content moderation system had led to “too many mistakes and too much censorship.”
His comments came after Trump accused Meta last year of censoring conservative speech and suggested that if the company interfered in the 2024 election, Zuckerberg would “spend the rest of his life in prison.”
Internal documents also show that Meta considered pursuing other exemptions for some higher-spending advertisers.
In one memo, Meta insiders suggested “offering more robust protection” from over-moderation to what they called “platinum and gold spenders,” who together bring in more than half of the company's advertising revenue.
“Enforcing false positives against high-value advertisers costs Meta revenue (and) erodes our credibility,” the memo said.
It proposed the option of broadly exempting these advertisers from certain enforcement actions, except in “very rare cases.”
The memo shows that staff concluded that platinum and gold advertisers were “not an appropriate segment” for a broad exemption, because an estimated 73 percent of enforcement actions against them were justified, according to the company's tests.
Internal documents also show that Meta has uncovered several AI-generated accounts within categories of high spenders.
Meta has previously come under scrutiny for granting exemptions to VIP users. In 2021, Facebook whistleblower Frances Haugen leaked documents showing that the company had an internal system called “cross-check,” designed to review content from politicians, celebrities and journalists to ensure posts were not removed by mistake.
According to Haugen's documents, this was sometimes used to protect certain users from enforcement even when they violated Facebook's rules, a practice known as “whitelisting.”
Meta's Oversight Board — an independent, “Supreme Court”-style body funded by the company to oversee its most difficult moderation decisions — found that the cross-check system had left dangerous content online. It called for reforms of the system, which Meta has since made.