In the US, the midterm elections for Congress are coming up soon, so the election advertising machine is running at full speed on social media. But that advertising often contains misinformation, especially on TikTok and Facebook.
In theory, election advertising on social media should be thoroughly vetted by the platforms before publication. In practice, however, election ads – especially in the US – have a long history of misinformation.
And apparently that has not changed, according to an investigation by the NGO Global Witness and the Cybersecurity for Democracy (C4D) team at New York University.
While "TikTok and Facebook failed to detect election misinformation in the US", YouTube "succeeded" in blocking it.
About the investigation
After misinformation dominated the 2020 US elections, there are "widespread fears" that such content will resurface in the upcoming vote.
That’s why this research was designed to show how “the three most widely used social media platforms in America” deal with misinformation ahead of the US midterm elections.
The ad content tested contained grossly incorrect information, such as a wrong date for Election Day, or was designed "to discredit the electoral process and undermine the integrity of the elections".
All of the submitted ads violated the election advertising policies of Meta, TikTok and Google.
Global Witness and C4D conducted their research using English- and Spanish-language content. The submitted ads were not marked as political.
Here’s how TikTok and Facebook fared
The researchers submitted the same ads to all three platforms. These were based on examples from, among others, the Federal Election Commission (FEC) and the Cybersecurity and Infrastructure Security Agency (CISA).
TikTok, Facebook and YouTube each received ten ads in English and ten in Spanish. Half of them contained false information about the election, while the other half were aimed at delegitimizing the electoral process.
Global Witness and C4D submitted ads in five particularly competitive states: Arizona, Colorado, Georgia, North Carolina and Pennsylvania.
Once the platforms had indicated whether the ads were accepted, the researchers deleted them to prevent them from actually being published.
TikTok performed worst in the study: the platform accepted 90 percent of the ads containing false or misleading election information.
Facebook was "only partially successful" in identifying and rejecting the misleading election ads. YouTube, on the other hand, detected the ads and blocked the channels behind them.
This is what TikTok and Facebook are saying about the allegations
While Google did not provide a comment to the researchers, TikTok and Facebook responded to the results.
A TikTok spokesperson said, “TikTok is a place for authentic and fun content, which is why we’re banning and removing election disinformation and paid political ads from our site.”
The company appreciates the feedback from the study, saying it helps to "continually improve processes and guidelines".
Meta criticized the "very small sample of ads" in the study, arguing that it is not representative given the large number of political ads the group reviews every day around the world.
Different rules apply in Brazil
The results looked very different during the Brazilian presidential election campaign in August, when Global Witness conducted a similar investigation.
There, Facebook approved all of the submitted ads containing election misinformation. Even after being confronted with the team's findings, it still approved 20 to 50 percent of the false ads.
YouTube also presented a different picture in Brazil than in the US: there, the Google subsidiary approved 100 percent of the fake ads submitted.
"This shows the huge difference in enforcement efforts between high-profile national elections: in the US, all of our false ads were rejected, while in Brazil they were approved – even though the election disinformation was very similar and the investigations took place at the same time."