Rule-breaking posts by prominent users of the platforms are deleted less often. The oversight committee is calling for this preferential treatment of high-profile users to end.
An independent oversight committee has asked the online networks Facebook and Instagram to stop favoring high-profile users. In a report published on Tuesday, the committee, set up by parent company Meta, concluded that otherwise Facebook and Instagram would fail to live up to their stated commitment to respect human rights.
Meta applies additional screening to posts by high-profile users on its platforms when those posts appear, at first glance, to break the rules on misinformation or hate speech. While posts from ordinary users are deleted immediately in cases of doubt, posts by these so-called superusers receive an additional level of human review, according to Meta.
Meta's responsibility for human rights
The oversight committee said that this system was actually intended to help Meta meet its human rights obligations. "But the program appears to be more geared towards meeting business needs," it found, because the users who benefit from this "additional protection" are "selected largely according to commercial interests."
At the same time, the arrangement means that "content that could have been removed quickly remains available for a longer period of time and may cause harm," the report continues. Meta is thus failing to live up to its human rights responsibilities. Nor has Meta demonstrated that the additional review leads to more accurate decisions about removing content.
The committee demanded that content initially classified as violating the rules in an "extremely dangerous" way "should be removed or hidden while we conduct further review." "Such content should not be allowed to remain on the platform and gather views just because the person who posted it is a colleague or celebrity."