(Bloomberg) — Meta Platforms Inc. has removed or marked as disturbing more than 795,000 pieces of content for violating its policies in Arabic and Hebrew after it was rebuked by the European Union for not doing enough to tackle disinformation on its platforms.
The company said it’s working with fact checkers who speak Hebrew and Arabic, blocking certain hashtags and taking other measures after European Commissioner Thierry Breton warned Chief Executive Officer Mark Zuckerberg and other social media leaders that their platforms were responsible for a surge of illegal content regarding the Israel-Hamas war.
In the days since Hamas's violent Oct. 7 attack on Israel, Meta has removed seven times as much content for violating its policies in Hebrew and Arabic as it did in the two previous months, the company said in a statement on Friday.
Meta said Hamas, which is designated as a terrorist organization by the US, EU and UK, is banned from its platforms and any praise or substantive support for the militant group will be removed. The company will, however, continue to allow social and political discourse, such as news reporting, human rights-related issues, or academic, neutral and condemning discussions.
“Expert teams from across our company have been working around the clock to monitor our platforms, while protecting people’s ability to use our apps to shed light on important developments happening on the ground,” Meta said in the statement.
Unverified photos and videos depicting violence have proliferated across social media. Users on X, the platform formerly known as Twitter, have passed off video-game footage as eyewitness accounts, and the platform's hiding of headlines on news links has made it difficult for users to follow breaking news.
Read more: Israel-Hamas Conflict Was a Test for Musk’s X, and It Failed
The war is one of the first major tests of the EU's Digital Services Act, which entered into force earlier this year and requires companies to hire more content moderators and use mitigation measures to reduce the spread of misinformation. Companies that don't comply face fines of as much as 6% of annual revenue or an outright ban.
Meta said it's also lowering the threshold at which its automated systems step in to avoid recommending potentially violating content, and it is reducing the visibility of potentially offensive comments under posts on Facebook and Instagram.
The new policies include restrictions on the use of Facebook and Instagram Live for people who have previously violated policies. The company said it takes seriously any threat to broadcast hostages taken by Hamas and will remove any such content that is published, along with the accounts behind it.