Tell Meta:
Stop Silencing Palestine

Press Release, 14 March 2024

We renew our call to Meta to stop its systemic censorship of Palestinian voices by overhauling its content moderation practices and policies that continue to restrict content about Palestine. Two years after our initial campaign, our demands remain unmet. Given the ongoing conflict, the urgency for Meta to address our—now updated—recommendations is greater than ever.

In 2021, during Israel’s forced evictions of Palestinian families in Jerusalem, we witnessed widespread censorship of Palestinians and their content across Meta’s—then Facebook—platforms. From incorrectly classified and flagged keywords and posts, to the removal of Instagram Stories reporting about the situation on the ground, the censorship was swift and systemic.

In response, a global coalition of organizations launched a campaign, backed by prominent signatories, demanding that Meta stop silencing Palestinians and voices advocating for Palestinian rights on its platforms.

Two years later, our demands remain unmet, despite being echoed by Meta’s own Oversight Board. Since Hamas’ attack on Israel on October 7, Meta’s biased moderation tools and practices, as well as policies on violence and incitement and on dangerous organizations and individuals (DOI)—the catalysts behind the censorship of Palestinian and other marginalized voices across the region—have led to Palestinian content and accounts being removed and banned at an unprecedented scale.

Over the past few weeks, we have documented countless instances of digital repression of Palestinian and Palestine-related content online, especially on Meta’s platforms. We’ve seen scores of journalists’ accounts suspended, content on Gaza incorrectly removed, comments and live-streaming restricted, and numerous reports of people being shadowbanned for posting about the crisis in Gaza and content about Palestine more generally. For example, on October 15, Meta permanently banned the Arabic and English Facebook pages of Quds News Network, the largest Palestinian news network with over 10 million followers, citing its discriminatory and opaque Dangerous Organizations and Individuals (DOI) policy as justification.

The silencing of Palestinian voices is particularly egregious in the context of the ongoing information blackout and the unprecedented killing of journalists in Gaza. At the time of writing, 63 journalists have been killed, in what amounts to the deadliest month for journalists since data collection began in 1992, according to the Committee to Protect Journalists. Access Now reported that, in October, Gaza’s internet traffic decreased by 80%, with 15 of the 19 internet service providers in Gaza experiencing complete shutdowns. Starting on November 16, all telecommunications services were down for 33 hours after fuel ran out, with service provider Paltel only announcing partial restoration once a limited quantity of fuel was provided through UNRWA. On December 4, Paltel announced that “all telecom services in Gaza Strip have been lost due to the cut off of main fiber routes.”

Meta’s actions reveal disturbing discrimination in its treatment of Palestinian users and content about Palestine, particularly when compared to its moderation of Hebrew-language content or its responses to other situations of military occupation. Instagram, for example, hid comments using the Palestinian flag emoji, labeling them as “potentially offensive.” It also auto-translated user bios that included “Palestinian” and an Arabic phrase meaning “praise be to God” to read “Palestinian terrorists are fighting for their freedom.” Meta also reduced the threshold of certainty required for ‘hiding’ hostile content originating in large parts of the Middle East from 80% to 25%, meaning its classifiers now suppress content they are far less certain is hostile. Meanwhile, hate speech, incitement to violence, dehumanization, and calls for genocide abound on the platform.

It is imperative that Meta consider the impact of its policies and content moderation practices on Palestinians and take serious action to mitigate the risk of contributing to the commission of serious crimes and human rights abuses in this conflict. The stakes are high: thirty-six UN human rights experts have warned that the “grave violations committed by Israel against Palestinians in the aftermath of 7 October, particularly in Gaza, point to a genocide in the making,” calling on the international community, including businesses, to “do everything it can to immediately end the risk of genocide against the Palestinian people.” Tech-facilitated harms, disinformation, hate speech, and censorship can further perpetuate cycles of violence and obstruct vital reporting on the ground.

In the face of unspeakable atrocities, Meta must recognize the gravity of the situation and stop silencing Palestine.

What We're Asking Meta

Our demands of Meta, listed in order of urgency:

  • Formal meeting with Meta executives: We request a formal meeting with senior Meta executives, including CEO Mark Zuckerberg, as soon as possible and no later than the end of the year, to discuss Meta’s censorship of Palestinian voices, its discriminatory content moderation policies and actions, and the disparity in its crisis response compared with other instances of armed conflict and military occupation.
  • Newsworthiness allowance: Following multiple wrongful takedowns by Meta of newsworthy content, such as the removal of posts on the bombing of Al Ahli Hospital for violating community guidelines on nudity and sexual activity, and of news coverage of Hamas’ release of Israeli hostages, we reiterate that Meta’s newsworthiness allowance should apply to content related to the conduct of hostilities by the conflict parties, and that heightened scrutiny should be applied when moderating content under the DOI and violent/graphic content policies to avoid excessive restrictions on protected speech. We also demand full transparency regarding the criteria and conditions governing Meta’s application of its newsworthiness allowance, under which content that breaches Meta’s community standards is allowed to remain if its public interest value outweighs the risk of harm. In January 2023, the Oversight Board asked Meta to explain its criteria for scaled newsworthiness allowances. In August 2023, Meta released an inadequate explanation of how it applies the allowance, one that did not address how the allowance would operate in a crisis such as the one ongoing in Palestine since October 7. Content relating to the current situation in Gaza and the West Bank meets a number of the criteria Meta says it uses when applying the allowance.
  • Overmoderation of Arabic-language content and content from users in the MENA region: Meta must immediately address the over-enforcement of its content moderation policies and eliminate bias against Arabic-language content and other content from users in the MENA region, especially as the plight of Palestinians in Gaza is obscured amid rising communication blackouts and censorship, Islamophobia, and anti-Palestinian sentiment. We call on Meta to provide a full outline of the content enforcement policies and strategies that pertain to Palestine, as well as an agreement to work with civil society to ascertain justified parameters for these enforcement measures.
  • Government request transparency: We again demand full transparency on both legal and voluntary requests made to Meta by the Israeli government, its Cyber Unit, and other governmental actors, including the EU institutions, EU Member States, and referral units. The data should include, at minimum, the number of requests received, type of content enforcement, and the platform’s compliance with such requests. Users should also be notified if their content was removed in response to a government request and should be able to appeal such a decision. In a recent case, the Oversight Board called on Meta to include government requests for content removal under the company’s Terms of Service within its transparency reporting. The Santa Clara Principles provide a set of standards on transparency and an implementation toolkit for companies. 
  • Preservation of content with evidentiary value: In the face of removal requests and automated content removal, we demand that Meta preserve content containing evidence of human rights violations on its platforms. In such a conflict, the role of private companies like Meta in preserving human rights documentation, including documentation of crimes that may amount to war crimes, is vital for future accountability efforts.
  • Retention of critical human rights content: In the face of removal requests and automated content removal, we urge Meta to publicly clarify its retention policy in situations of crisis and armed conflict. We seek clarity about the policy Meta is currently finalizing: in particular, which actors (beyond law enforcement) can send a request for data retention, the types of data that are retained, how Meta handles situations in which content has been algorithmically removed before anyone has seen it, and the specific criteria for retention. We also demand that Meta consider creating an expert program to consult local and international civil society and communities that can help identify the accounts most relevant for the purposes of international justice and accountability.
  • Access to critical human rights content: We urge Meta to consider research in situations of human rights crises and conflict as a matter of public interest. We ask Meta to provide access to the Content Library and API to civil society and academic researchers working in this domain, and to consult these stakeholders on future updates of this and similar products. This consultation should be specifically mindful of potential government misuse and overreach, as well as the security implications that access to this type of data may have for communities affected by violence or for those recording the footage.
  • Dangerous organizations policy: We reiterate our demand that Meta overhaul its opaque and vague DOI policy, and we demand transparency regarding any content guidelines or rules related to the classification and moderation of terrorist content under this policy.  Specifically, we call on Meta to:
    • Outline the steps Meta has taken to mitigate the negative impact of its policy enforcement on Palestinians’ rights to freedom of expression, assembly and association, non-discrimination, and access to remedy, as previously highlighted by its 2022 human rights due diligence report. 
    • Publish the full list of individuals and organizations designated under the DOI policy, as requested repeatedly by the Oversight Board, so that users can understand how the policies are applied to their content.
    • Develop and publish a clear policy on how Meta designates and de-lists individuals and organizations, and publicly disclose government requests for additions to or removals from the current DOI list.
    • Make clear exceptions to the existing policy when information or communication about a designated group or individual is in the public interest, as per the Oversight Board’s recommendations in the “Mention of the Taliban in news reporting” case.
  • Algorithmic transparency: We demand that Meta be transparent about where and how automation and machine-learning algorithms are used to moderate or translate Palestine-related content, including by sharing information on the classifiers it has built and deployed, and their error rates. We demand investigations into Meta’s most egregious automation errors, which have resulted in Palestinians being labeled as terrorists on Instagram and WhatsApp, and in the incorrect downranking, hiding, translation, and removal of content. We call for:
    • Transparency on specific automated tools and datasets used, and reasoning for the decisions they make;  
    • A commitment to work with civil society on researching and moving away from over-reliance on natural language processing (NLP) and Large Language Models (LLMs). Given the complexity of the language and cultural context, as well as the inherent risks of bias, it is irresponsible to rely on just one language model.
    • Transparency on the training provided to human moderation teams, including lists of the content classifiers used for flagging. Automated moderation systems are only as effective as the humans behind the technologies and processes.
  • Shadowbanning: We demand an independent audit of Meta’s content curation actions and its ranking and recommender systems, including whether Meta has implemented any “break-the-glass” (emergency) measures during this crisis that have reduced the reach, engagement, and visibility of posts on Facebook and Instagram.
  • Human rights risks: We demand that Meta conduct heightened human rights due diligence on the impact of its content moderation and curation actions, as well as its products and services more broadly, during this crisis. We also urge Meta to outline the steps it has taken to mitigate the risk of contributing to gross human rights violations, including the risk of genocide, or of being complicit in exacerbating existing tensions or conflicts, noting the role Meta played in contributing to atrocities such as those committed against the Rohingya in Myanmar, as well as the fact that Meta’s human rights impact assessments (HRIAs) have often retrospectively revealed preventable content moderation failures.

“This is not a new phenomenon, it’s part of a systematic effort that contributes to silencing Palestinian voices. We’ve been experiencing different forms of censorship—between account suspensions, content takedowns, and shadowbanning—on different social media platforms for years now and we’ve been trying to tell social media platforms to invest more in protecting people—not only Palestinians, but everyone on their platforms, and Palestinians are no different.” – Mona Shtaya, Campaigns and Partnerships Manager (MENA) and Corporate Engagement Lead, Digital Action

“Time and time again, Meta shows us how little it cares about human rights. In the face of unprecedented violence, Palestinian voices are being silenced and dehumanized again on its platforms. However, the stakes are much higher this round. The systematic censorship cannot continue business as usual. Meta must grasp the severity of the situation and immediately correct course before it’s too late.” – Marwa Fatafta, MENA Policy and Advocacy Director, Access Now

2021 Campaign Results

Our clear demands in 2021 were as follows:

  1. Public audit: A full, independent, public audit of content moderation policies with respect to Palestine and a commitment to co-design policies and tools that address deficiencies or overreach of content moderation found during the audit. Furthermore, rules should be based on existing human rights frameworks and must be applied consistently across jurisdictions.

Reflections on this demand in 2023: The BSR audit was a welcome development in answer to our original demands. However, we have yet to see its full implementation in Meta’s policies and practices. We are now demanding a public external audit of Meta’s implementation of the BSR report. We also call on whistleblowers at Meta to come forward securely and anonymously with any evidence of discriminatory practices and policies related to Palestinian content.

  2. Government request transparency: Complete transparency on requests, both legal and voluntary, submitted by the Israeli government and Cyber Unit, including number of requests, type of content enforcement, and data regarding compliance with such requests. Users should also be able to appeal content decisions.

Reflections on this demand in 2023: We have seen very little movement on this front. In 2022, the Oversight Board reiterated the need for this demand to be met. 

  3. Automation transparency: Transparency with respect to where automation and machine learning algorithms are used to moderate content related to Palestine, including error rates, as well as the classifiers used.

Reflections on this demand in 2023: We continue to call for transparency on this point. As detailed earlier, since October 7 our coalition has documented large numbers of reports showing how content on Meta platforms has been “incorrectly removed,” “incorrectly hidden,” “incorrectly downgraded,” or “incorrectly translated.” Instagram, for example, auto-translated user bios that included “Palestinian” and an Arabic phrase meaning “praise be to God” to read “Palestinian terrorists are fighting for their freedom.” To understand where automation and machine-learning tools are used for moderation and translation, we call for:

  • Transparency on specific automated tools and datasets used and reasoning for their decision-making;  
  • A commitment to work with civil society on researching and moving away from over-reliance on natural language processing (NLP) and Large Language Models (LLMs). Given the complexity of the language and cultural context, as well as the inherent risks of bias, it is irresponsible to rely on just one language model.
  • Transparency on the training provided to human moderation teams, including lists of the content classifiers used for flagging. Automated moderation systems are only as effective as the humans behind the technologies and processes.

  4. Dangerous organizations: Transparency regarding any content guidelines or rules related to the classification and moderation of terrorism and extremism. Companies should, at a minimum, publish any internal lists of groups classified as “terrorist” or “extremist.” Users cannot adhere to rules that are not made explicit.

Reflections on this demand in 2023: We continue to call for transparency regarding any content guidelines or rules related to the classification and moderation of terrorism and extremism, especially with regard to the Dangerous Organizations and Individuals (DOI) policy. We call for:

  • The DOI list to be made public, to allow users to understand and adhere to the policies affecting them.
  • A clear policy for publicly disclosing government requests for additions to or removals from the existing DOI list.
  • A defined policy stating that when an entity on the DOI list is recognized as a state actor for the purposes of that list, exceptions could apply to content related to that entity.

  5. Commitment to co-design: Commitment to a co-design process with civil society to improve policies and processes involving Palestinian content.

Reflections on this demand in 2023: We continue to ask for a commitment to a co-design process with civil society to improve policies and processes involving Palestinian content. To be meaningful, co-design efforts must move away from extractive methods and models, and start by setting modes of engagement led and agreed upon by civil society and impacted communities. This will require Meta to provide action plans and timelines for moving commitments and co-designed efforts forward. Without such agreements on modes of engagement, the coalition reserves the right to refuse further engagement with Meta on policies and processes.