From DC to Silicon Valley: the DMV's protests against Big Tech’s role in genocide

Shaheen Khurana is a member of Metro DC DSA and a public-interest technologist.


ON A COLD WASHINGTON, DC MORNING IN NOVEMBER 2025, the polished sidewalks outside the offices of Meta and Google looked more like the stage for a political drama than a corporate campus. Hazami Barmada, a longtime DC human rights organizer, stood with a coalition of community members from Maryland, Virginia, and across the District. They weren’t just holding signs — they were lying motionless in a “die-in,” their bodies a silent accusation against the glass towers above. 

Anti-genocide activists stage a "die-in" outside of Meta office in Washington DC.

The group has been leading demonstrations across DC since October 2023. Their charge is stark: they accuse Meta and Google of being complicit in genocide through the systematic censorship of Palestinian voices and the amplification of pro-Israeli narratives. Their protests are not rallies. Instead, they are acts of political street theater designed for the DC landscape: interrupting lobbying events, staining sidewalks near the United States State Department with fake blood, and staging graphic demonstrations outside the Israeli and United Kingdom Embassies. 

“Our marches are important, but we must engage and educate in ways that disrupt the everyday,” Barmada explained, gesturing to the diverse array of neighbors, students, and workers from across the DMV who make up her group. This disruption seeks to pierce the public consciousness and confront the policymakers, corporate employees, and diplomats who walk these streets with a reality that Big Tech companies hide. 

Protesters calling out Meta and Google employees for silencing Palestinian voices in Washington DC.

A growing body of evidence has revealed a profound and systemic bias embedded within the world’s most widespread information platforms. Human rights organizations, journalistic investigations, and internal company documents show that content related to Palestine and Palestinians has been disproportionately restricted or removed on major social media platforms. While this crackdown predates October 7, 2023, the period following Hamas’s attack saw it reach an industrial scale, resulting in a near-total blackout of Palestinian perspectives from the mainstream digital sphere.

Meta (Facebook and Instagram): Documented Takedowns and Suppression

Massive Increase in Content Removal: A Human Rights Watch (HRW) report titled “Meta: Systemic Censorship of Palestine Content” documented a massive spike in wrongful removals and suppression on Meta platforms in the first month after October 7. HRW reviewed 1,050 cases of censorship across Instagram and Facebook and found that 1,049 involved peaceful content in support of Palestine that was removed or suppressed. This included posts about casualties, humanitarian conditions, and political speech.

"Shadow banning" and Reach Limitation: A report by 7amleh, the Arab Center for the Advancement of Social Media titled “Erased and Suppressed: Palestinian Testimonies of Meta's Censorship” highlighted the “devastating” impact of Meta’s content moderation policies on Palestinian digital expression. Based on testimonies from Palestinian journalists, influencers, and media professionals, the report details widespread shadow banning (reducing a user's visibility without notification), account deletions, and restrictions on Facebook and Instagram, exacerbated since October 2023. The findings revealed a significant decline in audience engagement for Palestinian accounts. Although Meta acknowledged, on October 18, 2023, that a "bug" had restricted the reach of reshared Instagram Stories, Human Rights Watch documented persistent shadow banning reports from Palestinian users well after that date.

Hashtag Blocking and Translation Errors: Instagram automatically added the word "terrorist" to the bios of some Palestinian users whose profiles were written in English and Arabic, a bug for which Meta later apologized. HRW documented that the Instagram hashtag #AlAqsaFlood (the Hamas military operation's name) was blocked, while the Israeli hashtag #IsraelUnderAttack was promoted. The Palestinian flag symbol, used frequently around the world to express solidarity with Palestine, has also been subject to censorship on Instagram and Facebook. HRW reported cases where Meta hid the Palestinian flag emoji from comment sections or removed it on the basis that it “harasses, targets, or shames others.” 

Deactivation of Key Tools: In late October 2023, Meta turned off a key, widely used transparency tool — CrowdTangle — for users in the Middle East. This tool allowed researchers, journalists, and watchdog groups to monitor how content — including posts, videos, and public interactions — spread across Facebook and Instagram in real time. Essentially, it was a vital window into the platform's inner workings, particularly for tracking misinformation, hate speech, and the virality of different narratives. This local deactivation foreshadowed Meta’s permanent shutdown of CrowdTangle in 2024, despite urgent pleas from dozens of research and civil society organizations.

Pro-Palestinian activists protesting at Meta offices in Washington DC.

For DMV activists sharing news with family abroad, posts vanished, hashtags like #Gaza were restricted, and the word “terrorist” was automatically appended to friends’ profiles.

In a November 2023 letter to Meta CEO Mark Zuckerberg, Sen. Elizabeth Warren (D-Mass.) called on the company to disclose details about content moderation practices that have “exacerbated violence and failed to combat hate speech,” citing reporting by The Intercept. The letter stressed that it was more important than ever that “social media platforms do not censor truthful and legitimate content, particularly as people around the world turn to online communities to share and find information about developments in the region.”

YouTube/Google: Takedowns, Demonetization, and Algorithmic Suppression

Aggressive Removal of Journalistic and Eyewitness Content: In November 2025, The Intercept reported that YouTube quietly erased more than 700 videos documenting Israeli human rights violations from three accounts belonging to prominent Palestinian human rights groups: Al-Haq, Al Mezan Center for Human Rights, and the Palestinian Centre for Human Rights. Human rights advocates warn that deleting this material not only silences Palestinian voices but also destroys crucial legal evidence that could be used in international court cases, including those before the International Criminal Court (ICC). Researchers fear that the absence of a reliable digital archive will severely disrupt ongoing documentation efforts and future accountability mechanisms. YouTube has repeatedly faced accusations of double standards — removing Palestinian content while leaving pro-Israeli propaganda untouched.

Mass Demonetization: WIRED outlined in November 2023 how Palestinian creators, journalists, and other media figures are locked out of Google’s online economy. YouTube’s creator revenue-sharing program and other Google services are shut off or hard to access for people in the Palestinian territories. This has had a severe financial impact on independent journalists, content creators, and media outlets. 

Human rights activists outside Google office in Washington DC calling out their complicity in Israel's genocide.

Pressure Campaigns and Automated Flagging: Pro-Israeli digital networks, Israeli ministries, cyber units, and coordinated volunteer networks reportedly mobilized online campaigns encouraging users to mass-report Palestinian content, exploiting automated moderation systems to remove images of Israeli human rights violations. “The platforms frequently comply with Israeli government takedown requests, often at very high rates, and rely heavily on automated systems rather than human review,” said Ahmad Qadi, head of monitoring and documentation at 7amleh. “This accelerates the large-scale removal of Palestinian content while limiting transparency and accountability.”

Activists inside the Google campus in Washington DC.

TikTok: From Amplification to Suppression

Like YouTube and Meta, TikTok has been criticized for "shadow banning" Palestinian content while allowing anti-Arab hate speech to proliferate. 

After October 7, 2023, TikTok gained global prominence for amplifying firsthand accounts and on-the-ground reporting of Israel's military actions in Gaza, including the killing of Palestinian civilians. TikTok was one of the few places you could see videos of Israel’s genocide and the grotesque famine taking place in Gaza.

Shortly after, US authorities moved aggressively to ban the platform, citing national security risks and concerns over Chinese influence on user data and content algorithms, while activists suggest a deeper agenda: curbing pro-Palestinian content on the platform. The crackdown ultimately forced a restructuring that placed majority ownership in American hands. Since the acquisition of TikTok in January 2026 by an investor group headlined by Oracle Corporation — a tech giant co-founded by billionaire Larry Ellison, a staunch supporter of President Donald Trump and Israeli Prime Minister Benjamin Netanyahu — users have expressed grave concerns about censorship of posts and the removal of damning videos about Israeli actions. 

As Israel's war on Gaza intensified in 2024, so did the silencing of Palestinian voices online in one of the most documented episodes of platform bias in history. The repression continues to this day: a Global Voices report warns of ongoing "digital erasure" targeting Palestinians.

"The platforms claim to be neutral spaces," said one of Barmada’s fellow activists at the November 2025 die-in. "But when their systems consistently silence one side, hide atrocities, and bankrupt those reporting them, they are taking a side. They are providing cover for a genocide."

The Web of Complicity: How Big Tech Powers Genocide

Beyond digital erasure, Big Tech provides the advanced technological backbone enabling what the United Nations has identified as genocide in Gaza: the deliberate and systematic destruction, in whole or in part, of a national, ethnic, racial, or religious group.

At the heart is "Project Nimbus," a $1.2 billion cloud computing and artificial intelligence contract shared by Google and Amazon with the Israeli government and its Ministry of Defense. Whistleblowers report that it fuels military AI like the targeting systems “Lavender,” "Where’s Daddy," and “The Gospel,” used to track Palestinians and target civilian infrastructure in Gaza — resulting in unlawful killings and mass civilian casualties that violate international humanitarian law and must be investigated as war crimes, according to Amnesty International. Google and Amazon’s own employees have warned in open letters that this technology allows surveillance of and unlawful data collection from Palestinians, and facilitates the expansion of illegal Israeli settlements.

Microsoft, meanwhile, provides "near limitless storage capacity" to Israel, chiefly through its Azure cloud computing platform — used across Israel’s air, ground, and naval forces as well as its intelligence directorate. This vast digital repository is being weaponized to collect, archive, and analyze the phone communications of millions of Palestinians in Gaza and the West Bank, enabling mass surveillance and data-driven targeting on an unprecedented scale. An Associated Press (AP) investigation revealed that the Israeli military's use of AI models from Microsoft and OpenAI skyrocketed nearly 200-fold following October 7, 2023, with the technology being used to sift through vast surveillance data to identify potential targets. The investigation also revealed details of how these AI systems select targets and the ways they can go wrong, including through faulty data or flawed algorithms. AP’s reporting linked AI-driven targeting to the wrongful killing of civilians, including a Lebanese family with children.

The complicity extends to the most personal of technologies. Google Photos’ facial recognition capabilities are reportedly being deployed to identify and target Palestinians. This has led to the surveillance, arrest, and killing of countless people in Gaza and the West Bank. Simultaneously, Google maintains a reported $45 million contract with the Israeli government, led by Netanyahu, to promote government messaging, including ads that have disputed reports of famine in Gaza. Critics say the campaign promotes Israeli propaganda on YouTube, designed to whitewash a man-made famine and the blockade of lifesaving aid into Gaza. 

In Gaza, Palantir’s data-integration platforms are reportedly instrumental for the Israeli military. By fusing surveillance feeds, drone data, biometric records, and intelligence sources into a single, AI-driven "battlefield operating system," Palantir enables the kind of large-scale, targeted operations and population control that UN Special Rapporteur Francesca Albanese’s critical report cites as evidence of complicity in genocide. The company’s tools don’t just observe — they optimize the logistics of siege and the identification of targets, making the genocide more efficient.

These technologies are NOT building a better future, as their companies' CEOs tend to claim. They are building a digital cage: a system to track, monitor, and dehumanize an entire population. The catastrophic consequences are written in the rubble of Gaza: children buried, infants starved, entire families erased from the civil registry and from the earth. 

Like Meta and Google, Palantir’s role in the ongoing Israeli genocide has made it a target for organizers. In Washington, DC, Barmada and her group staged a die-in outside of the Palantir offices in July 2025. The group had protested Palantir several times before, at their offices and at several tech conferences. Activists chanted and unfurled banners in the lobby of Palantir’s building; outside, activists lay on the pavement, their faces and bodies marked with fake blood. 

Anti-genocide protestors outside Palantir offices in Northern VA.

“Palantir security guards got extremely violent,” said Barmada. The guards attacked seven protestors, aggressively throwing a 67-year-old Palestinian woman against the concrete.

When several of these corporations’ own workers raised the alarm, the companies chose retaliation over conscience. Google fired 28 employees for peacefully protesting Project Nimbus as part of the “No Tech for Apartheid” campaign. Microsoft dismissed staff in Seattle for standing against the company’s war-related contracts. Their message was clear: the contracts are more valuable than the ethical conviction of their workforce.

Activist dumping "fake blood" in front of Palantir offices.

The Imperial Boomerang: From IDF to ICE

The deployment of these technologies exemplifies the "imperial boomerang" effect, where tools of repression, first tested and refined on occupied populations abroad, are brought home to target domestic communities. The very same AI-driven surveillance, facial recognition, and data-harvesting systems used by the Israel Defense Forces (IDF) to monitor, control, and harm Palestinians in Gaza and the West Bank are now being deployed by US Immigration and Customs Enforcement (ICE) to track, detain, and deport immigrants within the United States. Palantir is the prime contractor for ICE, providing the FALCON platform that powers its arrest and deportation machine. 

Originally funded by the CIA’s venture arm, Palantir has become a cornerstone of modern digital repression. The same technological core used in Palestine — this architecture of mass surveillance, predictive policing, and networked detention — has been aggressively deployed within the United States. It enables ICE to data-mine from countless public and private sources, surveilling and targeting immigrants and their communities (and their non-immigrant allies) with terrifying precision. An Amnesty International report from August 2025 also found that AI products from Palantir were used by the Department of Homeland Security to target non-citizens who speak out for Palestinian rights.

Demonstration in Washington DC where human rights activists make the connection between ICE and IDF.

Palantir is not the only culprit. Amazon’s facial recognition, marketed to police, and Microsoft’s Azure cloud, which hosts deportation databases, form part of the same ecosystem. Together, these firms are constructing a two-tiered digital state: one where powerful governments and agencies wield total information awareness while marginalized populations — whether Palestinians under bombardment or immigrant communities in the US — live under the constant threat of algorithmic targeting. By selling the tools of domination to any agency with a budget, these companies are not merely profiting from human suffering — they are actively wiring repression into the infrastructure of our societies, proving that the assault on one group’s humanity is ultimately an assault on the foundations of liberty for all.

OpenAI and Anthropic, two of the world's leading AI companies, have actively positioned their technologies for integration into US military operations, a devastating fulfillment of what critics have long warned: that "ethical AI" is merely a branding exercise masking complicity in state violence for profit. The AI-driven “kill chain” technologies deployed in Gaza and Lebanon, which enabled one of the most destructive bombing campaigns in modern history, have now expanded even further. Anthropic's for-profit Claude AI has been integrated with Palantir's Maven Smart System to assist in recent US strikes on Iran, transforming algorithms into weapons of death. This escalation persists despite overwhelming evidence that AI models are unreliable, prone to catastrophic error, and fundamentally opaque — a black box placed over the question of who lives and who dies. AI researchers increasingly warn that even the most advanced "frontier models" cannot operate within the laws of war, yet Silicon Valley's for-profit giants continue selling them to militaries anyway. The industry is not merely enabling violence; it is actively designing the infrastructure of future mass slaughter for profit.

This is how genocide abroad undermines democracy at home. The technologies of exclusion and eradication normalize a world where rights are conditional, privacy is extinct, and due process is overridden by predictive scores.

Reactions and Backlash: A Global Movement Demands Accountability

The growing public awareness of Big Tech's role in surveillance and warfare has ignited a formidable backlash, marked by grassroots mobilization, a rising tide of ethical alternatives, and a profound crisis of legitimacy for the industry's largest corporations. 

Digital Exodus and Ethical Alternatives: Users and creators are voting with their feet, leading to a significant digital exodus from platforms perceived as complicit. The most symbolic success story is the rapid rise of UpScrolled, a social media platform founded by Palestinians after TikTok was acquired by Ellison and company. In a direct rebuke to mainstream censorship, UpScrolled recently surged into the top 10 of the Apple App Store following its launch, powered by a simple, powerful promise: "no censorship, no surveillance." Its explosive growth demonstrates a clear public hunger for platforms built on principles of transparency and justice. 

UpScrolled is just one project incubated by a broader counter-movement, Tech for Palestine. Founded in January 2024 by Irish tech entrepreneur Paul Biggar, who was removed from his own company's board after publicly condemning industry leaders for "actively cheering on the genocide," the organization now supports over 70 initiatives reclaiming technology for human rights. 

This grassroots shift extends beyond social media to the core of AI itself. Projects like Thaura.ai are being developed explicitly as ethical AI engines, designed to provide analytical tools for human rights documentation and humanitarian response in Gaza. Created as a direct counter to the weaponization of AI by firms like Palantir and Google, Thaura.ai represents a burgeoning movement to reclaim technology for liberation, not subjugation. Boycotts like the QuitGPT movement have been spreading across the US and beyond, asking people to cancel their ChatGPT subscriptions and refuse to bankroll authoritarianism.

Worker Organizing: Within the industry itself, internal dissent has crystallized into organized resistance. The No Tech for Apartheid and No Azure For Apartheid campaigns, led by tech workers at Google, Amazon, and Microsoft, have staged walkouts and sit-ins challenging their employers' military contracts. “We refuse to build technology that powers genocide or surveillance,” stated one fired Google engineer. Their courageous whistleblowing has been instrumental in exposing the inner workings of Project Nimbus and Microsoft’s Azure contracts and has sparked a vital internal debate about the ethical responsibility of engineers.

Policy and Legal Challenges: The backlash is also materializing in the halls of power. Legal scholars and human rights organizations are building cases to challenge these contracts under international law, while coalitions of civil society groups are filing formal complaints with regulatory bodies and the United Nations, accusing these tech giants of violating human rights principles. In her report, UN Special Rapporteur Albanese named over 60 companies, including US firms Google, Amazon, IBM, Palantir, and Microsoft, that are profiteering from an economy of occupation and genocide.

The message from the public, from workers, and from a new generation of technologists is unequivocal: technology must serve humanity, not exploit it.

What Socialists Can Do: A Practical Call to Action

The sheer amount of money and power Big Tech corporations are willing to provide to facilitate a genocide can be demoralizing. But history offers a clear precedent and a path forward. In the 1980s, a global mobilization forced major corporations to divest from South Africa’s apartheid regime, cutting the economic and technological lifelines that sustained it. 

Today, we face the same moral test. The demand is simple and non-negotiable: Google, Amazon, Microsoft, and other tech firms must immediately terminate all contracts that provide technological support for Israel’s military occupation and siege of Gaza. There can be no neutrality when your technology is facilitating genocide. 

The fight against Big Tech’s complicity in genocide and domestic repression demands direct, collective action. Socialists in the DMV and worldwide can help build counter-power:

  • Back Tech Worker Organizing: Support the “No Tech for Apartheid” campaign. Amplify workers at Google, Amazon, and Microsoft who are risking their jobs to disrupt military contracts like Project Nimbus. Labor solidarity is our strongest leverage.
  • Migrate to Ethical Platforms: Defund surveillance capitalism. Use and promote ethical alternatives like UpScrolled. Collective digital migration starves Big Tech companies of the data and engagement they profit from and weaponize.
  • Hold Politicians Accountable: Elect policymakers who cannot be bought by tech companies and push for strong data privacy laws, algorithmic accountability, and firewalls preventing government agencies from accessing our data without consent. From local school boards to Congress, demand that candidates reject corporate PAC money.
  • Push for Local Divestment: Demand DC, Virginia, and Maryland legislation that bans public institutions from contracting or investing with companies involved in human rights abuses. Follow the Boycott, Divestment, Sanctions (BDS) model — use municipal power to cut ties with firms like Palantir.
  • Join the Streets: Algorithms can’t be picketed, but corporate headquarters can. Stand with DMV protesters outside tech offices, embassies, and lobbying firms. Physical disruption makes abstract digital injustice impossible to ignore.

This isn’t just about Palestine, or the next genocide afterwards. It is about who controls the technology that shapes our world. Organize, divest, legislate, and occupy. The time for solidarity in action is now. 

Meanwhile, the protests in Washington, DC, amplify the people's rising discontent. Led by Hazami Barmada and local activists, ongoing demonstrations in front of the White House and along Embassy Row are drawing direct lines between ICE tactics in the US and IDF operations in Gaza. Every die-in carries a message: Silicon Valley may control the global narrative, but DC residents still own the streets outside their local offices.

As Barmada said recently: "Advocacy for human rights is an ongoing fight. This demands action from every industry and every person of conscience. We cannot accept a world where children are robbed of safety, education, food, and freedom."
