Shaheen Khurana is a member of Metro DC DSA and a technologist focused on centering people, workers, and communities.

THE RAPID EMERGENCE OF ARTIFICIAL INTELLIGENCE in nearly every aspect of everyday life presents socialists with a new front in the fight against capitalist exploitation. Fortunately, a recently published book, Empire of AI, provides a stellar starting point for the road ahead. A stunning critique of OpenAI and CEO Sam Altman’s scale-at-all-costs doctrine, Empire of AI contends that AI’s current trajectory is driven not by collective need but by the incentive to scale recklessly, regardless of the social, ethical, and environmental costs. The book makes it brutally clear that Silicon Valley’s AI power players are consolidating extraordinary political and economic power, laying waste to the environment, exploiting labor, appropriating data and intellectual property, and systematically undermining democracy. In short, it asserts, AI represents a new form of colonialism — one that must be confronted if democracy itself is to survive.
The book is a deep and rigorous piece of investigative reporting by Karen Hao, a journalist focused on AI research and social impact. Hao has written for MIT Technology Review, The Atlantic, and The Wall Street Journal. Her book is based on over 300 interviews with insiders from OpenAI, Microsoft, Anthropic, Meta, Google, DeepMind, and Scale, supplemented by a trove of internal documents. She spent a significant amount of time on the ground, embedding with communities worldwide, to understand their histories, lives, and experiences grappling with the visceral impacts of AI. Notably, Hao’s research was conducted without the cooperation of OpenAI or Sam Altman.
At over 400 pages, the book is a tome of meticulous research and brilliant storytelling. Capturing its full scope is a challenge, so this review focuses on some of the central, urgent themes that resonated with me.

Hao’s central thesis is stark: the AI industry constitutes a new form of empire, much like the colonial empires of the past. The hallmarks of those older empires, she maintains, are all present. AI power players lay claim to resources that are not their own: the creative work of artists and writers, and the personal data of billions of people who put their experiences and observations online, never imagining that their digital lives could be plundered without consent to train AI models. They exploit labor around the world, contracting workers for meager pay to annotate data and moderate content — cleaning, tabulating, and preparing the data to be spun into lucrative AI technologies. They seize and extract the resources — land, energy, water — required to house and run massive data centers and supercomputers. And they do it all under the guise of a “civilizing mission”: the idea that they are bringing benefit to all of humanity.
When Hao first began covering OpenAI in 2019, she thought they were the good guys. The company's founding mission was not for profit; it was to serve as a responsible check against corporate power and potentially dangerous forces of “rogue AI” — a superintelligence that could escape human control and become an existential threat, as described by Nick Bostrom. But very quickly, OpenAI’s executives decided that they wanted to lead in this space, not merely provide checks and balances, and created a “capped-profit” arm, OpenAI LP, to attract the vast capital and talent needed to compete in the AI race. This represents one of the most consequential pivots in modern tech history.
Hao meticulously charts the company’s evolution from that idealistic origin to what she believes it has now become: an imperial power structure. She describes a system that consolidates resources, centralizes talent, and systematically eliminates roadblocks, regulations, and dissent. This structure, Hao argues, has allowed Altman to pursue AI dominance — no matter the immense human, ethical, and environmental cost.

Hao does more than track the internal power struggles of OpenAI. Throughout the book, she exposes how artificial intelligence has become the ultimate engine of surveillance capitalism — a system where companies collect vast amounts of data about our behaviors, preferences, and interactions, which are then sold, packaged, and repurposed for profit. AI models are not trained merely on “public data” but on the unpaid intellectual labor and personal lives of billions of people — our blog posts, our conversations, our creative work, our most intimate searches. This enormous digital data trove, Hao implies, is then fenced off, privatized, and used to build products that further entrench corporate power. The AI empire’s primary fuel is you, me, us.
Hao documents how AI companies are turning to countries in the Global South for cheap labor, contracting workers who annotate data for training models, perform content moderation, or converse with the models — upvoting and downvoting answers to slowly “teach” them to give more helpful responses — all for meager wages.
She traveled to Kenya to speak with workers contracted by OpenAI to filter violent hate speech, self-harm, and sexual content from its models. These individuals were left traumatized, with lasting post-traumatic stress disorder that rippled through their families and communities. The human toll of this system, with its inadequate procedures to protect workers, is visceral. The consistency of workers’ experiences across Hao’s reporting makes clear that the labor exploitation underpinning the AI industry is systemic.
Hao masterfully explains that the hyperscaling of AI — the race to build ever-larger models and the data centers that power them — comes with staggering environmental costs that are often hidden from public view. Training and running large AI systems requires enormous amounts of electricity and vast quantities of potable water for cooling, turning AI into an extractive industry. Hao highlights how this burden is pushed onto vulnerable communities, pointing to examples in Chile and Uruguay, where data centers and cloud infrastructure have drawn heavily on scarce water resources. In regions already facing drought and water insecurity, AI companies compete with local communities and agriculture for access to water, prioritizing corporate compute over human needs. From a socialist perspective, this shows how hyperscaling AI extends extractive capitalism: private companies profit while environmental damage and resource scarcity are socialized, disproportionately harming working-class and Global South communities.
According to Hao’s reporting, OpenAI and Sam Altman are not just advancing a technology but are actively undermining the foundations of democracy, from local community decisions to the federal regulatory apparatus. She points to how Silicon Valley executives are expertly manipulating the United States’ AI regulatory discussions to steer them away from accountability and entrench their companies’ monopoly. Silicon Valley has played its tried-and-true “What about China?” card to great effect, consistently deploying it to ward off meaningful regulation. Their immense policy teams have so thoroughly dominated the narrative in Washington that many policymakers now accept this logic as gospel. Testifying before Congress at the 2023 hearings on AI regulation, Altman explicitly framed the discussion around geopolitical competition, warning that overly restrictive rules on AI development in the United States would “hand the future” to China.
In short, Hao argues, the tech industry is leveraging the US government for its own empire-building ambitions. This reveals a profound and dangerous alliance: the merger of Silicon Valley’s corporate-based empire with the US government’s state-based empire, each striving to use the other to fortify its own dominion.
This dynamic has a devastating human cost. In speaking with affected communities — from artists whose intellectual property was taken to Chilean activists whose fresh water was diverted — Hao found a common experience: a complete loss of agency over their own future. This shared disempowerment, this horizontal harm, is the very mechanism by which the current trajectory of AI threatens democracy.
A recurring theme in the book is Hao’s demystification of AI’s “magic.” She opens Empire of AI with a prescient quote from Joseph Weizenbaum, MIT professor and creator of the 1966 chatbot ELIZA: “machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer” — quite like how AI technology has dazzled us. “But,” he goes on to say, “once a particular program is unmasked, once its inner workings are explained, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible.”
Human psychology naturally leads us to associate intelligence, even consciousness, with anything that appears to speak with us, as with the first chatbot ELIZA and now ChatGPT. Hao argues that OpenAI has strategically fostered the public conflation of its chatbot with Artificial General Intelligence (AGI). In reality, these models are neither magical nor autonomous; they are built and refined with immense, often invisible, human labor and judgment. The appearance of intelligence is often a mirage of anthropomorphism and corporate hype that exaggerates the technology’s capabilities.
Hao contends that, at this stage, AGI is largely rhetorical — a fantastical, all-purpose justification for OpenAI and the tech industry’s relentless pursuit of scale, wealth, and power. As an antidote to this empire-building, she advocates for broad public education: a clear-eyed understanding of how AI works, its true limitations, and the motives of the companies that develop it.
Hao’s reporting presents a deeply concerning view of independent AI research and diversity of thought in the tech industry — or the lack thereof. The race for profit and dominance, crystallized by OpenAI’s dramatic pivot away from a not-for-profit model, has led to a narrowing, homogenization, and silencing of critical research perspectives. Expertise has consolidated within a handful of powerful corporations as a steady exodus of top researchers and PhD graduates moves from academia to industry, lured by astronomical compensation packages. This brain drain accelerates the erosion of a truly independent research ecosystem.
Hao’s proposed remedy is foundational: robust public and philanthropic funding must be directed to support independent research, conduct objective evaluations of corporate models, and explore alternative technological paths. Without such infrastructure, the threat of unchecked corporate control over AI’s development — and, importantly, over the public’s perception of its capabilities and risks — will only continue to grow, entrenching the empire rather than dissolving it.
Empire of AI is a crucial, galvanizing work. It traces how decisions made inside a small number of companies are steadily hardening into systems that shape everyday life, often beyond public scrutiny. However, Hao reminds us, there is nothing inevitable about this path. In her closing chapter, “How the Empire Falls,” she lays out powerful grassroots examples — such as the use of AI to revitalize te reo Māori, the language of the Māori people of New Zealand — that model how AI can be community driven, consensual, and respectful of local context and history. Its application can uplift and strengthen marginalized communities; its governance can be inclusive and democratic. Ultimately, she asks, how do we govern this technology to shift power back to the people?
Hao’s formula for deconstructing empire requires the redistribution of power and touches on three axes: knowledge, resources, and influence.
Broad-based education can serve as an antidote to the mysticism and mirage of AI hype. This means teaching people about how AI works, its strengths and shortcomings, and the worldviews and fallibility of the people and companies developing these technologies.
Transparency and oversight policies for AI models are vital for measuring the impact of AI on the environment, for ensuring independent corporate model evaluators can do their work, and for guaranteeing the real-world safety of corporate systems. That means holding our politicians accountable and electing policymakers who can’t be bought by the tech companies.
Stronger labor protections need to be incorporated into every part of the artificial intelligence industry, not just for data workers directly contracted by corporations but across the board: for all workers at risk of having their outputs co-opted into training data or their jobs being automated away. We must organize tech workers and socialists everywhere — in our unions and in our communities — to build shared power worldwide.
Empire of AI reveals how the chatbots and large language models reshaping our daily lives are more than mere annoyances; they constitute a 21st-century colonialism, a system that perpetuates historical exploitation and directly threatens the rights of the global working class. By unmasking AI as a new form of empire, Hao’s work arms us with the clarity needed to reject its exploitative premise. Her investigation goes beyond diagnosing this new empire. It is a call to action. By exposing the mechanics of extraction, labor exploitation, and political capture, she equips us to see clearly what is being sold as progress and to reject it. If her extensive research finds reason to believe in an alternative, so must we. Our duty, as socialists, remains unchanged: to build collective power, win democratic control over the systems that govern our lives, and end imperialism — “artificial” or otherwise.
The empire has been unmasked. Now, the work of dismantling it begins.