Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI

Karen Hao


Nonfiction | Book | Adult | Published in 2025


Themes

Resource-Driven AI Expansion as Neocolonialism

Hao’s core argument in Empire of AI is that OpenAI and other big tech companies pursue AI technology as a form of neocolonialism. OpenAI established the Western model of AI development, in which companies race to gobble up as much data as possible in pursuit of the possibly chimerical goal of AGI. Drawing on the work of Stanford AI researcher Ria Kalluri, Hao argues that to achieve the “scale” required to stay ahead of its competitors, OpenAI has controlled “knowledge, resources, and influence” (418) in a way similar to historical colonial models of exploitation.


Hao illustrates how OpenAI has shaped AI research through its intense focus on scale and AGI, leading to the neglect of other subject areas, like efficacy and bias, across the field. The company’s allocation of computing power is a clear example of how it uses its resources to push its interests and agenda. Hao notes that OpenAI had a series of projects dedicated to improving the efficiency of its models. However, “the project ate up significant computational resources,” so it was “scrapped” in favor of the continuing development of commercial products based on scale (269). She also argues that OpenAI’s concentrated resources shape the research field: Universities are unable to compete with the scale of information OpenAI controls, leading researchers to leave academia for the private sector and “atrophying independent academic research” (133).


Hao is particularly critical of the amount of human and material resources controlled by OpenAI and other big tech firms and how they acquire those resources. AI development requires human feedback to label its data and train its models. Rather than pay people a living wage, OpenAI outsources the work to third-party contractors who operate with little oversight in Global South countries where workers have fewer labor protections. Hao documents how those who work on reviewing traumatic materials can develop serious psychological issues without any care from the companies they work for. The need for processing power also leads companies to develop enormous, environmentally harmful data centers around the world. These data centers, predominantly built in lower-income countries in the Global South, use incredible amounts of potable water and mineral resources like copper and lithium. Hao interviews activists who describe the exploitation of Global South resources by big tech firms as a form of neocolonialism.


OpenAI and other big tech firms use their competitive advantage and economies of scale to lobby governments on their behalf. Critics of AI, who have comparatively few resources, struggle to get the government to listen to their concerns. A key example provided by Hao is the influence OpenAI had over Biden’s executive order “regulating” AI. During Sam Altman’s lobbying blitz, Hollywood workers struggled to get government officials to listen to their concerns about how AI would impact their industry. Through these techniques and others, Hao argues, “OpenAI is now leading our acceleration toward this modern-day colonial world order” (16).

The Need for Accountability in Big Tech

A common motto of Silicon Valley start-ups is “move fast and break things” (240). Hao argues that Altman and other big tech leaders use this permission structure to justify risky, unethical, or even illegal decisions in pursuit of their development goals. Tech leaders believe “startups could and should move into legal gray areas […] to disrupt or revolutionize industries” (135). Hao critiques this approach to AI tech development. She illustrates how tech leaders, particularly Altman, evade oversight and accountability, and she calls for greater transparency in the field. A core element of Hao’s narrative is documenting how OpenAI under Altman’s leadership abandoned the “openness” value implied by its name to instead act with secrecy and impunity to gather data, hide scientific research from the public, and override concerns of AI “safety” researchers within the company.


As part of its scaling ethos, OpenAI needed to harvest ever greater amounts of data to train its GPT models. Hao notes that the field relies on “mass scraping and extraction” without considering, for example, “authors and artists who stand in opposition” to their work being used in this way without their consent (101). OpenAI cofounder Greg Brockman oversaw the scraping of YouTube videos in a violation of the platform’s terms of service. Litigation is pending in the United States and elsewhere to contest the scraping of copyrighted materials.


Internal oversight of the field has largely been unsuccessful. Hao illustrates how Altman uses his wealth and connections to avoid oversight and accountability, including by other members of the tech industry. She notes that “it’s hard to find people within Altman’s inner circle who don’t have some kind of financial relationship with him” (41). When people “disagreed with or challenged him,” he would “ice” them out (346). In Hao’s reporting, this dynamic makes it difficult for the OpenAI board or other executives to hold him accountable for his dissembling and withholding. Hao reports in detail how the OpenAI board was unable to oust Altman from his leadership position due to the support and connections he had within and outside the company.


Government oversight of AI development is similarly lacking. Hao interviews AI researcher Deborah Raji who notes that lobbyists from Big Tech and OpenAI “had monopolized the message in Washington for so long that many policymakers now viewed it as gospel” (311). This dynamic has allowed OpenAI and others to effectively write their own government regulation, allowing the sector to appear regulated without addressing the larger ethical and labor issues of the technology.


Hao argues that there need to be stronger labor protections for data workers, greater transparency about AI companies’ data collection efforts and supply chains, and more environmental regulation of data center construction to address the issues raised throughout Empire of AI.

Redefining AI Safety Around Present-Day Harms

There are two paradigms of debate around the ethics of AI development. In Silicon Valley, many AI developers are primarily concerned with the potential of an AGI system becoming self-aware and causing human extinction. In Empire of AI, Karen Hao explores smaller-scale, present-day ethical concerns with AI development, including labor and resource exploitation. She argues that big tech firms should shift their focus from “the theoretical rogue AI harms of Doomerism [to] the existing real-world harms, from discrimination to misinformation to job automation” (419).


Tech leaders, including OpenAI cofounders Ilya Sutskever and Elon Musk, are major proponents of the belief that an AGI could lead to either a utopian or an apocalyptic future. Hao reports on these beliefs, noting that, for instance, Musk described AI as the “biggest existential threat” to humanity and its development as “summoning the demon” (24). Sutskever encourages these beliefs about the need for AI “safety” and “alignment” within OpenAI. Hao reports on a dramatic ceremony in which Sutskever burned an effigy that “represented a good, aligned AGI that OpenAI had built, only to discover it was actually lying and deceitful. OpenAI’s duty, he said, was to destroy it” (255). The AI Safety team within OpenAI was primarily concerned with this set of ethical issues around AI development.


Hao argues that this quasi-religious set of eschatological concerns distracts from the real-world ethical harms she documents in Empire of AI. These include, but are not limited to, the use of AI to create violent or sexually explicit materials including CSAM, labor abuses of data workers in the Global South, ecological disasters due to the immense amount of water and mineral resources required for data center development, and the illegal or unethical data scrapes of materials created by human artists and writers for use in generative AI systems.


Hao uses the example of Timnit Gebru’s firing from Google’s AI Ethics team to illustrate the difficulty of getting big tech firms to take these real-world ethical concerns seriously. Gebru coauthored a paper entitled “On the Dangers of Stochastic Parrots,” identifying key limitations and dangers of large language models. The term “stochastic parrots” implies that LLMs operate through mere guesswork (the meaning of the ancient Greek word stokhastikos), mimicking human thought without understanding the underlying concepts, like parrots. The paper argues that because of this lack of understanding, LLMs have the potential to reproduce and amplify dangerous cultural biases. When Google found out about Gebru’s paper, it pressured her to retract it. When she refused, the company fired her. Hao criticizes Google’s decision as a sign that it was not capable of “developing a capacity for self-reflection” (168).


Hao suggests that there are more ethical ways to build AI models that are decentralized and allow people to control how their data is used. She cites the Te Hiku project to revitalize the te reo Māori language through Indigenous control as one such example. She sees this “small, specialized model that excels at one thing” (412) as a way to address the many ethical concerns about AI development described in Empire of AI.

