
Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI

Nonfiction | Book | Adult | Published in 2025


Part 2: Chapter Summaries & Analyses

Content Warning: This section of the guide includes discussion of child sexual abuse, mental illness, racism, gender discrimination, sexual violence, self-harm, and graphic violence.

Part 2, Chapter 6 Summary: “Ascension”

In March 2019, Sam Altman left YC and began working at OpenAI. He brought with him a belief that startups had to aggressively outperform their competitors to succeed, and he pushed for OpenAI to scale up operations rapidly to create AGI while building on its partnership with Microsoft. Tensions grew within the company. Dario Amodei and other AI “safety” researchers felt that, in its push to scale quickly, OpenAI was not doing its due diligence to ensure its bigger, newer models would not harm humanity. Meanwhile, Altman grew increasingly concerned about corporate espionage and restricted access to the company’s data.


OpenAI developed an application programming interface (API) that would allow companies and developers to use GPT-3 in their own products without giving them access to the “model weights,” or the parameters GPT-3 used to process and collate data. The AI Safety team considered this product particularly dangerous because GPT-3 was especially good at code generation; they worried that a program able to write effective code could rewrite its own code and override human control. Nevertheless, the API was released in June 2020.


Once GPT-3 was released, the Safety team continued to advocate for better controls and parameters for the product, but they were largely overruled. Feeling unable to work effectively from within OpenAI to prevent an AI apocalypse, Dario Amodei, the head of Safety, left to form his own AI company, Anthropic.

Part 2, Chapter 7 Summary: “Science in Captivity”

The release of the GPT-3 API and a recognition of its capabilities sparked a rush of interest from large tech companies in developing large language models (LLMs). Google, Facebook/Meta, and Chinese tech firms like Huawei all began work on similar programs.


Criticism of AI was beginning to grow. In June 2019, PhD student Emma Strubell coauthored a paper documenting the massive amount of electrical energy required to train LLMs: Training transformers like GPT-3 requires enormous processing power, resulting in an outsized carbon footprint. Critical AI researcher Timnit Gebru’s nonprofit Black in AI released research highlighting social justice issues with AI programs, such as findings that AI facial analysis software disproportionately failed to identify people of color.


Gebru had joined Google’s AI ethics team in 2018. In 2020, she raised concerns at Google that GPT-3 “entrench[ed] stereotypes related to gender, race, and religion” (162), due in part to the fact that its dataset was scraped from websites like Reddit known to be hotbeds of racist, sexist behavior. Gebru partnered with computational linguistics professor Emily Bender to write a paper about the ethical issues of LLMs. The resulting landmark paper, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜,” identified four key categories of ethical issues: the environmental impact of energy consumption, rampant data collection, lack of transparency into datasets and weights, and the risks of anthropomorphizing and misinformation.

Before Gebru could present the paper at a conference, however, Google’s Megan Kacholia directed her to retract it. Gebru pushed back, offering instead to make changes if Google had concerns. In response, Google fired her immediately. Gebru’s firing sparked concerns about Google’s commitment to AI ethics, diversity, and transparency, and it raised broader questions about the future of AI. Hao argues “it was a warning that Big AI was increasingly […] distorting and censoring critical scholarship […] to escape scrutiny” (169). Google claimed that Gebru’s paper distorted data to make its claims and insisted that the company was committed to reducing the carbon footprint of its AI models. Despite the debate the moment inspired, transparency among large AI companies continued to decline.

Part 2, Chapter 8 Summary: “Dawn of Commerce”

OpenAI leadership saw the success of the GPT-3 API as a sign that they should push to commercialize their scaled-up models. In 2021, they decided to increase processing power with ever more GPUs, improve efficiency, and improve the quality of their data using reinforcement learning from human feedback (RLHF). They wanted to create an AI “agent” that could semi-autonomously complete tasks like sending an email. The drive for efficiency stemmed from the recognition that OpenAI was reaching the limit of what could be accomplished with more data alone. OpenAI used feedback from the API to continue tweaking the model, although there were missteps, as when Brockman’s brother’s startup, Latitude, used the GPT-3 API to generate child sexual abuse material (CSAM). In 2021, OpenAI introduced a code-generating model called Codex in partnership with GitHub and Microsoft, and it became a revenue generator for Microsoft.


In 2021, Altman created the OpenAI Startup Fund to support other AI companies. This allowed him to profit from new research in the AI field. Although he was not paid a large salary, he benefited from investment arrangements like his stake in YC, which in turn had a stake in OpenAI.

Part 2, Chapter 9 Summary: “Disaster Capitalism”

To address problems like the CSAM issue with GPT-3, OpenAI began to develop automated content moderation programs. To build the program, the company needed human workers to review and label reams of the sexual, violent, and abusive content that they wanted the program to identify. They decided to outsource the work to content reviewers in Kenya through the contractor Sama. Kenyan workers could be paid a fraction of what workers in developed nations would require for the same work, and Kenya had fewer labor protections. Workers were paid “between $1.46 and $3.74 an hour” (192). The work was psychologically devastating for many workers.


Silicon Valley has a long history of exploiting workers in the developing world to do tedious and sometimes psychologically harmful content moderation and labeling tasks. Amazon’s Mechanical Turk, for instance, pays workers pennies to annotate video data for self-driving cars. When Venezuela’s economy crashed in 2016, many Venezuelans began working online for platforms like Mechanical Turk. Hao interviews one such worker, Oskarina Fuentes, who had “reoriented her entire life around working for a platform” (197), Appen, only to ultimately be treated as disposable. Fuentes struggled to access her pay, as she was only allowed to withdraw her earnings once she had accumulated $10. The platform was buggy and required workers to remain tethered to it even when not working in order to claim tasks, and the pay for each task declined over time. Fuentes tells Hao that she does not mind the work; she just wishes she had better labor protections like fixed hours, a manager, and health care benefits.


A popular data labeling contractor for AI firms is Scale AI. Scale initially recruited primarily in Kenya and the Philippines, because they were low-wage countries with large, educated, English-speaking populations. Venezuela later became a popular recruitment location for Scale. Scale ensured a relative monopoly where it operated by initially paying higher wages than its competitors and then, once the competitors had moved on, lowering its wages.


Hao interviews Mophat Okinyi, a Kenyan data worker who worked on ChatGPT content training through the subcontractor Sama. Reviewing pictures and text depicting violence, hate speech, self-harm, and other traumatic material for hours every day had a profound effect on Okinyi; he felt “his sanity fraying” (210). In March 2022, Okinyi was suddenly laid off, and Sama’s contract with OpenAI ended. Okinyi continued to suffer from poor mental health, and his long-term girlfriend left him due to his depression. When ChatGPT was released in November 2022, his brother, a writer, also lost his job to AI. Hao argues that these kinds of labor abuses are endemic to the Silicon Valley AI industry because RLHF is necessary for training AI models. She notes that when workers begin demanding higher wages or the market becomes saturated with “scammers,” third-party contractors like Scale AI close up shop in one country and move elsewhere. For instance, Scale AI ended all operations in Kenya in March 2024 without warning, leaving workers without income or compensation.

Part 2 Analysis

In Part 2, the disconnect between how Silicon Valley conceives of AI ethics and the real-world ethical concerns raised by the models comes into focus. Hao portrays the future-oriented, theoretical concerns of AI “safety” advocates within Silicon Valley as a fundamentally self-serving mythology. The apocalyptic nightmares of the “Doomers” and the utopian dreams of the “Boomers” serve the same function: They cast the work of Altman and his peers as world-historically urgent. The Manhattan Project analogy used by Altman and Musk to describe their work is especially telling, as it compares AI to the atom bomb, an enormously destructive technology whose creators similarly claimed that they had to develop it lest “bad actors” develop it first. Hao’s central thesis is that all such grandiose claims ought to be treated with skepticism, and she advocates Redefining AI Safety Around Present-Day Harms, including environmental destruction, labor abuses, and the spread of misinformation.


Hao compares the culture of AI “safety” to a “secular religion” predicated on utopian/apocalyptic beliefs. In an interview, Hao recalls speaking to “Doomers” and notes that “when they were telling me that AGI could destroy humanity, their voices were quivering with that fear” (Mauran, Cecily. “‘Empire of AI’ Author on OpenAI’s Cult of AGI and Why Sam Altman Tried to Discredit Her Book.” Mashable, 2025). Hao uses an analogy to cast these beliefs as akin to a messianic religion in which AGI plays the role of the long-awaited messiah whose imminent arrival will either save or destroy the world. Even Doomers feel that by involving themselves in the development of the technology, they can forestall the apocalypse by training the model to “align with,” or not harm, humanity. Hao notes that Sam Altman is deft at using these beliefs, which are widespread throughout Silicon Valley, to his own advantage, as when he argued that “it was time to restrict research publications and model deployments” (143) because the model was becoming so advanced that it could become dangerous. Whether he truly believed his AI model was dangerous is immaterial; the claim served as a convenient rationale for withholding data and information, sidestepping The Need for Accountability in Big Tech.


Hao is less concerned with the apocalypse than with present-day harms caused by large AI systems. In Part 2, she develops her argument about Resource-Driven AI Expansion as Neocolonialism. In Part 1, she interviewed Silicon Valley leaders like Sam Altman, who sit at the heart of the “empire”; now, she turns to interviewing those impacted by its labor practices in the “colonies” of the Global South, “seeking to understand not just the macro trends pressing down on them but the daily textures of their lived realities” (193). In this reporting, Hao was inspired by William Dalrymple’s The Anarchy: The Relentless Rise of the East India Company (2019), a history of the British East India Company and its role in the British colonization and exploitation of India. She portrays OpenAI and other Silicon Valley AI companies as acting in a similar capacity in developing nations like Kenya and Venezuela.


She discusses some of these labor abuses in Chapter 9, “Disaster Capitalism.” Disaster capitalism, a term coined by Naomi Klein in The Shock Doctrine (2007), which is cited elsewhere in Empire of AI, describes how capitalists under neoliberalism take advantage of crises, whether natural disasters or economic shocks, to install exploitative labor practices and/or privatize systems. Hao uses this framework to analyze how third-party data companies like Appen took advantage of the economic crash in Venezuela to find laborers willing to work for pennies. Although Empire of AI is putatively about OpenAI as a uniquely bad actor in the tech sector, Hao documents how other big tech firms rely on similar practices, suggesting that the problem is systemic.

