
Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI

Nonfiction | Book | Adult | Published in 2025


Part 4-Epilogue Chapter Summaries & Analyses

Part 4, Chapter 15 Summary: “The Gambit”

After the release of the New York magazine profile of Sam Altman, OpenAI board member Helen Toner met with Chief Technology Officer Mira Murati. Murati had worked for three years at Tesla before joining OpenAI and was well respected for her problem-solving abilities and her history of “cleaning up” Altman’s “messes.” Altman became even more difficult to work with after ChatGPT’s release made him a public figure. He continued to tell people what they wanted to hear in the moment, sometimes lying or withholding information in the process, then cutting people off when they challenged him. For instance, rather than mediating a dispute between Ilya Sutskever and researcher Jakub Pachocki, Altman privately took Pachocki’s side without telling Sutskever, leaving their teams working on parallel or overlapping projects without clear direction. When Murati confronted Altman about these and similar leadership issues, he grew defensive and iced her out, convinced she was reporting them to the board. Murati met with Toner in September 2023 to discuss these concerns, including her worry that OpenAI was shipping products without sufficient testing. She then followed up with Toner and encouraged the board to find an independent director who was not an Altman ally.


A few days later, Toner received an email from Sutskever expressing similar concerns. Sutskever had decided to reach out to Toner following Annie’s allegations of abuse because “abuse” was “also the word Sutskever felt best captured his own observations of Altman” (352). Because Altman’s form of abuse was subtle, it took Sutskever and others a long time to learn to recognize it. Sutskever wanted a shakeup of the board and suggested removing Altman as CEO and replacing him with Murati. Murati, for her part, relayed to Toner that Brockman, like Altman, resisted oversight, and that she wished she could fire him. Both Sutskever and Murati felt that Altman was not a good leader.


On October 25, Toner met with Altman at his request. He was concerned that a paper she had coauthored criticized OpenAI. Altman then told Sutskever that McCauley, another board member, had agreed with him that Toner should be removed from the board over the critical paper. When Sutskever asked McCauley about this claim, she categorically denied ever suggesting that Toner be fired. Altman had lied.

Part 4, Chapter 16 Summary: “Cloak-and-Dagger”

On October 31, 2023, Toner, McCauley, and D’Angelo met to discuss next steps. They resolved to fire Altman and replace him with Murati as interim CEO. They conducted an investigation and learned that Altman routinely lied or dissembled and that several employees “described Altman’s behaviors as abuse and manipulation” (361). They also felt he did not have the experience to lead a “mature” and growing company. On November 16, Murati confirmed her support for their plan.


On Friday, November 17, the board publicly announced Altman’s removal. They were shocked by the immediate pushback from OpenAI employees and leadership. Murati declined to publicly support the coup attempt, and Sutskever folded in the face of ongoing pressure, feeling that continued resistance would mean the end of OpenAI. Elon Musk weighed in by sharing an open letter, presumably from former members of the Safety team, who felt that Altman and Brockman did not take AI “safety” seriously. By Monday, the board recognized that it had lost, and Altman was reinstated as CEO. The Safety team felt “betrayed by the board” (372).


Two weeks later, Altman held an all-hands meeting to discuss a project of Sutskever’s; Sutskever himself was notably absent. The team had developed an algorithm called Q* that was capable of more efficient inferences than traditional AI models. Although OpenAI described Q* as a “breakthrough,” it refused to release any of its data for independent testing.


On March 8, 2024, the board’s investigation into Altman ended with the conclusion he had not done anything that warranted his firing.

Part 4, Chapter 17 Summary: “Reckoning”

After Altman’s reinstatement, the Doomers grew increasingly concerned about OpenAI’s plans to “create an AI chip company” (377) and a digital assistant modeled after the Scarlett Johansson character in the Spike Jonze movie Her (2013). Building on a breakthrough in voice capabilities, OpenAI debuted an audio AI system called Omni that used GPT-4o, the newest iteration of the model.


Meanwhile, Altman’s public relations approach became increasingly cutthroat. Hao notes, “if Altman was being brazen and boastful, most likely something wasn’t going well” (383). OpenAI was facing increasing competition from rival AI companies like Anthropic and Google, lawsuits from publishers for using their text as training data without permission, and pressure from Microsoft to earn more money. Employees felt that the company’s growing insularity was antithetical to its original goal of benefiting humanity.


After the GPT-4o demonstration, OpenAI announced that Sutskever was leaving the company. Soon after, Jan Leike, the leader of the Superalignment project, left to join Anthropic. Leike’s departure was seen as further evidence of the minimization of “safety” within OpenAI. On May 17, Kelsey Piper, an EA-affiliated Vox journalist, reported that researcher Daniel Kokotajlo, who had worked in Safety, had been pressured to sign a non-disclosure agreement (NDA) when he left the company or lose his equity. The documents even stated that if he spoke out after leaving OpenAI, the company could “claw back” his vested equity. News of this clause prompted employee outrage. On May 20, actor Scarlett Johansson publicly accused OpenAI of using her voice for the GPT-4o system. In a meeting on May 22, OpenAI leadership claimed, unconvincingly, that it had been unaware of the “claw back” clause and assured employees it would be fixed. As the controversies mounted, Murati, Brockman, and Pachocki begged Sutskever to return to the company, only to retract the offer the next day. Hao argues that, instead of reflecting during this moment of “Omnicrisis,” OpenAI doubled down and expected it all to blow over.

Part 4, Chapter 18 Summary: “A Formula for Empire”

Hao argues that Altman perverts values like openness in the furtherance of his “empire.” She argues that he does this by “centralizing talent” around a vision, centralizing resources while overriding regulation, and keeping the mission “vague” so it can be directed in whatever way the “centralizer” desires. For instance, the meaning of AGI keeps changing depending on the needs of the company.


In May 2024, pressure for greater transparency within OpenAI and the AI industry more broadly continued. Long-tenured executives, including Murati, began to leave the company. Sutskever started his own company, Safe Superintelligence. OpenAI’s scaling efforts hit a wall. In October 2024, OpenAI’s investors instituted a new caveat that they “could demand their money back if the company did not convert into a for-profit in two years” (405). At the end of 2024, OpenAI announced it would convert into a “for-profit public benefit corporation” with an attached nonprofit arm holding shares in the for-profit. Altman remained publicly confident about the future of OpenAI and its development of “superintelligence.”

Epilogue Summary: “How the Empire Falls”

In the Epilogue, Hao describes a Māori AI language project that she argues is an example of an alternative to OpenAI’s “imperial” form of AI. The te reo Māori language is considered endangered due to decades of its repression by English colonizers in New Zealand. Indigenous couple Peter-Lucas Jones and Keoni Mahelona turned to AI to help preserve and promote the te reo language. They used recordings from Te Hiku radio, a Māori radio station, and made new recordings of Māori elders to capture as much of the language as they could. Then, with the consent of those they had recorded, they trained a limited AI on that dataset to be used to teach and promote the language. They have an agreement to share the model only with pre-vetted, licensed vendors whose uses will benefit Māori people, and they run the model on local databases. It is a “small, specialized model that excels at one thing,” in contrast with OpenAI models, which seek to “[hoover] up as much data as possible” and centralize it (412).


Hao argues that AI need not be built along “imperial” lines like OpenAI’s; rather, it can be built on a smaller scale that benefits people without exploiting them. Former Google researcher and AI ethics advocate Timnit Gebru and her coauthor Alex Hanna have continued to work on decentralizing AI through their Distributed AI Research Institute (DAIR). In Kenya, data workers like Mophat Okinyi continue to advocate for better labor rights for data workers in the developing world. In Uruguay, activists like Daniel Pena continue to fight against data centers that exploit local natural resources. Hao presents these as examples of anti-imperial approaches to AI. She draws on queer AI researcher Ria Kalluri’s framework of challenging the centralization of knowledge, resources, and influence throughout the AI sector. She argues that the focus should not be on “the theoretical rogue AI harms of Doomerism, but the existing real-world harms” (419).


Hao argues that there should be more transparency into how AI works, better labor protections for AI workers and those whose creative works are used as data, and better education to demystify AI.

Part 4-Epilogue Analysis

Part 4 of Empire of AI is primarily an account of corporate intrigue, offering greater detail about the attempt to oust Sam Altman as CEO of OpenAI discussed briefly in the Prologue. This episode is the key illustration of The Need for Accountability in Big Tech: As a result of the cult of personality fostered by Sam Altman, not even the board is capable of overseeing his actions, leading to a lack of accountability throughout the organization.


As is typical of trade books that present an argument, Hao concludes in her Epilogue with a description of solutions and alternatives to the problems of AI she has identified throughout. She focuses on the Te Hiku project as an example of an anti-colonial and non-exploitative mode of AI development. Through this example, Hao argues that AI can be developed to benefit rather than harm marginalized communities. However, she does not acknowledge statements by critics like Māori technology ethicist Karaitiana Taiuru, who notes that the very creation of a te reo Māori voice database runs counter to Māori cultural practices, as “our traditional stories warn us of such recordings of the voice” (“Can Te Reo Māori Be Digitally Colonised?” Te Kete o Karaitiana Taiuru, 2018). Hao is supportive of AI technology broadly, noting that her criticism is reserved for the form of AI that is “an ultimately imperial centralization project” (413). Thus, she does not consider critics of AI technology as a whole, whether local activists or contemporary neo-Luddites like Brian Merchant, author of Blood in the Machine (2023).


Hao ends with a description of the ongoing efforts of activists to regulate and reform AI practices. She largely focuses on the actions of the marginalized, including people in the Global South, queer people, and Black female activists like Timnit Gebru. This selection suggests that she believes the remedy to the imperial model of AI will come from those currently marginalized by the system as they “redistribute” power away from those, like Sam Altman, who currently hold it. The book ends on a clear and optimistic call to action: “May it be a new ground upon which many more after will rise up and build” (420). This places it within the growing body of work criticizing OpenAI specifically and Silicon Valley’s AI project more generally.
