Empire of AI: Dreams and Nightmares of Sam Altman’s OpenAI (2025), by tech journalist Karen Hao, is a critique of the practices of OpenAI, its CEO Sam Altman, and its industry-leading methods of artificial intelligence (AI) development. Hao analyzes how Altman’s drive to be the first to create artificial general intelligence (AGI), or an AI product capable of human-like cognition, has resulted in an industry-wide race to scale up, with devastating environmental and human consequences. What Hao describes as OpenAI’s “imperial model” of AI development exploits labor and environmental resources in the Global South in an effort to fulfill the science fiction dream of an AI system that can replace human intellectual labor. Hao uses her years of experience reporting on OpenAI and questionable practices within the tech sector for MIT Technology Review, Quartz, and other publications to craft a critical narrative of OpenAI’s corporate history and its negative impacts on the world.
This guide uses and references the 2025 Penguin Kindle edition of Empire of AI.
Content Warning: The source material and guide feature depictions of racism, child sexual abuse, gender discrimination, ableism, addiction, mental illness, self-harm, sexual violence, and graphic violence.
In the Prologue, Karen Hao opens with a beat-by-beat account of the OpenAI board’s attempt to remove Sam Altman from his position as chief executive officer (CEO) due to growing concerns about his “abusive” behavior, persistent dishonesty, and lack of transparency with the board. After the board announced his ouster, employees and tech industry leaders expressed support for Altman, forcing the board to reinstate him as CEO only a few days later. Hao uses this conflict as a window into the personalities shaping the development of AI models within OpenAI. She argues that Altman and OpenAI leadership have sought to increase their programs’ data sources and computing power through centralization and market domination at the expense of marginalized communities, their labor, and their resources. She characterizes these exploitative practices as a form of empire.
In Part 1, Hao describes the origins of AI technology, the organization OpenAI, the backgrounds of its founders, and its approach to AI development. OpenAI was founded in 2015 as a partnership between tech billionaires Sam Altman and Elon Musk, who shared a concern that a company would create an AGI—an artificial intelligence combining human cognitive capacity with machine efficiency—that would wipe out humanity. They decided to create OpenAI to ensure that they would develop an AGI “aligned” with human values. Hao profiles OpenAI cofounders Greg Brockman and Ilya Sutskever: Brockman is an experienced Silicon Valley tech executive, and Sutskever is a math prodigy who believes in the notion of AGI. Sutskever argued that the organization needed to rapidly scale up its computing power by purchasing graphics processing units (GPUs). In 2019, Sam Altman joined the company full time, and OpenAI, formerly a nonprofit organization, restructured to include a for-profit “Limited Partnership” entity to raise the investment funds needed to buy the GPUs. In Chapter 3, Hao describes her 2019 visit to the OpenAI offices to “embed” with the company while researching a profile for MIT Technology Review. She was frustrated by senior leadership’s vague answers to her questions about AI development and by the limited access she was given to employees and the workplace. In Chapter 4, Hao gives a brief history of AI from the coinage of the term “artificial intelligence” in 1956 to the present, arguing that the “scaling” paradigm dominant in today’s marketplace is not the only way to develop AI technologies. In Chapter 5, Hao describes OpenAI’s early successes in developing AI models by applying transformer architectures at massive scale, noting that the increasing demands of scale led the company to use lower-quality data to create its GPT models.
In Part 2, Hao analyzes the growth and commercialization of OpenAI’s products and the labor exploitation in the Global South, specifically Kenya and Venezuela, used to develop them. She describes the 2020 release of OpenAI’s GPT-3, with an application programming interface that would allow developers to use its technology; it was released despite concerns from the OpenAI Safety team about the technology’s potential dangers. Hao also describes the controversy surrounding AI ethics researcher Timnit Gebru: When Google learned that her paper critiqued the company’s use of natural resources and the potential psychological harms of AI technology, it fired her. Hao describes the increased use of human feedback, known as reinforcement learning from human feedback (RLHF), to train AI models to prevent them from generating harmful content, including depictions of child abuse. She profiles data workers in the Global South, particularly in Kenya and Venezuela, who label data and train AI models while working in precarious positions for extremely low pay. One Kenyan data worker she profiles suffered severe psychological distress after reviewing violent and sexually explicit content for OpenAI’s third-party contractor.
In Part 3, Hao describes the mounting ethical concerns around OpenAI’s growth and the environmental costs of AI data centers. In Chapter 10, she describes the tension between Doomers—who fear AI will cause human extinction if not properly controlled—and Boomers—who feel driven to develop AI at any cost. She covers the release and potential hazards of OpenAI’s image generator DALL-E 2 and of GPT-4, as well as the release of OpenAI’s widely popular AI chatbot, ChatGPT. Hao reports on the environmental impacts of the massive data centers built in Chile and Uruguay to develop and support AI, interviewing activists working to stop their construction because of their enormous consumption of water and mineral resources. She also reports on how OpenAI and other “big tech” firms successfully lobbied the US government to craft AI regulation shaped around the concerns of Doomers rather than the critiques of marginalized people such as artists. Finally, Hao reports on the claims of Annie Altman, Sam Altman’s sister, who alleges that he abused her emotionally and sexually throughout her life. Hao compares Annie’s experiences to those of other marginalized groups without the power to hold tech billionaires like Sam Altman accountable for their actions and the impacts of their technologies.
In Part 4, Hao reports on the conflict within OpenAI that led to the board’s attempt to remove Sam Altman as CEO and the ongoing fallout of “The Blip.” She describes how Ilya Sutskever, Chief Technology Officer Mira Murati, and board member Helen Toner became increasingly concerned with Altman’s lack of transparency, tendency to dissimulate, and poor leadership. On Friday, November 17, 2023, the board announced its decision to remove Altman from his post as CEO, generating immediate pushback. Altman was reinstated within days, and the board’s subsequent investigation cleared him of any wrongdoing. Shortly after OpenAI released its newest, most advanced model, GPT-4o, Sutskever left the company, along with several allies who shared his concerns about AI “safety.” In 2024, investors insisted that OpenAI transform into a for-profit company by the end of 2026. As of 2025, Altman remained at the head of the company.
In the Epilogue, Hao discusses Te Hiku, a Māori AI language project that she argues exemplifies a decolonial, decentralized approach to AI, in contrast with OpenAI’s “neocolonial” model. She summarizes efforts by other researchers in the field to decentralize and democratize AI, such as Timnit Gebru and Alex Hanna’s Distributed AI Research Institute. She concludes that there should be more oversight and regulation to redistribute AI resources.