Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI

Karen Hao


Nonfiction | Book | Adult | Published in 2025


Important Quotes

“This book is not a corporate book. While it tells the inside story of OpenAI, that story is meant to be a prism through which to see far beyond this one company. It is a profile of a scientific ambition turned into an aggressive ideological, money-fueled quest; an examination of its multifaceted and expansive footprint; a meditation on power.”


(Author’s Note, Page xii)

In Empire of AI, Hao uses OpenAI as a paradigmatic example of Silicon Valley’s harmful approach to AI development. She argues that it is not “a corporate book,” by which she means a hagiographic or uncritical history of a tech corporation. She is clear that Empire of AI is a work of critique whose scope extends beyond the company at its center.

“Musk and Altman, who had until then both taken more hands-off approaches as cochairmen, each tried to install himself as CEO. Altman won out. Musk left the organization in early 2018 and took his money with him. In hindsight, the rift was the first major sign that OpenAI was not in fact an altruistic project but rather one of ego.”


(Prologue, Page 13)

Hao focuses on the personalities involved in the creation of OpenAI and its products, most notably Altman, Musk, and cofounders Brockman and Sutskever. In this quote, she signals that she uses their personalities and conflicts as a microcosm of the larger ethical issues of AI development.

“Under the hood, generative AI models are monstrosities, built from consuming previously unfathomable amounts of data, labor, computing power, and natural resources […] Over the years, I’ve found only one metaphor that encapsulates the nature of what these AI power players are: empires.”


(Prologue, Page 16)

Hao presents Research-Driven AI Expansion as Neocolonialism. She uses the term “empire” as a “metaphor” for the imperial processes required to build generative AI models. The use of the first-person pronoun “I” highlights that this analysis is in part a subjective critique.

“Each of these puzzle pieces—Sam’s ascendence, his character and relationships, the divisiveness he left in his wake, the flows of money and power—speaks to the path that led to his sudden and fleeting ouster. For a brief moment, the rest of the world caught a glimpse into the struggles happening at the highest levels to dictate the future of artificial intelligence. It would reveal just how much the quest for dominance of that technology—already restructuring society and terraforming our earth—ultimately rests on the polarized values, clashing egos, and messy humanity of a small handful of fallible people.”


(Part 1, Chapter 1, Page 45)

During the imperial age, kings, government leaders, and oligarchs like Cecil Rhodes drove the expansion and entrenchment of colonial exploitation and control. Within the “empire of AI,” Hao argues, Altman and other tech leaders act in a similar fashion to establish their primacy, power, and wealth. Their conflicts and personal ideologies drive the movement.

“In its relentless pursuit of commercial products and AGI, the AI industry had produced expansive negative side effects, including the wide-scale infringement of privacy to train facial recognition and the spiraling environmental costs of the data centers required to support the technology’s development.”


(Part 1, Chapter 2, Page 57)

Hao sees the push to commercialize AI products and develop AGI as the source of the “negative side effects” of the technology. Her emphasis here on these particular aspects of the technology is in keeping with her feeling that AI is not inherently harmful, but has been developed in a harmful way.

“What was clear was that OpenAI was beginning to exert meaningful sway over AI research and the way policymakers were learning to understand the technology. The lab’s decision to revamp itself into a partially for-profit business would have ripple effects across its spheres of influence in industry and government.”


(Part 1, Chapter 3, Page 75)

Hao critiques how OpenAI has been able to centralize industry research into AI, foreclosing alternative pathways of development. She traces its path to dominance as starting with its partial restructuring from a nonprofit to a for-profit model.

“It was this fundamental assumption—the need to be first or perish—that set in motion all of OpenAI’s actions and their far-reaching consequences. It put a ticking clock on each of OpenAI’s research advancements, based not on the timescale of careful deliberation but on the relentless pace required to cross the finish line before anyone else. It justified OpenAI’s consumption of an unfathomable amount of resources: both compute, regardless of its impact on the environment; and data, the amassing of which couldn’t be slowed by getting consent or abiding by regulations.”


(Part 1, Chapter 3, Page 84)

Hao outlines the permission structure, or the decision-making framework, that OpenAI leadership and employees used to justify their arguably unethical actions. She cites “the need to be first or perish,” a common mantra within Silicon Valley, as a key aspect of this permission structure. This dictum echoes the circumstances underlying the Manhattan Project—the development of the atom bomb during World War II—a self-serving analogy that both Altman and Musk use to justify their AI expansionism.

“The name artificial intelligence was thus a marketing tool from the very beginning, the promise of what the technology could bring embedded within it.”


(Part 1, Chapter 4, Page 90)

Hao critiques “the pervasive use of abstract, detached language to sanitize and normalize” the AI field’s practices (101). She traces the origins of this practice back to the development of the term “artificial intelligence” itself, as “intelligence” is a vaguely defined, practically unquantifiable metric.

“Under the specter of AGI’s unstoppable arrival, the company needed to keep developing more and more powerful models to prepare itself and to prepare society. Even if those models carried with them their own risks, the experience they offered to prevent or face possible AI apocalypse made those risks bearable.”


(Part 1, Chapter 5, Pages 131-132)

Hao here explains the seemingly paradoxical secular religious belief to which many researchers at OpenAI adhered: Even though they fear that AGI could have apocalyptic effects, they continue developing ever more powerful AI models in order to prepare humanity for that eventuality.

“That moment also became far bigger than Gebru or Google itself. It became a symbol of the intersecting challenges that plagued the AI industry. It was a warning that Big AI was increasingly going the way of Big Tobacco, as two researchers put it, distorting and censoring critical scholarship against the interests of the public to escape scrutiny. It highlighted myriad other issues, including the complete concentration of talent, resources, and technologies in for-profit environments that allowed companies to act so audaciously because they knew they had little chance of being fact-checked independently; the continued abysmal lack of diversity within the spaces that had the most power to control these technologies; and the lack of employee protections against forceful and sudden retaliation if they tried to speak out about unethical corporate practices.”


(Part 2, Chapter 7, Page 170)

Hao compares the practices of big tech firms to those of big tobacco. For decades, tobacco companies paid scientific and medical researchers to hide or obscure the harms of tobacco use, going so far as to advertise in premier medical journals. In comparing big tech firms to big tobacco, Hao is implying that they hide or obscure the harms of their products in a similar fashion. This comparison illustrates The Need for Accountability in Big Tech.

“‘It was sad to me that we deployed this API with our mission of benefiting humanity, and everyone had such positive impressions about how we had users saving time on customer service or whatever,’ one former OpenAI employee says, ‘but in reality, a lot of our traffic was going to AI Dungeon child sexual content and a creepy AI girlfriend product.’”


(Part 2, Chapter 8, Page 181)

Hao had initially believed in OpenAI’s mission, but she had become disillusioned as she learned more about the company. This arc is echoed in this quote from a former OpenAI employee who believed in the mission until they realized that the company was not upholding its stated goals, illustrating the importance of Redefining AI Safety Around Present-Day Harms.

“Just as the first era of AI commercialization laid the groundwork for the generative AI era’s amassing of data and capitalization of compute, so, too, did it create the foundations for its wide-scale labor exploitation.”


(Part 2, Chapter 9, Page 194)

Hao briefly notes that many of the exploitative practices she documents in generative AI development have their roots in earlier practices of big tech companies, such as the use of Amazon’s Mechanical Turk to pay people in the Global South pennies to label data for self-driving car algorithms. These “foundations” are industry-wide and illustrate how exploitative practices go beyond OpenAI.

“To live in San Francisco and work in tech is to confront daily the cognitive dissonance between the future and the present, between narrative and reality.”


(Part 3, Chapter 10, Page 227)

Although Hao often writes in a clear, technical narrative form throughout Empire of AI, occasionally she uses figurative language to illustrate her points. Here she uses the parallel structure of “future and present” and “narrative and reality” to intimate that the narrative of the future espoused by the tech industry is vastly different from people’s present-day lived realities.

“But where with Applied, this was reason to sustain its intensity to prepare for launch while keeping GPT-4’s capabilities a secret for as long as possible, with Safety, Altman used it to continue underscoring his caution. ‘My number one safety concern is acceleration risk,’ he said, adopting their vocabulary.”


(Part 3, Chapter 10, Page 249)

This passage exemplifies Hao’s characterization of Altman throughout the work. She notes that Altman will say different things to different groups of people to get them to do what he wants. In this case, he “adopts” the language of AI “safety” to get the AI Safety team within OpenAI to agree to his efforts to accelerate commercial product development.

“The central question these movements are asking is how to imagine a different path for AI development not rooted in extraction, he says. ‘If we are going to develop this technology in the same way that we used to, we are going to devastate the earth.’”


(Part 3, Chapter 13, Page 274)

Hao quotes AI researcher and professor Martín Tironi Rodó’s thoughts on the neocolonial and exploitative aspects of the dominant paradigm of AI development. Like Hao, Tironi believes it is possible to build a “different path for AI.”

“It’s very clear that the AI industry today is rooted in a colonial ideology, he says: It imposes its worldview and its technology—what is AI, what is good AI, what it means to create an industry of AI—on the rest of the world.”


(Part 3, Chapter 12, Page 300)

Tironi, like Hao, critiques the hegemony that Silicon Valley companies have over AI development. As an AI researcher in Chile, part of the Global South, he believes that those on the margins should have a greater say in how the technology is created and utilized.

“At times she has been consumed by a sinking feeling that no matter how much she speaks up, the world is somehow in a conspiracy against her. It’s the same loss of agency and anger I’ve seen etched on the faces of people globally when they throw so much of the little they have at challenging the empires’ narratives, and then watch as the people they are up against wield the kind of power that can deploy billions of dollars in capital, construct vast infrastructure, hire and fire tens of thousands of contractors, and, with a few soft-spoken words—at an event, to Congress, to heads of state, to journalists—smooth over the murmurs of protest in the way of their will.”


(Part 3, Chapter 14, Page 340)

Hao connects the powerlessness of Annie Altman in her battle against her tech billionaire brother Sam Altman with the powerlessness of those on the margins in the AI debate, namely those in the Global South who are up against a plethora of forces that are reluctant to hear or address their critiques. She intimates that Annie is a victim of these corporate forces just as environmental or labor activists in the Global South are.

“To Sutskever, the result was the most toxic combination: a directionless, chaotic, and backstabbing environment where people no longer had shared information or a shared foundation of trust to agree on critical decisions about how to move forward. This infighting was undermining what Sutskever saw as the two pillars of OpenAI’s mission: It was slowing down research progress and eroding any chance at making sound AI safety decisions.”


(Part 4, Chapter 15, Page 351)

Hao shares Ilya Sutskever’s beliefs about the toxic corporate environment at OpenAI and how it impeded the mission. Notably, his concerns are not just those typical of disgruntled employees, e.g., “a directionless, chaotic, and backstabbing environment,” but are tied to his secular religious beliefs about the moral imperative to create an “aligned” AGI as quickly as possible.

“A fear Sutskever had articulated resonated with them: What did it mean that OpenAI was trying to build AGI when its senior leadership couldn’t trust either basic or critical information coming from the CEO?”


(Part 4, Chapter 16, Page 362)

Hao paraphrases Mira Murati’s concerns about Altman’s tendency to dissimulate and withhold information and how it could negatively impact the development of AI technology at OpenAI. These concerns were shared by the three independent members of the board, spurring their attempt to remove Altman as CEO.

“Science is a process of consensus building. The significance of any advance—whether in AI or otherwise—tends to be highly subjective the moment that it happens. Only through peer review, the test of time, and sustained impact does a particular advance become elevated to ‘a breakthrough.’ With OpenAI performing its work in secrecy—and the rest of the industry now following—the ‘breakthrough’ label could really only be treated as a matter of the company’s opinion.”


(Part 4, Chapter 16, Page 374)

This is one of the clearest examples of Hao’s method of argumentation in Empire of AI. She states her claim plainly (“Science is a process of consensus building”) and then shows how OpenAI’s practices do not amount to true science, since the company does not share its findings for external verification. As a computer scientist herself, Hao holds firm convictions about how scientific research should be conducted.

“It feels to me like we just stumbled on a new fact of nature or science or whatever you want to call it, which is, like, we can create, you can—I don’t believe this literally but it’s like a spiritual point—intelligence is just this emergent property of matter and that’s like a rule of physics or something.”


(Part 4, Chapter 17, Page 383)

This quote from Altman is indicative of the kind of vague, techno-jargon-riddled hype that Hao accuses the CEO of using to obscure his company’s processes. He also alludes to the secular religious beliefs of many in Silicon Valley when he states as a “spiritual point” that “intelligence is just this emergent property of matter.” Hao criticizes him for using mystification—taking advantage of the fact that laypeople often fail to understand how AI works—to promote his own products, secure investment, and avoid regulation.

“Six years after my initial skepticism about OpenAI’s altruism, I’ve come to firmly believe that OpenAI’s mission—to ensure AGI benefits all of humanity—may have begun as a sincere stroke of idealism, but it has since become a uniquely potent formula for consolidating resources and constructing an empire-esque power structure.”


(Part 4, Chapter 18, Page 400)

This statement is a clear summation of the argument Hao makes throughout Empire of AI. It emphasizes the neocolonial structures OpenAI uses to accomplish its mission of creating AGI.

“‘Data is the last frontier of colonization,’ Mahelona told me: The empires of old seized land from Indigenous communities and then forced them to buy it back, with new restrictive terms and services, if they wanted to regain ownership. ‘AI is just a land grab all over again. Big Tech likes to collect your data more or less for free—to build whatever they want to, whatever their endgame is—and then turn it around and sell it back to you as a service.’”


(Epilogue, Page 412)

Hao quotes Keoni Mahelona, a Native Hawaiian and AI researcher who has led the development of the Te Hiku AI project for preserving and promoting the te reo Māori language. He explicitly describes the exploitative practices of companies like OpenAI in neocolonial terms. He is one of many people from marginalized communities quoted throughout Empire of AI who critique the imperial AI model.

“The critiques that I lay out in this book of OpenAI’s and Silicon Valley’s broader vision are not by any means meant to dismiss AI in its entirety. What I reject is the dangerous notion that broad benefit from AI can only be derived from—indeed, will ever emerge from—a vision for the technology that requires the complete capitulation of our privacy, our agency, and our worth, including the value of our labor and art, toward an ultimately imperial centralization project.”


(Epilogue, Page 413)

It is only toward the end of the book that Hao lays out her own views on AI technology. In the Epilogue, she makes clear that she does not see the technology as inherently bad or exploitative, only the way it has been developed by companies like OpenAI. She uses strong language in describing the dominant AI movement as “dangerous.”

“The antidote to the mysticism and mirage of AI hype is to teach people about how AI works, about its strengths and shortcomings, about the systems that shape its development, about the worldviews and fallibility of the people and companies developing these technologies. As Joseph Weizenbaum, MIT professor and inventor of the ELIZA chatbot, said in the 1960s, ‘Once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away.’”


(Epilogue, Pages 420-421)

Joseph Weizenbaum was an early inventor of AI technologies who spent much of the latter part of his career warning about the dangers of chatbots and AI more generally. In citing him at the end of Empire of AI, Hao connects her very contemporary critique with his historical one and inscribes her work within a longer lineage of criticisms about the technology.
