
Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI

Nonfiction | Book | Adult | Published in 2025


Background

Cultural Context: AI Critique

For decades, mainstream tech journalism largely focused on the benefits of new technologies developed in Silicon Valley, such as the invention of the smartphone. Tech entrepreneurs like Elon Musk and Jeff Bezos were lauded for their products and leadership. However, since OpenAI introduced ChatGPT to the broader public in 2022, concerns about artificial intelligence technologies developed in Silicon Valley and their impacts on society and the environment have become increasingly mainstream. The publication of Empire of AI comes at a time of heightened scrutiny of AI and the tech sector more broadly. Hao joins a growing number of advocates critical of AI, and of OpenAI specifically, including computer scientist Timnit Gebru (who is featured extensively in Empire of AI), philosopher Émile P. Torres, and journalists Ed Zitron and Paris Marx, among others.


Writers, artists, photographers, and other creative professionals have criticized large AI models, alleging that their copyrighted materials were used without permission to generate new texts and images. As of 2024, there were 47 separate lawsuits pending in the US alone over copyright infringement by AI companies, including OpenAI (“Master List: Copyright Lawsuits v. AI Companies in the US.” Chat GPT Is Eating the World, 2024). Others, like Timnit Gebru, are concerned about the inherent biases within AI systems—for instance, Black students are more likely to have their work falsely flagged as artificially generated. Environmental activists criticize the water and energy consumption of the large data centers required to train and run large language models, and protests against their construction have grown across the United States and elsewhere.


Hao focuses on the negative impacts of AI development in the Global South, particularly as they relate to data workers and the resources consumed by data center development. She is also critical of big tech leaders, particularly Sam Altman, for their neocolonial approach to AI development and their inability or unwillingness to address the real-world harms described above.

Ideological Context: Artificial Intelligence, Existential Risk, and “Safety”

As described in Empire of AI, Silicon Valley big tech firms share a core belief in economies of scale, centralization, and the pursuit of monopoly, typical of large industries across sectors under capitalism. However, many in the Silicon Valley tech sector, and in the field of AI development specifically, hold another set of “secular religious” beliefs that researchers Timnit Gebru and Émile Torres have grouped together under the term TESCREALism (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism). What unifies this constellation of “interconnected and overlapping” beliefs is its singular focus on the utopian and apocalyptic possibilities of advanced artificial intelligence technologies that do not yet exist, ignoring more mundane, present-day harms like resource extraction and labor exploitation (Gebru, Timnit, and Émile P. Torres. “The TESCREAL Bundle: Eugenics and the Promise of Utopia Through Artificial General Intelligence.” First Monday, 2024).


Many tech leaders, such as Musk and OpenAI cofounder Ilya Sutskever, believe that the development of advanced artificial intelligence is inevitable. They also believe that an AI not developed with a deep understanding of the importance of preserving human life will cause human extinction in the pursuit of its own goals. Thus, they see it as a matter of existential importance to be the ones who control and direct the development of this “inevitable” technology, ensuring that if and when it becomes sentient and self-aware, it will not wipe out humankind. This is what is referred to within this belief system as “safety.”


In an interview, Hao acknowledged that it can be hard to take these beliefs seriously when one is not an adherent. However, she argues that it is an “all-consuming belief that a lot of people within [the AI] space start to have” (Mauran, Cecily. “‘Empire of AI’ Author on OpenAI’s Cult of AGI and Why Sam Altman Tried to Discredit Her Book.” Mashable, 2025), suggesting that the insular culture of AI developers leads to an echo chamber in which far-fetched beliefs begin to seem inescapable. Hao’s core argument in Empire of AI is about the importance of Redefining AI Safety Around Present-Day Harms. She argues that the utopian and apocalyptic visions of the TESCREALists are ultimately self-serving, as they generate a sense of urgency around rapid AI development. Instead, those who are genuinely interested in AI safety should turn their attention to the harm AI is doing right now, including copyright violation, environmental devastation, and exploitation of workers.
