Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI

Nonfiction | Book | Adult | Published in 2025

Part 1: Chapter Summaries & Analyses

Content Warning: This section of the guide includes discussion of child sexual abuse, racism, and mental illness.

Part 1, Chapter 1 Summary: “Divine Right”

Chapter 1 describes the origins of OpenAI and its key founder, Sam Altman. In 2015, Altman held a dinner party with Tesla and SpaceX founder Elon Musk to discuss “the future of AI and humanity” (23). At the time, Altman was the president of Y Combinator (YC), a leading seed fund for Silicon Valley startups. After conversations with tech leaders Demis Hassabis of DeepMind Technologies and Larry Page of Google, Musk had become concerned about the “existential risk” posed by AI technologies. Musk worried that AI would become capable of eluding human control and causing the death of humanity to achieve its own ends, and he was concerned that people like Larry Page did not take this risk seriously. He was also inspired by warnings about AI risk in controversial philosopher Nick Bostrom’s book Superintelligence: Paths, Dangers, Strategies, which argued that AI should be designed to “achieve its objectives without harming humans” (26). Altman shared Musk’s concerns about AI. At the dinner party in 2015, they decided to create an AI company, OpenAI, to head off this potential catastrophe by building an AI they could control.


Sam Altman was born in Chicago in 1985 to a doctor and a real estate developer. He has two younger brothers, Max and Jack, and a younger sister, Annie. Sam, a sensitive child who came out as gay in high school, was driven from a young age and excelled at school. He attended Stanford but dropped out to work on his startup, Loopt, which was funded by YC. While in Silicon Valley, he became known for his skill at making deals and networking. However, senior leaders at Loopt accused Altman of lying and self-enrichment. During this time, he cultivated relationships with Paul Graham, the leader of YC, and Peter Thiel, a tech billionaire and founder of PayPal and Palantir. Graham stepped down from YC and gave the position to Altman when Altman was 28. Thiel advocated for startups to pursue a monopoly position in their marketplace, and Altman adopted this approach with the startups he funded through YC.


While Altman was at YC, his sister Annie accused him of having sexually abused her during her childhood. Annie was later cut off from the family, and Altman denied the allegations. The claims were still in litigation at the time of Empire of AI’s writing. Hao argues that they are representative of “how much the quest for [AI] dominance” relies on “a small handful of fallible people” (45).

Part 1, Chapter 2 Summary: “A Civilizing Mission”

Chapter 2 describes the establishment of OpenAI and the backgrounds of its first two cofounders, Greg Brockman and Ilya Sutskever. Brockman, a founder of the payment processing startup Stripe, is wealthy and personable. Sutskever is a Russian Israeli math prodigy whose first startup, which built AI image-identification software, was acquired by Google. Brockman and Sutskever believed it would be possible to create artificial general intelligence (AGI), an AI model that could replicate human intelligence. The founders decided to start OpenAI as a nonprofit to highlight that their goals were for the good of humanity rather than to turn a profit. They implied they would make their research publicly available, even though they acknowledged privately that they would withhold their scientific breakthroughs.


OpenAI launched in December 2015 with Altman and Musk serving as co-chairs. In 2016, there were increasing global concerns about the harms of social media and tech more generally, especially as the big tech companies became ever more entwined with the military-industrial complex through deals like Google’s partnership with the Pentagon to make “AI-powered surveillance drones” (52). OpenAI positioned itself as “corrupted by neither profit nor state power” (52). However, critics like Timnit Gebru questioned the lack of diversity in AI research, including at OpenAI.


In July 2016, OpenAI brought on tech researchers Dario and Daniela Amodei. They were passionate about AI “safety,” meaning the work to prevent a rogue AI from causing catastrophe or human extinction, though Hao points out that this conception of safety does not encompass racial bias, ecological resource depletion, misinformation, or other negative consequences of AI use.


The early years of OpenAI were chaotic, as the company’s remit was vague and ill-defined and its leadership was inconsistent. Sutskever pushed for OpenAI to acquire more “compute,” or processing power, as the key to its goal of AGI. To that end, the company needed to buy thousands of high-end chips from the chipmaker Nvidia and network them together. The question of how to finance this acquisition led to tension between Musk and the other founders. Musk wanted to transform OpenAI into a for-profit company, but the others objected. Musk eventually withdrew from the project, taking his money with him and deciding to pursue his own AI research within Tesla.


In 2018, Altman decided to create a for-profit limited partnership (LP) under the auspices of OpenAI to sell services and raise money for the nonprofit’s research and development. Returns to investors in the LP would be capped, although at absurdly high rates of up to 100 times the initial investment. In early 2019, Altman stepped down from YC and became CEO of OpenAI.


OpenAI wanted Microsoft to invest in its LP. OpenAI had developed a large language model called GPT-2 that was capable of producing “passages of text that closely resembled human writing” (71). They demonstrated GPT-2 for Microsoft cofounder Bill Gates, and he agreed to the investment. In July 2019, Microsoft invested $1 billion in OpenAI.

Part 1, Chapter 3 Summary: “Nerve Center”

Two weeks after Microsoft’s first investment in OpenAI, author Karen Hao visited the OpenAI offices, which the company shared with Musk’s Neuralink. She had been following OpenAI closely for many years as a reporter for MIT Technology Review, and she was there to write a profile of the company. Hao found the staff cagey and her access to the company’s information limited. The cofounders, Sutskever and Brockman, told her in interviews that they believed they were developing an AGI that would benefit humanity by providing “economic freedom,” “decoupl[ing] the need to work from survival” (78), although they were vague about the “concrete details” of what that meant. They saw their role as pushing AI technology forward. Brockman described developing AGI as his lifelong dream, and he pushed his team hard to accomplish it. He “insisted” that the creation of the for-profit LP structure did not detract from the company’s overall nonprofit, humanity-benefiting goals; he felt it was necessary for OpenAI to remain on the cutting edge of AI development. Hao argues that “the need to be first or perish” (84) was used to justify all of OpenAI’s later, exploitative actions. In February 2020, Hao published her profile of OpenAI. It caused a minor controversy within the company, especially because Hao had access to leaked documents.

Part 1, Chapter 4 Summary: “Dreams of Modernity”

Hao cites Power and Progress by Daron Acemoglu and Simon Johnson to argue that many new technologies, including AI, follow a common progression. New technologies purport to be revolutionary advancements for all of humanity but often in fact reflect the “vision of a narrow elite” (88) that created them. The new technology then further concentrates power within that elite while exploiting the most vulnerable.


Chapter 4 is a brief history of AI. In 1956, Dartmouth professor John McCarthy organized a workshop about “automata studies,” or machine automation. When there was limited interest, he rebranded the subject as “artificial intelligence,” a term that attracted much more interest from funders and scientists. However, the term is vague and obfuscatory because no one can agree on what “intelligence” really means, and it leads people to inappropriately anthropomorphize computer technology. Researchers in the field seek to create computer technologies that appear to replicate human capabilities such as sight, hearing, and reasoning. Because the meaning of “intelligence” keeps shifting, the expectations placed on AI technologies also shift continuously. Now, companies like OpenAI claim that AGI will solve the world’s most serious and intractable problems and therefore must be created at any cost.


In the early days of the field, two groups formed with different understandings of how to create “intelligent” machines. Symbolists believed intelligence came from knowledge and that AI should therefore encode the world’s knowledge using symbolic language and logical reasoning akin to human thought. Connectionists believed that intelligence came from learning and that AI should mimic how human brains take in new information (though not how they reason from that information), building artificial neural networks, composed of interconnected nodes that process information and connect it to other information, to extrapolate patterns from vast volumes of data. These resource-intensive systems were hard to commercialize in an era of limited computing power, so for a long time, symbolists led the field. One major breakthrough of that era was the mid-1960s chatbot ELIZA. However, progress in the symbolist approach slowed as systems became increasingly complex.


In the 1980s, researchers at CMU and UCSD improved the quality of neural networks, leading to new breakthroughs in the machine learning approach now called “deep learning.” These networks are trained by humans on large data sets that provide them with a set of probabilities they can then apply to novel situations. For instance, a neural network is “trained” with a large data set featuring images labeled by humans as depicting either “dog” or “cat.” If it is then given a new picture of either a dog or a cat, it can identify the animal based on a probabilistic calculation of features common to either “dog” or “cat” that it “learned” from the training data set. This is a core element of how contemporary AI systems work, but on a much larger, more complex scale. Because they use probabilistic models, neural networks “reason” inefficiently and may frequently give incorrect answers. However, they are easy to commercialize, unlike symbolist AI systems like IBM’s Watson.
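
To make the learn-from-labeled-examples idea described above concrete, the following minimal sketch (my own illustration, not code from the book) trains a single-“neuron” classifier on a few invented, hand-labeled “dog” and “cat” feature vectors and then assigns a probability to a new example. The feature names and numbers are assumptions chosen only for demonstration; real deep learning systems do the same thing at vastly larger scale.

```python
import numpy as np

# Toy labeled training data: each row is [ear_pointiness, snout_length]
# (invented stand-in features); label 1 = "dog", 0 = "cat".
X = np.array([
    [0.2, 0.9], [0.3, 0.8], [0.1, 0.7],   # dogs: floppier ears, longer snouts
    [0.9, 0.2], [0.8, 0.3], [0.7, 0.1],   # cats: pointier ears, shorter snouts
])
y = np.array([1, 1, 1, 0, 0, 0])

# A single "neuron" (logistic regression) trained by gradient descent
# to extract probabilities from the labeled examples.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))    # predicted probability of "dog"
    w -= 0.5 * (X.T @ (p - y)) / len(y)   # log-loss gradient step for weights
    b -= 0.5 * np.mean(p - y)             # log-loss gradient step for bias

# A new, unlabeled example: the model returns a probability, not a certainty,
# which is why such systems can be confidently wrong.
new_animal = np.array([0.25, 0.85])
prob_dog = 1 / (1 + np.exp(-(new_animal @ w + b)))
print(f"Probability this is a dog: {prob_dog:.2f}")
```

The output is a probability between 0 and 1 rather than a definite answer, which mirrors the probabilistic “reasoning” and occasional wrong answers the chapter attributes to neural networks.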


In the 2010s, Google began to explore neural network commercialization more seriously. Google and other companies benefited from the explosion of surveillance capitalism and social media in the digital age, which provided enormous datasets on which to train models. Other companies capitalized on vulnerable populations who could easily be exploited for their data, as when surveillance footage from Johannesburg slums was used to train AI facial recognition software, a practice known as “data colonialism.” Corporate investment in AI development boomed.


Neural network-based systems continue to face myriad problems. For instance, a small tweak to the input data can result in wildly different responses, as when a self-driving car killed a woman in 2018 because its camera system did not recognize her as a person while she was pushing a bike. However, OpenAI cofounder Sutskever believed these problems could be overcome with more “compute,” or processing power, and more data. “Pure connectionist” approaches remain the focus of commercial AI development. Others, like Gary Marcus of NYU, argue that these problems can only be overcome with a hybrid approach combining symbolism and connectionism.


Introduced in 2022, OpenAI’s generative AI program ChatGPT pushed the boundaries of AI technology with a dataset and processing capacity that “hit the limits […] of what the world has available” (111). It gained mainstream popularity through clever marketing that played on the human tendency to anthropomorphize chatbots. However, it remained prone to “hallucinations,” or false assertions born of the probabilistic architecture that undergirds the whole system. It is also vulnerable to cybersecurity attacks and can be used to “amplify discriminatory and hateful content” (114) due to its reliance on datasets scraped from the darkest corners of the internet.


Hao argues that AI researchers should reconsider the AI model pushed by OpenAI: the pure connectionist model based on amplifying processing power and data sets. She argues that they should continue to explore symbolist or hybrid AI models that might have lower input costs and less harmful outcomes.

Part 1, Chapter 5 Summary: “Scale of Ambition”

In Chapter 5, Hao discusses OpenAI’s drive to “scale [up],” a process driven by cofounder Ilya Sutskever. Sutskever had a “die-hard belief” that the problems of connectionist, deep learning AI models could be solved by scaling up computing capacity and increasing the size of data sets. His tendency toward bluntness could create internal challenges and PR problems, as when he declared at a conference that “AGI would eventually disappear all jobs” (119).


In 2017, Google introduced a new type of neural network called the transformer, which was capable of analyzing and utilizing “long-range patterns” while requiring less training time than previous neural networks (120). Sutskever was impressed by the technology’s ability to analyze text in larger contexts than other neural nets, and he began experimenting with scaling up a transformer at OpenAI to predict the next word in a sentence. In 2018, OpenAI introduced its first transformer model, the Generative Pre-trained Transformer (GPT-1).
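
The next-word objective described above can be illustrated with a short sketch that is not from the book: it is a minimal example assuming the Hugging Face transformers library and the now publicly available gpt2 checkpoint (neither is named in the summary), and the prompt text is an arbitrary assumption. The model assigns a probability to every possible next token, and generation simply repeats this step.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the publicly released GPT-2 model and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# An arbitrary prompt, used only for demonstration.
prompt = "The future of artificial intelligence is"
inputs = tokenizer(prompt, return_tensors="pt")

# The model assigns a score to every possible token at every position.
with torch.no_grad():
    logits = model(**inputs).logits

# Convert the scores for the final position into probabilities and show the
# five tokens (word pieces) the model considers most likely to come next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

Because the output is a probability distribution over tokens rather than a fact lookup, scaling this objective up (as OpenAI did with GPT-2 and GPT-3) produces fluent text without guaranteeing accuracy.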


OpenAI researcher Alec Radford worked with Dario Amodei to scale up the GPT-1 model with more compute and data. The scaled-up model, GPT-2, had greater coherence and capability than GPT-1, but it also “could quickly veer into conspiracy theories” (123). OpenAI began promoting GPT-2’s capabilities without sharing the science behind it, justifying the decision to withhold the information with concerns that the technology could be used for “disinformation or propaganda” (126). This decision attracted significant criticism from other researchers in the field. Instead of releasing GPT-2 to the public, OpenAI released it only to select professionals and companies. Dario Amodei saw scaling the pure language model of GPT-2 as the fastest way to “safely” achieve AGI.


OpenAI began developing GPT-3, which they wanted to be even bigger. Dario Amodei developed the model using the enormous processing power that came with Microsoft’s recent investment, which included 10,000 Nvidia chips. The team also needed a bigger data set, so OpenAI resorted to using lower-quality data scrapes. Because the low-quality data made the model more prone to producing violent, dangerous, or incorrect information, OpenAI relied on vulnerable workers in developing countries such as Kenya, who were paid wages as low as $2 a day to train models by moderating inputs and outputs, a process called reinforcement learning from human feedback (RLHF).

Part 1 Analysis

The Prologue opens in medias res, describing a key moment in the history of OpenAI when tensions came to a head, culminating in the unsuccessful ouster of Sam Altman in November 2023. In Part 1, Hao goes back in the chronology to describe the origins of the conflict that would reach its crisis point in November 2023. She gives a brief biography of Sam Altman, CEO of OpenAI, and the two cofounders, Greg Brockman and Ilya Sutskever. She also gives a brief history of the field of artificial intelligence as a whole and of the form of AI developed by OpenAI, which prioritizes scale as the key to developing “AGI.” These introductory chapters weave together the two dominant forms of reporting established in the Prologue. First, Hao relies on the journalistic access that comes with her history in the tech sector to write detailed reports on the personalities behind OpenAI. For instance, she describes how Ilya Sutskever’s “eyes glazed over as he […] painted a science fiction-like vision of the future” in a meeting while the Bay Area’s sky was “orange from nearby forest fires,” an image that emphasizes the apocalyptic contrast between the aspirations of OpenAI leadership and the harsh present realities facing humanity (120). Second, Hao draws on her knowledge and ethical framework to weave in moral judgments about the ideology underlying commercial AI development. For instance, she uses adjectives and modifiers to express her skepticism about the claims of tech leadership, as when she writes, “so-called AI safety [emphasis added]” (55) or when she notes the “disturbing” conversations about eugenics common among AI “safety” advocates (129).


In Chapter 3, Hao describes interviews with OpenAI leadership that took place during the brief window when she was granted access to the company to write a profile in 2019. That Chapter 3 is one of the shortest chapters in Empire of AI is indicative of the company’s secretive culture and the difficulty of gaining access to company leaders. Hao notes that the interviews with leadership were frustratingly vague and roundabout, writing, “my conversation with Brockman and Sutskever continued on in circles until we ran out the clock” (79). She is skeptical of their claims, noting at one point that Sutskever’s “burst of emotion” felt “somewhat performative” (79). She then details how she was prevented from sitting in on meetings, exploring the office, and speaking with lower-level employees. Her experience is an early indication of The Need for Accountability in Big Tech. After Empire of AI’s publication, Altman sought to discredit Hao’s work on X (formerly Twitter), writing, “no book will get everything right, especially when some people are so intent on twisting things.” This response illustrates the company’s self-protective culture. The 2019 reporting trip also marks Hao’s pivot away from access journalism, which relies on writing largely flattering coverage of the powerful to ensure continued access, toward a more critical mode of reporting: Because Hao wrote a work critical of OpenAI following her visit, she was denied future interview opportunities with the company and had to rely on other forms of reporting. This loss of access precipitates a radical shift in the book’s perspective, as Hao instead begins interviewing the ordinary people around the world whose lives are affected by AI. This move from a top-down to a bottom-up perspective informs the book’s concern with Redefining AI Safety Around Present-Day Harms. The most urgent problem with AI as practiced by OpenAI, Hao argues, is not that it might one day lead to the end of humanity but that it is causing environmental devastation, rampant misinformation, and labor exploitation right now.
