
Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI

Nonfiction | Book | Adult | Published in 2025


Part 3: Chapter Summaries & Analyses

Content Warning: This section of the guide includes discussion of addiction, mental illness, and child sexual abuse.

Part 3, Chapter 10 Summary: “Gods and Demons”

Hao describes her own experience living in San Francisco, where income inequality is very high. While tech workers like Hao live and work in abundance, unhoused people and people experiencing addiction are left without support. She argues that this is emblematic of how “the tech industry could profess big, bold visions about changing the world […] while ignoring the very problems at its door” (227).


It was within this context that the Effective Altruism (EA) movement took root in Silicon Valley. EA is a philosophy that argues, in part, that the most effective way to help others is to make a lot of money and use it to fund non-profit work focused on the biggest global problems; many EA adherents identify AGI as the single greatest danger facing humanity. EA ideology profoundly shaped AI “safety” discourse. A key vector of this influence was former billionaire Sam Bankman-Fried (SBF), a proponent of both EA and AI safety who spent lavishly to promote the ideology before being imprisoned for financial fraud. AI safety discourse became largely focused on one’s estimate of the “probability of doom,” or p(doom), the term used to describe “how likely you think it is that AGI will lead to catastrophic outcomes” (232) such as human extinction. Those who believe catastrophe is likely are termed Doomers. In opposition to the Doomers are the Boomers, who believe it is imperative to build AGI as quickly as possible. Altman publicly articulates both Doomer and Boomer beliefs, depending on the audience. Friction between the two camps fueled conflict within OpenAI.


In early 2022, OpenAI released DALL-E 2, an image generator based on existing Transformer technology applied to images rather than language. Researchers raised concerns that DALL-E 2, and the next model, DALL-E 3, could be used to generate sexually explicit images, including CSAM, but those concerns were largely ignored. Within OpenAI, the Applied team, which built commercial products, was increasingly dismissive of the Safety team’s concerns about the problems that could result from photorealistic DALL-E 3 images, such as deepfakes and election manipulation. When DALL-E 2 launched in March 2022, it was an immediate hit with the public.


Brockman worked directly on finding more data for the development of GPT-4. This caused frustration and inefficiency, as Brockman refused to accept feedback or oversight from anyone in the company, while Altman was “strangely permissive of his behavior” (243). Brockman scraped YouTube for more data and added multimodal capabilities (e.g., text to voice) to the model. The model was fed data from AP science exams to prove to Microsoft’s Bill Gates that it could “reason” scientifically, and Microsoft agreed to another $10 billion investment. OpenAI rushed to bring GPT-4 to market. The Safety team felt it did not have enough time or resources to sufficiently vet the model, but Brockman minimized its concerns. Cofounder Ilya Sutskever felt the company was on the brink of developing AGI, but he worried that it would not be “aligned” with human interests, meaning it could become “self-aware” and harm humans.

Part 3, Chapter 11 Summary: “Apex”

In October 2022, OpenAI held a company retreat to show off what it had accomplished. At the retreat, Greg Brockman boasted that his wife, Anna, had used AI to help diagnose a complex medical issue she was facing. After the retreat, OpenAI executives grew concerned that Anthropic would release a new chatbot before GPT-4 was ready, so they decided to push out ChatGPT, based on the GPT-3.5 model, to beat Anthropic to market. No one at the company expected ChatGPT to become as immediately popular as it did: It gained 100 million users in two months and made OpenAI a “household name.” The company hired hundreds of new employees to manage its rapid growth. Microsoft contributed GPUs to support ChatGPT’s new server demands and pushed its employees to incorporate OpenAI models into their work. In February 2023, OpenAI released a commercial version of ChatGPT. Efforts to make the company’s models more efficient were shelved in favor of supporting commercial products.


Microsoft and OpenAI recognized that they needed more supercomputers to support further growth.

Part 3, Chapter 12 Summary: “Plundered Earth”

In Chapter 12, Hao describes how big tech firms exploit water and mineral resources in the Global South to build their data centers. She notes that the Atacama Desert in Chile has become a popular destination for data center development. Chile has a history of exploitation by European and Western powers dating back to its colonization by Spain, followed by centuries of resource extraction by mining companies and US intervention in the 1970s to overthrow a democratically elected leader in favor of the neoliberal dictator Augusto Pinochet. Hao argues that the development of data centers for AI is part of this exploitative legacy, as they extract copper, lithium, water, and land from the country for the benefit of international companies like Meta and Google. The massive data centers required for AI, known as “megacampuses,” cover hundreds of acres and require enormous amounts of energy and water, as well as huge mineral inputs, such as copper and lithium, to build their physical infrastructure. OpenAI built supercomputers in Iowa, Arizona, and Wisconsin, and along with Microsoft, it began exploring new locations for its computers.


Hao interviews Sonia Ramos, who grew up in a Chilean mining family and is now an activist protesting the massive scale of new data center construction in the country. Ramos and other activists in Chile use direct action, legal avenues, public awareness campaigns, and other methods to stop “hyperscalers” like Google from building new megacampuses there. Hao gives the example of Google attempting to build a new megacampus in 2019: In its planning proposal for a prospective site, Google buried its immense intended municipal water usage in a 347-page document. Activists protested when the proposed water use was discovered, and Google’s project was blocked. Google then attempted to build a megacampus in Uruguay; environmental activists there likewise protested, and the fight is ongoing. Both Chile and Uruguay are in the midst of a multiyear drought, and water is a precious commodity.


In 2022, Microsoft began to develop a megacampus in Chile. The leftist Chilean government felt it needed to approve this kind of foreign direct investment for the sake of the country’s economy, despite popular opposition and environmental costs. Chilean professor and activist Martín Tinori Rodó argues that “the AI industry today is rooted in colonial ideology […] it imposes its worldview and its technology” (300).

Part 3, Chapter 13 Summary: “The Two Prophets”

In May 2023, Sam Altman went to Washington, DC, to meet with lawmakers about the incredible potential of AI. The same day he testified before Congress, Hollywood concept artists came to the capital to raise awareness of AI’s devastating impact on their industry; with far fewer resources to attract lawmakers’ attention, they were largely overlooked. Hao describes this as a “darkly comedic illustration of who had power and influence in the AI policy conversation” (303).


OpenAI and other large AI companies pushed the US government to hamper China’s AI development through policies like limits on GPU sales to China, while steering the government away from regulatory oversight of the companies themselves. The Biden administration largely adopted OpenAI’s proposals in its executive order on AI regulation: For instance, it placed an arbitrary limit on processing power without addressing deeper ethical issues with AI use, and it did not force AI companies to release their model weights or other data, reducing transparency. The US government seemed to accept the companies’ claims about AI without scrutiny. Altman then went on a world tour to promote OpenAI.


Meanwhile, debates between Boomers and Doomers at OpenAI were ongoing. Sutskever became increasingly concerned about AI “safety” and proposed creating a team focused on “developing new alignment methods for superintelligence” (315) to prevent a rogue AGI. OpenAI characterized the team’s work as a Manhattan Project: The company wanted to create and control a world-destroying weapon before the US’s rivals (e.g., China) could do so. At the same time, it continued developing an AI agent that could act with greater autonomy.


In 2023, OpenAI lost three board members and did not replace them. The board now consisted of Altman, Brockman, Sutskever, and three “independent” members: Adam D’Angelo, Tasha McCauley, and Helen Toner. Toner is closely tied to the EA and AI “safety” communities. Communication between Altman and the board began to break down, and it appeared that Altman was deliberately hiding information from the directors. For instance, D’Angelo was shocked to learn “at a dinner party” (324) that the OpenAI Startup Fund was wholly owned by Altman himself, rather than by OpenAI.

Part 3, Chapter 14 Summary: “Deliverance”

On September 25, 2023, Sam Altman faced new scrutiny when Elizabeth Weil published a critical profile of him in New York magazine that included discussion of his sister, Annie, and the abuse she alleges she suffered at the hands of Altman and her family. Hao interviewed Annie and obtained corroborating evidence, such as medical records, for some of her claims. Annie had been academically gifted as a child, but her career was hampered by chronic health conditions, including polycystic ovary syndrome (PCOS) and obsessive-compulsive disorder (OCD). Annie alleges that the Altman family cut off her financial support, forcing her to become unhoused and turn to sex work to survive. After their father died in 2018, Annie counted on her inheritance to support herself, but her mother instead had the money placed in a retirement trust. Sam occasionally extended offers of limited financial support to Annie, but she was ultimately left to fend for herself. In 2021, a therapist diagnosed Annie with generalized anxiety disorder and post-traumatic stress disorder (PTSD) connected to “a personal history of sexual abuse in childhood” (334). Hao speculates that Annie’s turn to sex work might have triggered her memories of abuse.


In 2021, Annie posted publicly on Twitter about the abuse she experienced in her family, “mostly [by] Sam Altman” (335). In 2023, she shared her experiences with Weil, who had reached out to her for comment. In 2025, Annie filed a civil suit against Sam alleging that he had sexually abused her from the age of three.


In the lead-up to the article’s release, Altman began telling people that Annie had “borderline personality disorder” (337), although she had never been diagnosed with it, in an attempt to discredit her claims. In April 2024, Hannah Wong, OpenAI’s chief communications officer, met with Hao and emphasized Annie’s “mental health challenges” (339). Hao argues that Annie’s experience is similar to that of others “sidelined or harmed by the empires of AI and their vision” (339): Annie is relatively powerless in the face of a wealthy billionaire who can minimize her claims of abuse.

Part 3 Analysis

In Part 3, Hao expands on her argument about Resource-Driven AI Expansion as Neocolonialism, exploring two prongs of resource exploitation: data scraping and megacampus development in the Global South. These tendencies are illustrated by two key examples: the scraping of YouTube for training data and the attempts to build data centers in Chile and Uruguay, respectively. Hao’s argument about data collection is linked to neocolonialism only by implication. She describes how OpenAI, led by Greg Brockman, collected data from YouTube to train GPT-4 in a way that “violated the platform’s terms of service” (244), much as colonial powers took resources from unwitting or marginalized populations with little oversight or legal restriction. The comparison is more explicit in Hao’s discussion of Google’s and Microsoft’s efforts to build large data centers in South America; she buttresses her argument about the colonial nature of these efforts by extensively quoting activists who describe corporate resource extraction in neocolonial terms. Hao’s discussion of data centers does not specifically reference OpenAI, although Microsoft is a key OpenAI partner. Instead, she describes the neocolonial practices of Google and Microsoft while implying that OpenAI acts in a similar fashion.


These examples, along with others in Part 3, contribute to the theme of The Need for Accountability in Big Tech. For instance, OpenAI’s ability to willfully violate YouTube’s terms of service without fear of consequence points to the impunity with which such companies can act in a domain with little government oversight. The discussion of Google’s attempt to bury its planned municipal water usage in a 347-page environmental filing illustrates how difficult it is even for governments and activists who actively intend to limit Big Tech’s actions to do so. When pressured, the companies obfuscate, as when Google’s lawyer mistranslated elements of its presentation to the local government in Chile (289). As Chapter 13 illustrates, it is not only governments in the Global South that fail to hold Big Tech to account: Hao describes how OpenAI and other Big Tech companies effectively lobby Washington lawmakers to write regulations favorable to the companies, a practice known as regulatory capture. In an interview, Hao notes that this dynamic has only accelerated since the book was written:


The story of the empire of AI is so deeply connected to what’s happening right now with the Trump Administration and DOGE and the complete collapse of democratic norms in the US, because this is what happens when you allow certain individuals to consolidate so much wealth, so much power, that they can basically just manipulate democracy (Mauran).


In Chapter 14, Hao discusses the allegations of Sam Altman’s sister Annie against him and his family. These allegations are shocking and unsettled; at the time of this writing, the litigation is still pending. However, Hao works to corroborate as many of Annie’s claims as she can through interviews with Annie and reference to primary documents like doctor’s notes and emails. While the most explosive claim, that Sam Altman sexually abused his sister, remains unverified, Hao uses what she is able to corroborate to argue that Annie’s experience is a microcosm of the harms caused by “imperial” AI and its concentration of wealth and power as described elsewhere in the work.
