Artificial intelligence (AI) is a vaguely defined term that broadly refers to machine learning models. Dartmouth professor John McCarthy coined the term in 1956 to describe “the pursuit of machines capable of automatic behavior” (89). Hao critiques the term AI as a “marketing tool” that encourages anthropomorphizing and obscures how these machine systems actually work.
An AI agent is an AI model capable of semi-autonomous behavior. AI agents represent the frontier of AI technology. Though they operate semi-autonomously, they must be provided with highly specific goals and the tools to achieve them—including web searches, external datasets, and other AI agents optimized for related subtasks. If not carefully managed, AI agents in business and institutional settings can raise significant security and privacy concerns.
AGI is a term that broadly refers to a theoretical artificial intelligence model that can “digitally replicat[e] true human-level intelligence” (47). However, there is no consensus as to what “intelligence” means, and the meaning of AGI is continually shifting. In Empire of AI, Hao argues that this vagueness allows OpenAI to define AGI in whatever way suits the company at any given moment. She argues that the company uses this nebulous, almost mystical goal to “interpret and reinterpret its mission accordingly, to entrench its dominance” (402).
Alignment is a term closely associated with the Oxford philosopher Nick Bostrom. He argues that AI must be aligned with human values to ensure that it “extrapolate[s] beyond explicit instructions to achieve its objectives without harming humans” (26). Otherwise, Doomers argue, an AI could take its instructions too literally and pursue its goals at the expense of human life.
Boomers is the term Hao uses to refer to those who belong to the “e/acc” or “effective accelerationist” movement. Boomers believe that “technological progress is not just universally good, it’s a moral imperative to make that progress as fast as possible” (232).
Compute is an industry term referring to the amount of computing power used to train or operate an AI model. The more graphics processing units (GPUs) a company has, the more compute it can use.
In the field of machine learning, connectionists believe “that intelligence comes from learning” (94). They design AI models that “[mimic] the ways our brains process signals and information” (94). This approach has led to the development of neural networks, which form the basis of most contemporary commercial AI models.
Deep learning is the term coined by University of Toronto professor Geoffrey Hinton to describe multi-layered neural networks. These networks are capable of more nuanced predictions than earlier forms of neural networks because they benefit from the exchange of information between network layers.
Doomers is the term Hao uses to refer to those who have a quasi-religious belief that a sufficiently advanced AGI could escape human control and cause human extinction. They evaluate AI systems based on “p(doom)” or “probability of doom,” the probability “that AGI will lead to catastrophic outcomes” (232). They believe that those developing AGI need to be focused on this possibility in order to prevent catastrophe.
GPUs are a kind of computer processor once most commonly used to “quickly render graphics on computers” (60). GPUs are also well-suited to training AI models as they are capable of “crunching massive amounts of numbers in parallel” (60). Nvidia is the leading provider of GPUs for AI companies like OpenAI. The more GPUs networked together in a supercomputer, the more training data can be processed for an AI model.
The connections between nodes or “neurons” in neural networks are governed by the model’s “weights.” Neural networks are probabilistic: The weights determine the strength of the connections between data points. Initially distributed randomly, model weights are then fine-tuned during training, including through reinforcement learning from human feedback (RLHF). Model weights contribute significantly to the accuracy and alignment of an AI model; a slight tweak to those weights can produce dramatically different outputs.
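The role of weights can be illustrated with a toy, single-neuron sketch (purely illustrative; real models have billions of weights set by training, not by hand). Note how a small change to one weight shifts the output:

```python
import math

def sigmoid(x):
    # Squashes any number into the range (0, 1), a common neuron activation.
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    # Each weight sets the strength of one input's connection to the neuron.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(total)

inputs = [0.5, 0.8]
a = neuron_output(inputs, [1.0, -1.0], 0.0)  # one setting of the weights
b = neuron_output(inputs, [1.2, -1.0], 0.0)  # a slight tweak to one weight
print(a, b)  # the tweak nudges the neuron's output upward
```

Training adjusts millions or billions of such weights at once, which is why small changes can ripple into dramatically different model behavior.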
Neural networks are “data-processing software loosely designed to mirror the brain’s interlocking connections” (94). They rely on “fuzzy logic,” or probabilistic connections between data points to generate outputs. This technology is the basis for modern AI systems.
RLHF is the process by which AI models are aligned with human preferences to ensure that outputs are helpful and not dangerous to humanity. In RLHF, human workers label and evaluate model prompts and outputs. Hao documents how RLHF data workers in the developing world are exploited by third-party contractors who pay low wages and provide little to no protection for workers exposed to violent or traumatizing content.
In the domain of machine learning in Silicon Valley, safety refers primarily to the need to ensure that AI models will not seek to annihilate humanity or otherwise cause catastrophe. This is done by testing to ensure alignment. Doomers are particularly concerned with AI “safety.” Hao critiques this view of safety for not sufficiently considering other AI ethics problems, such as labor exploitation and resource use.
Scale refers to expanding an AI model, incorporating more data and processing power to improve the model’s outputs. Scaling laws is a term that refers to “the relationship between the performance of a deep learning model and its volume of data, amount of compute, and number of parameters” (123). OpenAI and many other AI developers treat scale as the best possible way to advance AI models.
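Scaling laws of the kind quoted above are commonly modeled as power laws, where a model’s error falls smoothly as its size grows. A minimal sketch (the constant and exponent here are illustrative placeholders, not OpenAI’s measured values):

```python
def scaling_law_loss(n_params, n_c=8.8e13, alpha=0.076):
    # Illustrative power law: predicted test loss shrinks as the
    # parameter count grows. n_c and alpha are placeholder constants.
    return (n_c / n_params) ** alpha

print(scaling_law_loss(1e9))   # smaller model: higher loss
print(scaling_law_loss(1e11))  # 100x larger model: lower loss
```

The smooth, predictable shape of such curves is what convinced OpenAI that simply scaling up data, compute, and parameters would keep improving its models.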
In the field of machine learning, symbolists believe that “intelligence comes from knowing” (94) and that the best way to achieve AI is to “encode symbolic representations of the world’s knowledge into machines” (94). IBM’s Watson computer was an early example of a symbolist AI system. The symbolist approach was broadly abandoned when the systems broke down as they grew in complexity.
Y Combinator (YC) is a leading Silicon Valley startup accelerator. Founded in 2005 by Paul Graham and Jessica Livingston, YC provides seed money and support to promising tech startups. Sam Altman was a member of YC’s inaugural group of startups and later served as its president from 2014 to 2019, when he left to work full time at OpenAI.


