
Artificial Intelligence: A Guide for Thinking Humans

Nonfiction | Book | Adult | Published in 2019


Index of Terms

Adversarial Attack

An adversarial attack is a method of manipulating an AI system (often a vision or language model) by feeding it subtle, carefully crafted input that causes the system to make incorrect predictions with high confidence. Mitchell cites adversarial attacks to illustrate the brittleness of modern deep-learning systems and how shallow pattern recognition differs from genuine understanding. These examples underscore the book’s main argument: Even highly successful AI lacks the robustness and common sense that humans naturally apply.
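The core trick can be sketched with a toy linear model. All numbers here are invented for illustration; real attacks target deep networks, where computed gradients play the role that the weight signs play in this sketch:

```python
import numpy as np

# A toy linear "classifier": score > 0 means class "A", else "B".
# Weights and inputs are invented; this is only a sketch of the idea.
w = np.array([2.0, -1.9, 0.1])

def predict(x):
    return "A" if np.dot(w, x) > 0 else "B"

x = np.array([1.0, 1.0, 1.0])      # clean input, scored just above zero

# Adversarial step (in the spirit of the "fast gradient sign" method):
# nudge each feature slightly in the direction that lowers the score.
epsilon = 0.2
x_adv = x - epsilon * np.sign(w)   # small, targeted perturbation

print(predict(x), predict(x_adv))  # the tiny nudge flips the label
```

The perturbation changes each feature by only 0.2, yet the predicted class flips, which is the pattern adversarial examples exploit in much larger networks.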

Analogy-Making

Analogy-making refers to the cognitive ability to perceive a structural relationship between two situations, ideas, or patterns. Mitchell argues that analogy lies at the heart of human intelligence and underpins abstraction, generalization, and concept formation. By highlighting projects like Copycat and Metacat, she shows that current AI systems struggle with analogy, revealing a substantial gap between machine pattern-matching and human reasoning.

Artificial General Intelligence (AGI)

AGI is a hypothetical form of AI capable of performing any intellectual task that a human can, including reasoning, understanding, abstraction, and transfer learning. Throughout the book, Mitchell emphasizes how far current AI is from achieving AGI, noting that many basic cognitive abilities (commonsense reasoning, intuitive physics, analogy) remain unsolved scientific problems. In the book, AGI is a conceptual benchmark for evaluating both hype and realistic expectations in AI research.

Bias (Algorithmic Bias)

In AI, bias refers to systematic errors or unfair behaviors that arise from skewed training data, flawed assumptions, or embedded social inequalities. Mitchell explains that machine-learning systems absorb patterns directly from data and thus often reproduce societal prejudices, such as gendered associations in word embeddings. The concept is central to her argument that AI systems require transparent evaluation and careful ethical oversight.
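The word-embedding example can be made concrete with hand-made vectors. The numbers below are invented, chosen only to mimic the kind of gendered skew that embeddings trained on real text have been shown to absorb:

```python
import numpy as np

# Toy, hand-made "embeddings" (invented numbers, not from any real model),
# arranged so the occupation vector leans toward one gender direction.
emb = {
    "man":   np.array([ 1.0, 0.2]),
    "woman": np.array([-1.0, 0.2]),
    "nurse": np.array([-0.8, 0.6]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, -1.0 = opposite."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["nurse"], emb["woman"]))  # noticeably higher
print(cosine(emb["nurse"], emb["man"]))    # noticeably lower
```

A system that ranks words by such similarities would silently reproduce the skew, which is why Mitchell stresses auditing what models learn from data.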

Brittleness

Brittleness describes an AI system’s tendency to perform impressively on familiar tasks but fail in unpredictable, sometimes extreme ways when conditions deviate from its training data or when rare situations arise (as in autonomous driving). Mitchell uses brittleness to critique claims that deep learning has achieved humanlike intelligence: Brittle systems lack the flexibility and durability that characterize human understanding. The concept recurs throughout the chapters on vision, language, and autonomous driving.

Commonsense Knowledge

Commonsense knowledge consists of the intuitive understanding of physical, social, and psychological facts that humans learn early in life (such as object permanence, cause and effect, and expectations about human behavior). Mitchell argues that commonsense knowledge is foundational to human intelligence but has proven extraordinarily difficult to encode or learn in AI systems. Its absence in current AI highlights the “barrier of meaning” separating machine performance from human understanding.

Copycat (Mitchell and Hofstadter’s Project)

Copycat is a computational model designed to study analogy-making through letter-string transformations (e.g., mapping “abc → abd” to analogous examples). Mitchell uses Copycat to demonstrate how analogy requires flexible, context-sensitive reasoning rather than rigid rules or brute-force computation. The system’s successes and limits help clarify why analogy remains a central hurdle for AI.
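One such transformation can be hard-coded in a few lines. Copycat itself discovers rules like this through flexible, stochastic concept-mapping rather than fixed code; this sketch only illustrates the puzzle format:

```python
# A drastically simplified sketch of one letter-string analogy:
# "abc -> abd" can be read as "replace the last letter with its
# alphabetic successor." Copycat discovers such readings on its own;
# here the rule is hard-coded purely for illustration.
def increment_last(s):
    return s[:-1] + chr(ord(s[-1]) + 1)

print(increment_last("abc"))  # abd
print(increment_last("ijk"))  # ijl
```

Note that this rigid rule breaks on a string like "xyz" (the successor of "z" is not a letter), exactly the kind of case where Copycat's context-sensitive reasoning lets it answer creatively while fixed rules fail.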

Deep Learning

A subset of machine learning, deep learning uses layered neural networks to extract patterns from massive datasets. Mitchell recognizes deep learning’s transformative achievements (such as speech recognition and image classification) while emphasizing its limitations, including opacity (“black box” structures), brittleness, and lack of humanlike understanding. Deep learning represents modern AI’s power as well as its constraints.

Intuitive Physics

Intuitive physics is the innate or early-learned human ability to understand object behavior (motion, collisions, occlusion, gravity) without formal instruction. Mitchell uses intuitive physics to show the extent to which human reasoning is embodied and experience-based, and to emphasize why training a neural network on static images does not automatically yield similar knowledge. The concept illustrates the gap between raw pattern recognition and cognitive grounding.

Long-Tail (Edge Case) Problem

The long-tail problem refers to the vast number of rare, unusual, or unforeseen scenarios that AI systems encounter in real-world environments. Mitchell uses self-driving cars as an example to show how rare “edge cases” can cause catastrophic failures when systems rely solely on learned correlations. The term emphasizes why scaling data alone is insufficient for robust intelligence.

Mental Model

A mental model is an internal cognitive representation that humans use to simulate situations, predict outcomes, and reason about cause and effect. Mitchell explains that humans continuously run mental simulations (consciously or unconsciously) when interpreting stories, navigating environments, or reading about hypothetical events. AI systems currently lack such flexible, generative modeling, highlighting an important element that is missing from machine intelligence.

Metaphor (Lakoff and Johnson’s Framework)

In the cognitive-linguistic sense, metaphor is the mapping of a concrete domain (like physical movement, warmth, or money) onto an abstract one (like relationships, emotion, or time, respectively). Mitchell draws on this framework to argue that metaphor is not decorative but foundational to human thought, supporting her broader claim that meaning is grounded in embodied experience. This concept helps explain why language understanding remains difficult for systems that lack bodies or perceptual grounding.

Neural Network

A neural network is a computational architecture inspired loosely by biological neurons, capable of learning from data by adjusting connection weights. Mitchell describes neural networks’ capabilities while also critiquing their limitations, particularly their lack of transparency and inability to generalize beyond training distributions. Neural networks are the backbone of modern AI, but also a reminder of how different artificial and biological intelligence are.
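The weight-adjustment idea can be shown at its smallest scale: a single artificial "neuron" fitting a line. All numbers are invented, and real networks stack many such units in layers, but the learning rule is the same in spirit:

```python
# A minimal sketch of learning by weight adjustment: one "neuron"
# (a weight and a bias) nudges its parameters to shrink its error.
# Data and constants are invented for illustration.
w, b = 0.0, 0.0                              # connection weight, bias
lr = 0.1                                     # learning rate
data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]  # examples of y = 2x

for _ in range(200):             # repeated passes over the data
    for x, y in data:
        pred = w * x + b         # forward pass: compute a prediction
        err = pred - y           # how wrong was it?
        w -= lr * err * x        # adjust weight to reduce the error
        b -= lr * err            # adjust bias the same way

print(round(w, 2), round(b, 2))  # close to 2.0 and 0.0
```

After enough passes the neuron recovers the underlying relationship, but only for inputs resembling its training data, which is exactly the generalization limit Mitchell describes.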

Symbolic AI

Symbolic AI is an approach that represents knowledge explicitly using rules, logic, and structured symbols. Mitchell contrasts symbolic AI with deep learning to show the strengths and weaknesses of each: Symbolic systems excel at structured reasoning but falter with perception, while deep learning systems show the opposite pattern. The tension between these paradigms underlies many chapters and frames the search for hybrid approaches that result in more humanlike behavior.
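The symbolic style can be sketched as explicit if-then rules applied by forward chaining. The facts and rules below are invented toy examples of the kind of hand-built knowledge base symbolic systems use:

```python
# A minimal sketch of symbolic AI: knowledge as explicit rules over
# symbols, applied by forward chaining until no new facts emerge.
# The facts and rules are invented for illustration.
facts = {"bird(tweety)"}
rules = [
    ("bird(tweety)", "has_wings(tweety)"),
    ("has_wings(tweety)", "can_fly(tweety)"),
]

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)        # derive a new fact
            changed = True

print(sorted(facts))
```

Every inference step here is transparent and inspectable, the strength of symbolic systems, but nothing in the program connects the symbol "bird" to actual perception, the weakness Mitchell contrasts with deep learning.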

Turing Test

Alan Turing’s proposed criterion for machine intelligence, the Turing Test, evaluates whether a human judge can distinguish a machine’s responses from a human’s in conversation. Mitchell revisits the test to show why passing it would require far more than statistical fluency; it would require understanding, reasoning, and commonsense knowledge. In the book, the test represents an early vision of AI’s goals and a benchmark that remains unmet.

Winograd Schema Challenge

The Winograd Schema Challenge is a test of commonsense reasoning that requires a system to resolve pronoun referents based on subtle contextual cues. Mitchell highlights the challenge to show that even advanced neural models fail at tasks that humans solve easily, pointing to the fragility of machine “understanding.” The schema is a recurring example of the gap between linguistic performance and genuine comprehension.
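The format is easiest to see in a classic schema pair (the trophy/suitcase example from Hector Levesque's formulation of the challenge), laid out here as data:

```python
# A classic Winograd schema pair: flipping one word ("big"/"small")
# flips the pronoun's referent, and only commonsense knowledge about
# size tells you which is which.
schema = [
    ("The trophy doesn't fit in the suitcase because it is too big.",
     "trophy"),
    ("The trophy doesn't fit in the suitcase because it is too small.",
     "suitcase"),
]

for sentence, referent in schema:
    print(f"'it' = the {referent}: {sentence}")
```

No surface statistic distinguishes the two sentences; resolving "it" requires knowing how containers and contents relate, which is why the challenge probes understanding rather than pattern-matching.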

