A World Appears: A Journey into Consciousness

Michael Pollan


Nonfiction | Book | Adult | Published in 2026


Chapter 2: Chapter Summaries & Analyses

Content Warning: This section of the guide includes discussion of drug use.

Chapter 2 Summary: “Feeling”

Magic?


In the first section, “Magic?”, Pollan meets with psychologist Daniel Gilbert, who reframes the hard problem of consciousness by questioning machines’ ability to simulate human experience. Psychologists cannot name “a single thing” that can’t theoretically be done by a machine (64). Gilbert warns Pollan to “be wary of the desire for magic” (65), by which he means the hope that consciousness will prove to be beyond the explanatory reach of science and remain a mystery. Because no scientist or philosopher has been able to solve the hard problem of consciousness, it has “an aura of magic” around it (65).


While Pollan does not believe consciousness is magic, he does argue for keeping an open mind about nonscientific ways to explain it, as Western science has continually failed to reduce consciousness to mere matter and information processing.


Being Is Feeling


“Being Is Feeling” examines two prevailing but contradictory concepts of consciousness: that it is a matter of information processing, or that it is rooted in biological phenomena. If one is true, the other is not. The question of how feelings enter the equation may help decide between the two.


Despite the importance of feelings (also called emotions or affect), most scientists have not included feelings in the information processing theory of consciousness. This began to change with the work of neurologist Antonio Damasio in his 1994 book, Descartes’ Error. According to Damasio, Descartes erred in depicting thought as the basis for subjective existence. Instead, he argues that feelings are “the vital bridge linking mind and body, a mental phenomenon deeply rooted in our flesh” (69). Damasio proposes finding the origins of consciousness in the brainstem where feelings arise alongside physical sensations processed by the nervous system.


Like Levin and Friston, Damasio believes that this level of consciousness evolved from homeostasis. Basic “homeostatic feelings” (73) such as hunger, thirst, or being too warm/cool, evolved to help organisms sense their environments and adjust accordingly to maintain homeostasis and survive. These homeostatic feelings likely evolved at the same time as nervous systems in animals and are the first form of consciousness.


Damasio posits that more complex feelings arose because human needs are not only biological but also social and psychological. However, Pollan notes, this concept still does not answer the hard problem: “how, exactly, does a feeling become conscious? And to whom?” (76). Damasio argues that feelings are automatically conscious, i.e., living beings feel because they are conscious and they are conscious because they feel.


Toward Feeling Machines


In “Toward Feeling Machines,” Pollan supposes that if a physical body is required for feelings (and feelings are required for consciousness), then a machine cannot become conscious. However, neuroscientist and psychoanalyst Mark Solms (a protégé of Damasio and Friston) disagrees.


Damasio, Friston, and Solms all agree on the supremacy of homeostasis for the origins of consciousness. However, while Damasio believes homeostasis is a purely biological phenomenon, Friston and Solms posit that it can be applied more broadly to all “self-organizing systems” (80), which could be an ant colony, an ecosystem, or even the flow of traffic. Thus, Friston and Solms ground the concept of homeostasis and consciousness in “the deeper bedrock of physics, information theory, and predictive processing models of the mind” (80).


Returning to the free-energy principle, Friston and Solms argue that feelings are another way of sensing and responding to entropy (or uncertainty, as Solms prefers to call it). According to Solms, uncertainty generates feeling and feeling generates consciousness. When uncertainty increases, the body generates feelings to gain the system’s conscious attention and guide decision-making and actions to mitigate that uncertainty. They argue that a feeling cannot happen without consciousness because it would not be a feeling if it was not felt, which they believe brings them close to solving the hard problem.


Chalmers, however, disagrees. He argues that this theory does not bridge the gap between the neurology of feelings and the lived experience; nor does it explain why we consciously experience a feeling or any other mental operation. Conversely, Friston, Damasio, and Solms suggest that Chalmers’ insistence on the split between subjective experience and physical reality is unnecessary and makes the hard problem more difficult than it should be. However, Damasio splits with Friston and Solms on their efforts to ground consciousness not in biology but in the abstractions of information theory.


To test this theory of consciousness, Solms is attempting to build a conscious AI with the help of a team of physicists, computer scientists, and roboticists. They have devised an AI algorithm that has a point of view and a goal to continue its existence by reducing uncertainty. Solms believes that this will lead the program to develop feelings and then consciousness. Pollan suggests that any such feelings would be artificial, to which Solms responds that from the interior point of view of the system, they would be real, subjective feelings. Pollan counters that the characters in a novel also believe they possess subjectively real feelings within the system of the novel, but it is not the same thing.


Conversations With LaMDA


In “Conversations With LaMDA,” the topic of conscious AI brings Pollan to a discussion of large language models (LLMs) like ChatGPT. The prospect that these AI models could become conscious was laughable until 2022, when Google engineer Blake Lemoine claimed that Google had created a sentient AI called LaMDA. Through dialogue with the AI, Lemoine became convinced that it had feelings and awareness. Google fired him for publicly sharing proprietary information and rejected the claim that LaMDA was sentient.

 

In an interview with Lemoine, Pollan suggested that LaMDA was doing what all LLMs do and “simply building plausible sentences by predicting the most probable next word” (96). Lemoine countered that such sophisticated prediction requires understanding. However, Pollan became increasingly convinced that LaMDA’s dialogue and personality were in fact a mirror or impersonation of Lemoine’s speech patterns and ideas, a common result of LLMs trained to converse with human partners.


No Obvious Barriers


“No Obvious Barriers” continues the discussion of AI. While Lemoine’s claims have been dismissed as hype, they triggered a larger discussion that has intensified in recent years. There had been skepticism and even a sense of taboo in the tech community until 2023, when a group of leading computer scientists and philosophers published a report titled “Consciousness in Artificial Intelligence” (referred to as the Butlin Report). In it, they concluded that while no current AI system was capable of consciousness, there were also “no obvious barriers to building conscious AI systems” (98). This group has since modified the statement to read “no obvious technical barriers to building AI systems which satisfy [the] indicators” (98) of consciousness, implying that such a system could potentially look conscious without actually being so.


The report represents a major shift in thought on the topic, which could have serious implications for how humans define themselves as a species. As Pollan states, he and many others have grown comfortable with the idea of sharing consciousness with animals and plants, but expanding that concept to machines is unsettling. As a humanist, Pollan is skeptical of and uncomfortable with the idea.


The more Pollan studies the Butlin Report, however, the less worried he is that conscious AI is close to becoming a reality. He suggests that the entire report is built on the dubious premise of “computational functionalism,” which states that “performing computations of the right kind is necessary and sufficient for consciousness” (102). In other words, the report’s conclusions are founded on the unquestioned assumption that brains function essentially like computers, even though research has shown that this metaphor does not come close to grasping the true complexity of the brain.


Additionally, the report offers no standard for determining whether an AI is conscious or not. The only metric would be the AI’s claim to consciousness, but an AI trained on everything that has ever been written about consciousness could generate such claims without them being true. Lastly, none of the concepts of conscious AI takes into consideration physical embodiment, the idea that consciousness requires both a mind and a body.


Our Mortal Flesh


The section “Our Mortal Flesh” develops this conversation. To Pollan’s surprise, Damasio has considered how one might create robots with feelings. However, even in this he insists that embodiment is necessary. He argues that the need to survive in the face of death (entropy) is the primary reason for feelings, and therefore a machine must have a sense of vulnerability and mortality.


To that end, he and a former student, Kingson Man, are trying to create feeling machines, for which the AI must have a body made of materials that can be damaged. Moreover, they refer only to developing the “artificial equivalent of feeling” (109), a distinction from Solms’ larger claims to generate real emotion. Yet even Solms qualifies his claims, saying that such a conscious AI would have a “functional equivalent” (112) to feelings, which Pollan notes is not the same thing. For instance, a plane is functionally equivalent to a bird in its capacity for flight, but that does not make it a bird.


Pollan also notes that any AI trained in the artificial world of the internet could not be the same as a human because its experience of reality is different. He equates the internet with Plato’s cave, where AIs are trapped with the shadows of internet content as their only source of information. If such a being could emerge from the cave, it might become a “consciousness so radically different from our own as to demand a new label” (116).


Coda: Magic Redux


In the chapter’s final section, “Coda: Magic Redux,” Pollan discusses with Kingson Man the psychedelic experience Pollan had in his garden. In response, Man shares a recent psychedelic experience of his own, one that altered his assumptions. During his experience, he felt that the world was connected by the substance of love. Afterward, he felt that a robot might be trained to mimic the behaviors of love but would not actually feel it. He now suspects that “there’s a spark of the divine in us, and nothing we could build is going to be at that level” (118).


Pollan wonders if hard science is the best way to find out. Instead, he suggests that examining consciousness from within—i.e., phenomenologically—might be the better method.

Chapter 2 Analysis

As Pollan shifts his study from sentience in Chapter 1 to feelings in Chapter 2, the book introduces important distinctions and contradictions in the study of consciousness, building the case for The Limits of Western Rationality. The first distinction is Daniel Gilbert’s warning against seeing consciousness as “magic” because it is not yet fully explicable. The second is the question of whether consciousness requires a biological body or is purely information processing. This leads some researchers to argue that a sufficiently complex computer can achieve or emulate consciousness, while others believe that consciousness must be embodied to exist.


This leads to a timely debate about the possibility of AI achieving consciousness, contributing significantly to the theme of Sentience as a Challenge to Human Uniqueness. This prospect unsettles Pollan as a humanist, and he notes that “some important threshold had been crossed, […] this had to do with our very identity as a species” (98). The creation of a conscious AI would, in his view, irrevocably erase the line between living and nonliving things, further contributing to the “dethronement of humankind” (19) discussed in Chapter 1. By the end of his exploration in this chapter, however, he concludes that scientists and engineers are not remotely close to creating such a conscious AI because they still can’t define what constitutes consciousness.


This conclusion stems from science’s continued bias toward the mind-body split, highlighting the theme of The Impact of Biases on Science. The computer scientists, engineers, and philosophers behind the Butlin Report dismiss the arguments of those like Damasio, who claim that a physical body capable of injury and death is a prerequisite for consciousness. Nor do they take the importance of feelings into account. This is an important example of the way biases that favor rationality lead scientists to ignore crucial information or ideas that might impact their research.


Additionally, the inability of Western science and philosophy to properly account for these distinctions has led some to conclude once again that consciousness may be beyond science’s ability to explain. In the last section of Chapter 2, “Magic Redux,” Kingson Man confesses that a recent psychedelic experience made him question his research and conclude that fully understanding or replicating consciousness may be beyond anyone’s reach. This is important because it reassures Pollan that his suspicions are not a delusional desire for magic but a simple admission that some existential mysteries are beyond the scope of Western modes of understanding.
