Distributed responsibility is a philosophical theory that challenges the common view that any single person or action is responsible for an outcome. The theory holds that in systemic or complex issues, like climate change, too many variables and decisions are involved at the consumer, producer, and government levels to assign blame to a single point of success or failure. The principle is both descriptive and prescriptive: It describes how accountability for systemic issues is shared, and it prescribes that no one person or entity should shoulder the burden of global issues alone (“Distributed Responsibility.” Sustainability Directory, 1 Apr. 2025).
Culpability illustrates arguments on both sides of this principle through its discussion of AI, which is increasingly integrated into human life. The self-driving minivan collision acts as a case study for assigning blame in situations involving both humans and AI: Neither can be named the sole culprit, yet one must take legal responsibility so that the Drummonds can have justice. The text shows that neither Charlie nor the SensTrek minivan can entirely shoulder the blame for the crash. Before the collision, Charlie and his family made a complex string of decisions in the cabin of the van that produced the circumstances in which a crash could occur. No single action by any family member caused the crash in isolation; in combination, their choices created its dangerous conditions. Similarly, the AI, a non-sentient, unfeeling machine, made its decisions and disabled its auto-drive based on algorithmic calculations developed and programmed by humans long before the Cassidy-Shaws purchased the machine and drove it on the highway to Delaware. The text posits that societal systems of accountability, both legal and social, will have to adapt to the world of AI, acknowledging that its use complicates the easy designation of blame.
Lorelei’s fictional book, Silicon Souls: On the Culpability of Artificial Minds, examines the prescriptive side of this philosophy: that people should be aware of how AI technology is produced and consumed so that society as a whole can take responsibility for how it functions in the world. Lorelei believes that AI can make the world safer, but only if producers create morally sound, trustworthy AI and consumers avoid over-relying on these systems to excuse their own poor behavior. She argues that this collaborative accountability will ensure that AI operates as intended: improving human life, not complicating it. Lorelei struggles with this on the individual level throughout the text: She feels like the only person who fully understands the consequences of AI technology, so she shoulders, alone, the work of ensuring that AI is programmed with morality. She learns, however, that in the world she envisions, society needs to be just as aware as she is, so she ends the text by writing Silicon Souls and sharing the burden of knowledge and responsibility with others.