Michio Kaku
Kaku’s interest in science began in childhood, when he was fascinated by the futuristic technology of science fiction and inspired by learning about Albert Einstein’s unfinished work. In high school, he was awarded a scholarship to study physics at Harvard University after displaying his homemade particle accelerator at the National Science Fair. As a career physicist and one of the pioneers of string field theory (SFT), he now works on completing Einstein’s Theory of Everything (see: Index of Terms), which could be the key to definitively identifying the limits of possibility.
Investigating supposedly impossible technologies and phenomena has long been an important part of the process of scientific advancement. The quest to create an impossible perpetual motion machine (see: Part 3, Chapter 14) led to the founding of the field of thermodynamics, while the science fiction work The World Set Free (1914) by H. G. Wells (1866-1946) inspired the inventors of the first atomic bomb. Stephen Hawking (1942-2018) advanced our modern understanding of space-time by trying (unsuccessfully) to prove that time travel was mathematically impossible (see: Part 2, Chapter 12). As scientists develop a better understanding of the laws of physics, many things that were once thought impossible become reality. The scientific community once mocked Robert Goddard (1882-1945) for his pioneering work on rockets, only to experience lethal refutation via devastating bombardments from German V-2 rockets during World War II. Kaku himself was taught in school that both plate tectonics and the theory that a meteor wiped out the dinosaurs were nonsense, but both have since been proven true.
Kaku considers it meaningful to divide so-called “impossibilities” into three categories. The first category, Class 1, covers technology that is currently impossible to produce but that is scientifically feasible and being developed in some form. Although future technology is difficult to predict, Jules Verne (1828-1905) proved in his startlingly accurate Paris in the 20th Century (written in 1863) that a firm grasp of science can guide remarkably accurate prediction. As physicists better understand the laws of the universe, they can more accurately distinguish between what is impossible and what is merely difficult or improbable. Kaku believes that Class 1 impossibilities will likely become possible within a few decades or centuries of his writing in 2008. Class 2 impossibilities lie at the edge of current scientific understanding but might be achievable for civilizations that are millennia more advanced than humanity. Class 3 impossibilities are technologies that appear to be truly impossible because they violate known laws of physics. They would be feasible only if a fundamental shift occurred in our current understanding of how the universe works.
Force fields are thin, invisible, impenetrable shields that can be raised or lowered at a moment’s notice. Sci-fi works such as Star Trek commonly depict force fields, which would undoubtedly revolutionize all areas of modern civilization (from warfare to construction to environmental management) were they invented today. The idea of force fields comes from the work of 19th-century British scientist Michael Faraday (1791-1867). A self-made man of humble origins, Faraday is best known for his discovery of electromagnetic induction and his drawings showing how lines of electrical and magnetic force work across space.
The universe has four fundamental forces—gravity, electromagnetism (EM), the weak nuclear force, and the strong nuclear force—but none of these is particularly suitable to harness as a sci-fi-style force field. Gravity is attractive and extremely weak; the weak nuclear force merely governs radioactive decay; and the strong nuclear force is too tightly bound to the properties of an atomic nucleus and operates only over extremely short ranges. The EM force is easily neutralized by insulators and cannot be easily focused into a plane. A force field would therefore have to be composed of more than pure force.
A plasma window is a gas of ionized atoms (plasma) that electric and magnetic fields mold into a thin sheet. Plasma windows are currently used to isolate a vacuum from the air in industrial welding processes, but a hotter, more powerful version could be used as a shield capable of vaporizing projectiles. This could be stacked alongside additional technologies such as a woven lattice of carbon nanotubes and a curtain of laser beams to create an invisible and impenetrable defense similar to a sci-fi force field. Another layer implementing advanced photochromatic technology beyond our current capabilities (perhaps constructed using nanotechnology) would be needed to deflect laser beams.
In addition to their defensive capabilities, force fields in science fiction are used as platforms to defy gravity. Magnetism can mimic this effect, since like magnetic poles repel each other. This technology is already in use to lift heavy weights without friction (for instance, in magnetic levitation, or maglev, trains), although powering and stabilizing such systems is expensive. A better system could use superconductors, materials in which electrical resistance falls abruptly to zero once the material is cooled below a critical temperature. A common property of superconductivity is the Meissner effect, whereby the material expels magnetic fields and thus repels magnets. At the time Kaku was writing, all known superconductors required extreme cooling: most demand costly liquid helium, and even the rare “high-temperature” ceramic superconductors must be chilled with liquid nitrogen, making them expensive for general use. Research into room-temperature superconductors is ongoing; such a discovery would enable levitation to become an inexpensive, everyday technology. Ultimately, it seems likely that force field technology will be available in a modified form within a century or so.
Invisibility, one of the oldest and most widespread feats in mythology and fiction, features in innumerable science fiction works. Although physicists have long dismissed invisibility as impossible according to the laws of optics, recent advances in “metamaterials” have led many scientists to reevaluate that stance.
Modern understanding of optics originates from the discoveries of Scottish mathematician James Clerk Maxwell (1831-1879). He produced eight differential equations expressing Faraday’s force fields, showing how electric fields can turn into magnetic fields and vice versa. In 1864, Maxwell speculated that light was an electromagnetic disturbance (one of the biggest scientific breakthroughs of all time). Together with atomic theory, Maxwell’s theory of light explains why some objects are opaque (because their molecules are too close together for light to pass through), while others, like gases and crystals, are translucent or transparent and can be seen through (because the spaces between their molecules are larger than the wavelength of light, or their lattice-like atomic structure allows light to pass through). Since invisibility is a property that occurs at an atomic level, it is extremely difficult to replicate by normal means.
Metamaterials are complex artificial substances (layered ceramics with embedded implants, for instance) with optical properties that do not occur in nature. These substances manipulate their index of refraction, changing the speed and thus the angle of the electromagnetic waves passing through them. A material with a negative index of refraction can bend light around an object, rendering it invisible. Scientists have already created objects invisible to microwaves, and although the shorter wavelengths of visible light add another challenge, requiring manipulation on an atomic level, research is progressing to overcome it. Computer miniaturization already uses ultraviolet radiation to etch small components onto photosensitive wafers of silicon in a process called photolithography, and the same technology has been used to create a metamaterial that operates in the visible light spectrum. Further advances in the miniaturization of computer components, such as the development of photonic crystals that use light rather than electricity to process information, could likewise contribute to invisibility technology. Plasmonic technology aims to bend visible light on a nanoscale by compressing waves of loosely bound electrons, called plasmons, across the surface of a metal.
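The refraction at work here can be stated compactly. A minimal sketch using textbook optics (these equations are standard definitions, not quoted from Kaku's book or this summary):

```latex
% Index of refraction: the ratio of light's speed in vacuum, c,
% to its speed v inside the material.
\[ n = \frac{c}{v} \]
% Snell's law: how a ray bends when crossing a boundary between
% media with indices n_1 and n_2.
\[ n_1 \sin\theta_1 = n_2 \sin\theta_2 \]
```

If \(n_2\) is negative, \(\theta_2\) changes sign: the refracted ray bends to the same side of the normal as the incident ray, which is what allows a suitably shaped metamaterial shell to steer light around an enclosed object.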
Nanotechnology is central to developing invisibility because wavelengths of light must be manipulated on an atomic scale. Atomic machines are difficult but possible to make by using a scanning tunneling microscope to map out individual atoms and a probe to move molecules one at a time in a highly time-consuming process. More complicated atomic machines are in development as scientists study and emulate natural processes found within living cells. The development of invisibility technology is advancing rapidly, piggybacking on commercially funded research into post-silicon digital technology. Kaku believes that some form of practical invisibility will be produced within decades, although the first iteration of such a shield will likely be solid and opaque, since bending light in three dimensions would require meticulous stacks of silicon wafers with complex arrays of embedded nano implants.
A two-dimensional form of optical camouflage already exists, wherein video footage of the scene behind an object is projected onto a screen or a cloak of reflective beads in front of the object. The illusion is convincing in still images but breaks down as the viewer’s perspective shifts. Projected holograms could provide a convincing three-dimensional optical camouflage, but massive technological hurdles remain: a holographic camera capable of capturing at least 30 frames per second, software capable of storing and processing the information, and projection equipment to present the image. H. G. Wells’s classic sci-fi novel The Invisible Man (1897) proposes a more sophisticated but significantly less viable form of invisibility: The protagonist becomes invisible by changing his body’s refractive index to match that of the surrounding air.
Beam weapons feature in mythology and in sci-fi works such as Star Wars, and they are feasible because there is no theoretical limit to the amount of energy that can be concentrated into a beam of light. In ancient times, Archimedes reportedly focused the sun’s rays onto the sails of enemy ships to set them on fire, and during World War II, Nazi scientists experimented with weapons based on beams of focused sound waves.
Before the quantum revolution of the early 20th century, even laser beams were considered impossible. Pioneering scientists such as Albert Einstein (1879-1955) and Max Planck (1858-1947) discovered that the photoelectric effect occurred because light moved in discrete quanta called “photons.” Combined with the atomic theory pioneered by Niels Bohr (1885-1962), which recognized the electron as a particle with wavelike properties, this allowed physicists to predict and influence the behavior of atoms and quantum particles. Lasers work by pumping energy and light into a special medium until it produces coherent photons with the same wavelength and energy signature. The added energy pushes electrons into outer atomic shells, leaving the material unstable; when photons from the light collide with these excited electrons, the electrons drop to lower shells and emit new photons. The resulting cascade releases coherent photons as a laser beam. The first coherent radiation beams were masers, which used microwaves rather than visible light.
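The photon picture behind lasing can be summarized with the Planck-Einstein relation (standard quantum physics, not spelled out in the summary):

```latex
% Energy of a single photon of frequency \nu (h is Planck's constant):
\[ E = h\nu \]
% In a laser, an electron dropping from an excited level E_2 to a
% lower level E_1 emits a photon whose frequency is fixed by the gap:
\[ h\nu = E_2 - E_1 \]
```

Because every such transition releases exactly the same energy difference, the emitted photons all share a single wavelength—the coherence that defines a laser beam.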
Today, lasers are in common everyday use, and new types are developed all the time. The main types are gas lasers, chemical lasers, semiconductor lasers, and dye lasers. Ray guns are a feasible technology, barring two obstacles: maintaining the stability of the lasing material and creating a portable power pack capable of producing enough energy. Similarly, given a suitable power source, a light saber could be created fairly easily from a telescoping plasma torch of superhot ionized gas. These are therefore Class 1 impossibilities.
A Death Star capable of destroying a planet would require a hugely powerful laser, possibly powered by a fusion machine that converts mass to energy by combining hydrogen atoms into helium nuclei. Projects are underway to attempt to harness the power of fusion by focusing lasers onto a pellet of hydrogen-rich lithium deuteride or onto a plasma of hydrogen gas confined in a magnetic field. Even the most powerful of these reactors is far from capable of providing the energy needed to power a Death Star.
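The mass-to-energy conversion underlying fusion is Einstein’s relation; as one standard illustration (the numbers are textbook values, not drawn from the summary), deuterium-tritium fusion releases about 17.6 MeV per reaction:

```latex
% Einstein's mass-energy equivalence: the energy released equals the
% mass lost (\Delta m) times the speed of light squared.
\[ E = \Delta m \, c^2 \]
% Deuterium-tritium fusion, the easiest fusion reaction to ignite;
% roughly 0.4% of the reactants' mass is converted to energy.
\[ {}^{2}\mathrm{H} + {}^{3}\mathrm{H} \;\rightarrow\; {}^{4}\mathrm{He} + n + 17.6\ \mathrm{MeV} \]
```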
Alternatively, a hydrogen bomb could be made powerful enough to destroy a planet. The mechanisms for the hydrogen bomb were developed by Kaku’s mentor Edward Teller (1908-2003) during the Cold War. Teller also conceived of X-ray lasers: self-destructing devices that channel the radiation of a nuclear explosion through copper rods, which serve as the lasing material, to produce a high-powered beam of X-rays. A hydrogen bomb consists of a fission bomb of uranium-235 surrounded by a container of lithium deuteride. As the fission bomb explodes, its blast is preceded by a wave of X-rays that heats the lithium deuteride to the point of fusion, causing a second, much bigger explosion. This second explosion could in turn be focused on another piece of lithium deuteride, and so on, stacking the effects of theoretically unlimited, increasingly powerful explosions. Thousands of such X-ray lasers would need to be fired simultaneously to create a planet-destroying beam.
Another option would be to harness a hypernova, an intermediate stage in a star’s transformation into a black hole that releases incredibly powerful jets of radiation from both poles. If the axis of a hypernova could be manipulated, then the beam could be aimed and could undoubtedly destroy a planet. Regardless of the methodology, the logistical issues of creating a beam weapon capable of destroying a planet make it a Class 2 impossibility.
Teleportation, or the ability to travel long distances instantly, has featured in religious and mythological stories since ancient times and was popularized as a staple science-fiction technology in the TV series Star Trek.
Quantum theory overturned many rules of conventional Newtonian physics, including the assumed impossibility of teleportation. Erwin Schrödinger (1887-1961) showed that electrons had wavelike properties, which Werner Heisenberg (1901-1976) explained as waves of probability—the likelihood that an electron would be found in any particular place at any particular time—and codified into his uncertainty principle. The uncertainty principle states that it is impossible to know both the exact velocity and the exact position of an electron at the same time, controversially introducing probability into physics. Teleportation occurs in nature on a quantum level, as when atoms share electrons in a molecule, but quantum teleportation is not generalizable to large objects because probabilistic motions on an atomic level average out. Key evidence supporting quantum theory and the role of probability was found through the Einstein-Podolsky-Rosen (EPR) thought experiment, which ironically strove to disprove the uncertainty principle. The experiment showed that it is possible to instantaneously transfer information between coherent electrons because “entangled” electrons vibrating in unison remain connected and can affect each other regardless of the distance between them. This enables the transfer of properties between the electrons, effectively teleporting atomic states across space.
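Heisenberg’s uncertainty principle, paraphrased above, has a precise mathematical form (standard notation, not given in the summary; ℏ is the reduced Planck constant):

```latex
% The product of the uncertainties in position (\Delta x) and
% momentum (\Delta p) can never fall below a fixed quantum limit:
\[ \Delta x \, \Delta p \;\geq\; \frac{\hbar}{2} \]
```

Since momentum is mass times velocity (\(p = mv\)), pinning down a particle’s position ever more precisely forces its velocity to become ever more uncertain, and vice versa.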
Much research into quantum teleportation piggybacks on developments in the far more lucratively funded and commercially viable field of quantum computing. Quantum computers have the potential to advance far beyond the physical limits of silicon digital technology because, unlike conventional computers, which calculate using a binary system of 0s and 1s, quantum computers use qubits, which can occupy any superposition of 0 and 1. They function by using lasers to manipulate the spins of atoms in a magnetic field, although maintaining coherence across large numbers of atoms makes all but the simplest calculations a challenge.
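The qubit idea can be written out explicitly (standard quantum-computing notation, not drawn from the summary):

```latex
% A qubit is a superposition of the basis states |0> and |1>,
% weighted by complex amplitudes \alpha and \beta:
\[ |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
   \qquad |\alpha|^2 + |\beta|^2 = 1 \]
```

Measurement yields 0 with probability \(|\alpha|^2\) and 1 with probability \(|\beta|^2\); a register of \(n\) entangled qubits spans \(2^n\) amplitudes at once, which is the source of a quantum computer’s potential power.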
Likewise, maintaining the coherence needed to entangle large numbers of electrons is the most difficult aspect of using quantum teleportation on large objects. A Bose-Einstein Condensate (BEC) is a form of matter that functions like a giant super atom at temperatures approaching absolute zero, because all its atoms fall into their lowest energy state. BECs could be the key to unlocking teleportation on a macro level, although the fact that BECs are hard to produce and have odd properties has hampered efforts to do so. If a beam of matter is sent into a BEC, the atoms in the matter fall into their lowest energy state, releasing energy as light. This light, which carries all the quantum information of the beam of matter, can travel across a fiber optic cable to hit another BEC in a different location. This then converts the light back into the original matter beam, which has effectively been teleported. The teleportation of complex molecules is a Class 1 impossibility that should soon be possible, but teleporting complex organisms such as human beings is significantly less straightforward and is therefore a Class 2 impossibility.
Telepathy is the ability to read and control the minds of others. Throughout history, countless people and animals have been claimed to possess such powers innately; some fooled scientists and the general public, but all were ultimately proven false. Since the 19th century, the Society for Psychical Research has explored reports of telepathy and extrasensory perception (ESP) but has never found a convincing case with replicable results under scientific conditions. During the Cold War, the CIA invested significant resources in a project called Star Gate, which aimed to weaponize telepathy and ESP, without any significant successes. Some humans, including gamblers, have extraordinary skill in reading minute facial cues to infer the thoughts of others, and machines exist that use lasers to track eye movements and pupil dilation, revealing where a person is looking and what emotions the stimuli elicit. Lie detectors are the simplest and most verifiable machines designed to interpret thoughts, but classic models that measure blood pressure are notoriously unreliable.
Tiny electrical signals in the brain transmit thoughts, but these signals are incredibly weak and complicated, and cannot be transferred to other people. Positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) scans can provide some information on brain activity by showing blood flow to different sections of the brain as thoughts occur. A lie detector using MRI scans to show the increased brain activity associated with lying has been developed, but it is unreliable and prone to giving false positives, particularly for people with anxiety or memory difficulties. A universal thought translator that could directly communicate the thoughts of one person into the mind of another would need to record and interpret the signals from each of the billions of neurons in the brain, making it a Class 2 impossibility.
Brains function more like neural networks than digital computers, constantly rewiring themselves as they learn new information and develop. Although different areas of the brain are associated with different functions, the signals associated with specific thoughts are not confined to any single region; instead, they bounce around the brain in wave patterns. fMRI machines can help detect and record the broad strokes of these patterns, providing some insight into the general outline of thoughts. Some researchers are attempting to compose a lexicon associating certain wave patterns with particular words, although progress is slow and painstaking. Other researchers have noted that particular patterns correspond to specific simple movements.
The large size and high cost of fMRI machines limit their usability as telepathic aids. Research is underway to determine the feasibility of replacing them with supersensitive atomic magnetometers capable of detecting minute changes in the brain’s magnetic fields, which could be a gateway to handheld brain scanners. Radio waves can be beamed into the brain to excite certain areas, and stimulating certain brain regions with electrodes can affect a person’s state of mind; researchers can already induce hallucinations, psychosis, and religious feelings in subjects. As more regions of the brain are mapped and understood in greater depth, more detailed information about thoughts may become discernible from brain scans. Neural networks could potentially help analyze the large volumes of data from brain scans to recognize the wave patterns associated with different thoughts, stimuli, and movements. Rudimentary forms of machine-aided telepathy capable of affecting a person’s state of mind and emotions (and of reading the broad strokes of a person’s thoughts) are therefore a Class 1 impossibility.
This section establishes Kaku’s writing style, which carries throughout the rest of the book, ensuring that all parts and chapters flow without jarring tonal or mood shifts. He assumes an upbeat and informal tone far different from his writing style in academic papers and textbooks. The informality is characteristic of the popular science genre because works in this genre seek to appeal to a general audience, and an accessible style prevents the complex, even intimidating, subject matter from alienating readers. In addition, Kaku frequently uses rhetorical questions to engage readers, encourage speculation, and create an interactive, conversational atmosphere rather than one that is bland or didactic.
The book opens with a Prologue, in which Kaku introduces himself, the premise of the book, and some of his methodology. He quickly establishes himself as a scientific authority by discussing his childhood feat of building a particle accelerator, by name-dropping (his childhood hero was Albert Einstein, and his mentor was the famous Edward Teller), and by alluding to his high-level education at Harvard. At the same time, he humanizes himself and shows humility by describing his love of science fiction media and his working-class background. These autobiographical anecdotes add to Kaku’s personal brand as a science communicator while also creating a personal, human connection with readers. He introduces some of the many sci-fi technologies that he examines in later chapters, building tension and interest while providing a roadmap for the rest of the book. Additionally, he clarifies his classification system for impossibilities, explaining his reasoning and introducing shades of gray to the idea of what “impossible” means. This takedown of the notion of an absolute standard of impossibility is central to Kaku’s thematic presentation of The Expanding Limits of the Possible in Scientific Discovery. Reinforcing the primacy of this theme is his decision to make Class 1 impossibilities (that is, the technology most likely to become possible within the foreseeable future) the topic of Part 1: fictional technologies that are not currently possible and that many still consider impossible, but that are in active development.
Early in each chapter, Kaku provides background on the history of the “impossible” ability or technology in mythology, popular culture, and literature. Often, stories about the feat in question, such as telepathy or invisibility, have existed for as long as human civilization itself. These stories have inspired modern media, particularly science fiction, and have also inspired researchers throughout history to attempt to replicate the feat through technology, thematically highlighting The Role of Storytelling in Advancing Scientific Inquiry. Kaku describes the plots of several key pieces of science fiction media, recounting, for instance, a pivotal scene in a Star Wars movie to introduce the concept of the Death Star. This gives readers a better feel for the features and functions of the fictional technologies discussed in each chapter and for the goals of the efforts to replicate them.
Kaku discusses efforts to replicate fictional technologies throughout history and in the present day, but he also explains the underlying scientific principles and theories relevant to the chapter’s topic. In doing so, he establishes and develops The Impact of Collective and Individual Scientific Achievements as a theme. He introduces several of the most influential pioneers of scientific thought, such as James Clerk Maxwell, alongside explanations of their main contributions to science. Kaku includes basic biographical information, humanizing their achievements and ensuring that readers stay engaged with the recitation of scientific history. In addition, identifying the faces of the creators of different theories aids in memory retention, helping readers more easily recall the relevant details of the numerous theories and principles that Kaku describes throughout the rest of the book. The human element is an important feature of popular science literature, making the book an entertaining narrative rather than a dry recitation of facts. This background information is also important because popular science does not assume a high level of preexisting specialist knowledge in its audience. Whereas scientific papers would simply include a reference, popular science presents all the necessary information in an easily digestible format.


