
Thinking Strategically: The Competitive Edge in Business, Politics, and Everyday Life

Nonfiction | Book | Adult | Published in 1991


Part 2: Chapter Summaries & Analyses

Part 2, Chapter 4 Summary: “Resolving the Prisoner’s Dilemma”

Sometimes players suddenly decide to cooperate, and sometimes that cooperation breaks down. Game theory suggests ways to predict when these changes will happen.


During the 1970s and 1980s, a group of oil-producing nations created a cartel, OPEC, that reduced overall production and raised oil prices. The cartel began to break down, though, when some countries broke faith and began over-producing in an example of the prisoner’s dilemma.


Each member that breaks such agreements increases its payout relative to the others but reduces the total payout to all. Cooperation would have given all of them more money, but the temptation to break away and make more becomes the dominant strategy. If everyone else is thinking of breaking the agreement, then each member needs to break away. When cooperation fails, everyone is worse off.
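The dominant-strategy logic can be made concrete with a toy payoff table; the numbers below are illustrative, not drawn from the book:

```python
# Illustrative prisoner's-dilemma payoffs for one cartel member
# ("C" = honor the production quota, "D" = over-produce).
# Each entry gives (your payoff, the other member's payoff).
PAYOFF = {
    ("C", "C"): (3, 3),  # everyone honors the cartel
    ("C", "D"): (0, 5),  # only the other side over-produces
    ("D", "C"): (5, 0),  # only you over-produce
    ("D", "D"): (1, 1),  # the cartel collapses
}

def best_reply(opponent_move):
    """Return the move that maximizes your payoff against a fixed opponent move."""
    return max("CD", key=lambda my: PAYOFF[(my, opponent_move)][0])

# Defecting is the best reply whether the opponent cooperates or defects,
# so it is the dominant strategy -- even though (D, D) pays less than (C, C).
assert best_reply("C") == "D"
assert best_reply("D") == "D"
```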


In politics, whichever person or group proposes to the public a necessary but painful solution to a major problem—a tax increase to pay for needed government spending, for example—that person or group pays a steep price in the next election, while everyone else benefits. The best strategy is to say nothing, but if all sides do so, the problem gets worse. In business, as with the OPEC situation, companies selling the same type of product will have trouble organizing a simultaneous hike in prices; this scenario benefits consumers.


Sometimes it’s hard to detect cheating within a price cartel: Outside factors, such as a fall in demand, a hidden discount or side benefit, or a change in costs can cause the price to drop and simulate a broken cartel. Trade agreements also can fail when participating countries find clever, non-monetary ways to protect their home industries, such as toughened inspections or complex quotas, which make trade competition more opaque.


Cooperation improves when cheaters know they’ll be punished. Criminals who implicate their fellows in return for a shorter sentence might be hunted by the gang’s agents inside or outside prison. An OPEC country that cheats can be shunned by other members and lose more than it gained from cheating. Punishing cheaters isn’t possible in a one-time game, but cooperation can be enforced when a game is continuous.


If, though, game play comes to an end at a specific time, players once again will be tempted to cheat at the final round, and the next-to-last round becomes the last one for cooperation, at which point every player will again be tempted to cheat, and so on until all cooperation “unwinds” and becomes impossible. Most cooperative games, from crime to price fixing, have no fixed end date, or the end date is very distant, so cooperating during early rounds benefits both honest and dishonest players.


Ironically, one way to enforce high prices is to issue a low-price guarantee: Competing retailers promise to match each other’s prices. This prevents them from luring customers away from their competitors with lower prices, which prevents prices from dropping at all, as in a price-fixing cartel. Competitors effectively cooperate on price, and they can detect “cheating” instantly when their customers present them with the competitor’s published lower price. In New York, discount stereo retailers Crazy Eddie and Newmark & Lewis publicly guaranteed their low prices, but such discount stores typically mark up prices 100% over wholesale.


The criteria for punishments should be clear and certain. A company that tries to detect cheating by looking for a drop in its own “discounted mean of profits” gets lost in complicated calculations (105). Enforcing trade agreements—as Europe tries to do with GATT, the General Agreement on Tariffs and Trade—can take years, and judgments often become politicized. Detection of cheaters is prone to error, and dire punishments to keep members in line can cause great damage. The solution is to find the smallest punishment that deters cheating.


A simple workaround resolves many of these problems. Called “tit-for-tat” (106), it’s a system of measured responses to a competing player that punishes cheating and rewards cooperation. If a player has the first move, that player always cooperates; thereafter—or if that player goes second—the tit-for-tat player always mimics the actions of the opponent. If the opponent cooperates, the tit-for-tat player cooperates; if the opponent cheats, the tit-for-tat player cheats.
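As a minimal sketch (the function name and move labels are my own), the tit-for-tat rule fits in a few lines:

```python
def tit_for_tat(opponent_history):
    """Tit-for-tat: cooperate on the first move, then copy the
    opponent's previous move ("C" = cooperate, "D" = defect)."""
    if not opponent_history:        # first move, or no information yet
        return "C"
    return opponent_history[-1]     # mimic whatever the opponent just did

# Cooperation is answered with cooperation, a defection with a defection.
assert tit_for_tat([]) == "C"
assert tit_for_tat(["C", "C"]) == "C"
assert tit_for_tat(["C", "D"]) == "D"
```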


Tit-for-tat is clear, nice, provocable (it punishes cheaters), and forgiving. In two 150-game computerized prisoner’s dilemma tournaments, the tit-for-tat strategy beat all other strategies. It never beats an individual opponent because the resulting cooperation generates equal scores, but its overall tournament score is highest.


In the real world, when someone retaliates, as in a diplomatic crisis or a personal feud, the retaliation can anger the other side and trigger counter-retaliation; under a tit-for-tat strategy, this becomes a chain reaction of mutual punishments: a feud. If a player defects by accident, or is mistakenly perceived to defect, the result is an endless cycle of defection from which neither player can escape, unless one of them later misperceives a defection as cooperation.


As the chance of misunderstandings rises, the odds increase that a tit-for-tat strategy will quickly enter a runaway cycle of defections. The result is that, in real-world situations, a tit-for-tat strategy will eventually spend as much time defecting as cooperating, no matter what the opponent does. If misunderstandings reach 50%, it no longer matters what a player does in tit-for-tat, or any strategy, as all responses do no better than a coin toss.
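A small simulation (my own, with an assumed 5% misreading rate; not from the book) illustrates the drift toward cooperating only half the time:

```python
import random

def simulate_noisy_tft(rounds, error_rate, seed=0):
    """Two tit-for-tat players; each misreads the opponent's last move
    with probability error_rate. Returns the fraction of all moves
    that were cooperative ("C")."""
    rng = random.Random(seed)

    def misread(move):
        # With probability error_rate, perceive the opposite move.
        if rng.random() < error_rate:
            return "D" if move == "C" else "C"
        return move

    a, b = "C", "C"                 # both open by cooperating
    coop = 0
    for _ in range(rounds):
        coop += (a == "C") + (b == "C")
        # Each player copies a (possibly misread) version of the other's move.
        a, b = misread(b), misread(a)
    return coop / (2 * rounds)

# With any noise at all, long-run cooperation drifts toward 50%.
fraction = simulate_noisy_tft(rounds=100_000, error_rate=0.05)
assert 0.45 < fraction < 0.55
```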


An improvement on tit-for-tat is to play cooperatively through several rounds and keep track of the number of defections by the other side. If, based on knowledge of the opponent’s previous record, the number of defections seems high—for example, two out of three, or three of five, or five in 100—then a switch to tit-for-tat will punish defectors effectively. If punishments cycle for 20 rounds, a player should return to cooperating and put the opponent on “probation.” If the opponent persistently breaks parole, indefinite tit-for-tat works well.


The chapter’s Case Study, “#4: Congress v. Federal Reserve” (115), presents the US Congress, in charge of fiscal policy (taxing and spending), in a 1981 tiff with the Federal Reserve, which is in charge of monetary policy (money supply, interest rates). Congress wants to increase spending while the Fed increases the money supply; the Fed, under chairman Paul Volcker, worries that this combination will cause price rises—inflation—a big problem only recently tamed. They compromise and agree that Congress will reduce spending and the Fed will increase the money supply.


This is a prisoner’s dilemma, since each side gives up its preferred outcome for a generally good result, but each side can defect and get its best outcome if the other side continues naively to cooperate. A workaround has Congress deciding the money supply and the Fed setting the tax-and-spending rate. 

Part 2, Chapter 5 Summary: “Strategic Moves”

Sometimes a “scorched earth” policy—destroying the value of the thing being fought over—prevents invasions, but sometimes it doesn’t. When Nazi Germany attacked Russia in 1941, Stalin had his people burn their farms and destroy infrastructure. Western Pacific tried to buy Houghton Mifflin publishers, and Houghton Mifflin’s many famous authors threatened to leave the company, making the purchase worthless; Western Pacific backed off.


When Rupert Murdoch bought New York magazine, though, its authors made good on their threat to leave, but Murdoch didn’t buy the magazine for their talents. Instead, he wanted the solid relationships with advertisers, so “The writers burned the wrong fields” (120). 


Scorched earth policies are examples of “strategic moves,” in which players promise, in certain situations, that they’ll not give in, even if it damages them.


If, for example, the US and Japan compete to set a new TV standard, whichever side makes the bigger effort to develop the technology is more likely to win, but a big effort is expensive. The best outcome for either side is its own high effort against the other country’s low effort; the worst is when both sides put in high, costly efforts. One solution is for the US to pre-empt Japan by announcing in advance its commitment to a high effort.


If Japan capitulates and puts in low effort, the US can later do the same and save on costs. Japan, though, realizes this and responds with its own high effort. Thus, if the credibility of the US commitment is in doubt, its strategic move may fail. The solution is to support the commitment with a costly action that’s irreversible. The US might allocate funds ahead of time to support corporations that develop the new technology.


To have any effect, strategic moves must be announced in advance. “Response rules” are strategic moves that commit to certain actions if another player makes an unwanted move. Response moves take two forms, threats (to compel or deter) and promises (to reward cooperation). A mugger, for example, threatens to hurt you if you don’t hand over your wallet, or promises to stop beating you if you comply.


Strategic moves transform a simultaneous game into a sequential game: Either you make a promised move preemptively and await a response, or, if another player makes a certain move, you respond with your threatened move.


Unlikely to win a conventional war in Europe, NATO and its leader, the US, promise to use nuclear weapons. Though this benefits NATO members if they’re already under conventional attack, it would likely devastate the US, so it might renege. If the Soviets don’t believe the threat, they’re free to attack; if they believe it, they won’t. Either way, it’s in the US interest to make the threat.


Sometimes a strategic move makes no difference. During the 1981 budget battle in the US Congress, Republicans found that their dominant strategy was always to support President Reagan’s budget proposal. This either would give them their best outcome or avoid the worst outcome. Realizing this, and recognizing that all possible strategic moves—threats and promises—would have no effect on the Republicans, the Democrats opted to support Reagan, which gave them not their best outcome but a decent one. The authors believe they should, instead, have publicly promised support if the Republicans compromised, and threatened to attack if they didn’t.


It’s important to avoid over-commitment. Too big a threat may be hard to believe, and carrying out such a threat might do too much damage. Some would argue that big threats almost never need to be carried out, so there’s no damage, but such threats horrify other players and cause them to shun the threatener. Thus, threats always should be as small as possible.


In “Case Study #5: Boeing, Boeing, Gone?” (139), two airliner manufacturers, America’s Boeing and Europe’s Airbus, compete for sales in Europe and the US. Boeing brings its mid-range 727 to market first, and if Airbus tries to compete with its new 320 series, profit won’t be enough to cover development costs. Airbus wants European regulators to restrict sales of Boeing jets so that Airbus can make a profit.


If Europe restricts airliners to those made by Airbus and the US keeps its market open to both airliners, Airbus will make enough profit to cover its development costs. However, European consumers benefit when both companies compete. Meanwhile, if the US retaliates and closes its market to Airbus, Boeing will make much more money, and Airbus will fail to profit. Under all these conditions, Airbus will decide not to develop the 320. 

Part 2, Chapter 6 Summary: “Credible Commitments”

Strategic moves fail if the promise to act isn’t believable. The authors present an “‘eightfold path’ to credibility” (144) that demonstrates a player’s seriousness. These eight techniques fall under three principles: changing payoffs, cutting off retreat, and teaming up with others.


The first approach is “Reputation,” which establishes trust in the player’s word. Israel, for example, publicly refuses to negotiate with terrorists; this is costly but helps deter hostage-taking: Any deviation, and more hostages will be taken. Another approach is to negotiate, then attack the terrorists: This ruins a country’s reputation for negotiating, effectively taking that option off the table. A reputation for being unreliable makes threats of violence credible; likewise, a reputation for honesty shows a commitment to the truth.


The second approach is “Contracts.” A contract with steep incentives and penalties makes a commitment more credible. A neutral party can enforce the contract; the guarantee is that, if the enforcer permits the contract to be rewritten, its reputation will be ruined, and no one will trust or employ it again.


The third approach is “Cutting Off Communication” (151). Mailing a letter makes the letter’s contents irrevocable; the sender can no longer deny making the statements therein. Refusing to communicate further guarantees that the player won’t renegotiate the terms of an agreement; the drawback is that the cut-off communicator will have trouble enforcing any agreement.


The fourth technique is “Burning Bridges”—cutting off any retreat. William the Conqueror’s army in England, and Cortés’s men in Mexico, burned their ships and thus had to complete their invasions or die trying. Opponents knew they’d fight hard, so the defenders tended to retreat. A business that produces only one major product will defend its turf ferociously, as did Polaroid when Kodak tried to enter the instant-photography market; Polaroid won its patent-infringement case. The East German government knocked down the Berlin Wall, an irrevocable demonstration of its commitment to reform.


A fifth approach is “Leaving the Outcome beyond Your Control” (155): A nuclear power, for example, might promise to retaliate automatically if another country attacks. With no negotiation possible during an attack, the guaranteed response is more likely to dissuade an aggressor. Somewhat better is “brinkmanship,” in which the situation might get out of control; this reduces the chance that an error would destroy both players, while leaving in place the threat of disaster as a deterrent.


One form of brinkmanship involves “an offer you can’t refuse” (163), where, for example, a job applicant, or perhaps a car buyer, must act immediately or the offer on the table will be taken away. The argument is that conditions might be very different tomorrow. The weakness of this approach is that the threat’s credibility is low, and competitors might improve their own offers.


The sixth procedure is “Moving in Steps” (157), which involves breaking a commitment into many small parts, each too small to betray at the expense of the overall, valuable agreement. One example is progress payments paid regularly during the life of a construction job.


The seventh approach is “Teamwork” (158). A group sometimes can meet commitments better than an individual by using peer pressure. West Point punishes cheating with expulsion, and failure to report a cheater also gets a student expelled. Groups train members to cooperate: Armies use pain during boot camp to train soldiers to strict obedience; they bond troops by emphasizing glorious past victories, which instills pride.


The eighth technique is “Mandated Negotiating Agents” (160). Outside parties generally have less incentive to back down during negotiations; this strengthens the position of the client they represent. A union leader can talk tough with management, in part because that leader can’t force the union to accept a deal, and in part because that leader could be replaced for negotiating poorly.


The chapter’s Case Study is “#6: Would You Rather Rent a Computer from IBM?” (165). In United States v. IBM, the government argued that IBM’s practice of leasing its mainframe computers prevented other companies from competing. IBM argued that its rentals were a better deal for most customers, who were protected from buying a machine that might soon become obsolete. The question is what would happen to pricing if IBM mainly sold, instead of rented, its mainframes.


Short-term rental of a computer is a low-commitment purchase, and renters are willing to pay a higher price for the privilege. Buying an entire mainframe computer, on the other hand, is a major purchase, and customers wait for discounts before buying. IBM thus earns more by renting than by selling. 

Part 2, Chapter 7 Summary: “Unpredictability”

Systematic strategies are predictable, and other players soon learn to read and counter them. Thus, strategies should select randomly from a suite of possible moves. Baseball pitchers, for example, should choose randomly from their set of good pitches on each pitch. This makes them unpredictable.


In tennis, servers vary their aim toward the receiver’s forehand or backhand. Depending on the receiver’s skill at returning the two types of serves, the server adjusts the percentage of each type. The ideal percentage, which gets the fewest successful returns by the receiver, varies with each opponent. Likewise, the receiver adjusts the percentage of time she anticipates that the serve will be, say, to her forehand until that proportion gives her the highest number of successful returns. The point for both, ironically, is to be unpredictable but at a predictable rate.


In a game such as tennis, where one player’s gain is the other’s loss, the “min-max theorem” states that the best outcome one player can guarantee (the maximum of the minimums) equals the worst outcome the opponent can impose (the minimum of the maximums), and both sides tend to arrive at the same percentages (178). In games where the sum of gains isn’t zero, or with multiple players or multiple strategies, the min-max theorem breaks down, and more complex calculations come into play. In all cases, a player’s goal is to randomize moves enough to prevent an opponent from correctly predicting the next move.
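For the two-strategy case, the equilibrium mix can be computed directly from the payoff table; the return rates below are hypothetical stand-ins for the tennis example, not figures from the book:

```python
def equilibrium_mix(m):
    """Interior mixed-strategy equilibrium of a 2x2 zero-sum game.
    m[i][j] is the ROW player's payoff when row plays i and column plays j.
    Returns (row's weight on strategy 0, column's weight on strategy 0,
    game value). Assumes the equilibrium is fully mixed."""
    (a, b), (c, d) = m
    denom = a - b - c + d
    p = (d - c) / denom          # row mixes so the column player is indifferent
    q = (d - b) / denom          # column mixes so the row player is indifferent
    value = (a * d - b * c) / denom
    return p, q, value

# Hypothetical return rates: rows = receiver guesses (forehand, backhand),
# columns = server aims (forehand, backhand).
returns = [[0.90, 0.20],
           [0.30, 0.60]]
p, q, v = equilibrium_mix(returns)
# With these numbers, the receiver guesses forehand 30% of the time,
# the server aims there 40% of the time, and 48% of serves come back.
```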


Sometimes a player mixes plays poorly on purpose, and competitors begin to predict the behavior, but this is a set-up to lull opponents into overestimating their chances and walking into a late-game trap when the “poor” player suddenly improves. In general, though, using the best mix of randomized plays forces the opponent to do the same; in the long run, any deviation worsens the results.


Raising skills alters percentages. If a tennis player improves his backhand, this forces opponents to serve more to his stronger forehand.


Players must randomize their moves during the heat of battle. To randomize the choice between fastballs and curve balls, pitchers can glance at their watches: If, for example, the mix between the two pitches should be 40% fastball and 60% curve ball, the pitcher chooses fastball if the second hand lies between 1 and 24 and curve ball if the second hand lies between 25 and 60.
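The watch trick translates directly into code (the function name and interface are my own):

```python
def pitch_from_watch(second_hand, fastball_share=0.4):
    """Randomize pitch selection from a watch's second hand, as the
    chapter describes: for a 40/60 fastball/curve mix, seconds 1-24
    (24 of 60, i.e. 40%) mean fastball and 25-60 mean curve ball."""
    cutoff = round(60 * fastball_share)   # 24 seconds for a 40% mix
    return "fastball" if 1 <= second_hand <= cutoff else "curve ball"

assert pitch_from_watch(10) == "fastball"
assert pitch_from_watch(24) == "fastball"
assert pitch_from_watch(25) == "curve ball"
```

Because the second hand is effectively random from the batter’s point of view, the pitcher gets the right long-run proportions without any pattern to exploit.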


In armed conflict, because spies can learn an enemy’s plans, generals should randomize their attack patterns, waiting until the last moment to make the random choice.


Sometimes bluffing works, as in poker, but a player who bluffs too much trains opponents to ignore the bluffs, and a player who never bluffs trains opponents to bow out when that player has a good hand. Neither approach wins big pots; it’s better to mix bluffs and truth.


When two people’s interests are aligned, sometimes their best efforts can result in failure. In the O. Henry story “The Gift of the Magi,” at Christmas Jim sells his precious pocket watch to buy Della a gift of combs for her beautiful hair, but Della sells her hair to buy a fob for Jim’s watch. They each get a gift the other can’t use. The equilibrium strategy would be for them to toss a coin so that only one of them gives a gift, but this ruins the surprise. The real gift lies in the failure of their efforts, since it shows “their love for each other” (190).


Athletes randomize their plays, but businesses do so at their peril, since companies don’t like to leave things to chance. If Coke offers a discount coupon and Pepsi happens to choose the same week to do the same, both companies fail to gain any of the other’s customers and instead must give discounts to their own customers. They end up somewhat like Della and Jim, sacrificing for nothing.


Enforcement of rules can be randomized efficiently. Cities encourage compliance with parking meters by spot-checking the meters and charging high fines for misuse. Randomized drug testing works similarly—the penalty can be getting fired—but random US tax audits aren’t as effective because penalties are too small.
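The chapter’s point about penalties reduces to a simple expected-value comparison; the dollar figures below are hypothetical:

```python
def deters(gain, fine, audit_probability):
    """A rational scofflaw is deterred when the expected penalty
    exceeds the gain from cheating (a standard expected-value
    argument; the numbers are illustrative, not from the book)."""
    return fine * audit_probability > gain

# A $50 fine with 1-in-10 spot checks deters $2 of meter cheating...
assert deters(gain=2, fine=50, audit_probability=0.10)
# ...but a modest tax penalty with rare audits fails to deter.
assert not deters(gain=1000, fine=1500, audit_probability=0.02)
```

This is why spot checks work: A high fine lets the city keep the audit probability, and thus enforcement cost, low.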


Scofflaws sometimes hide their crimes in a cloud of false alarms, overwhelming the police’s resources, in the same way that military attackers hide a real missile among dozens of fake ones, forcing defenders to shoot down every single missile.


Chapter 7’s Case Study, “Operation Overlord” (195), examines the Allies’ choice of attack point during D-Day in World War II. The two landing zones are Calais, easy to defend but closer to the Allied objective in Germany, and Normandy, harder to defend but farther away. A table can be drawn that shows the relative merits of each landing place: Normandy is the favorite. The Germans were misled into believing the Allies would instead land at Calais. Mistakes, bad luck, and confusion sown by an unreliable double agent slowed the German response even after the Allies landed at Normandy, and the Allies won the day. 

Part 2, Epilogue Summary

The authors credit Princeton’s John von Neumann as the pioneer of game theory; John Nash and Thomas Schelling expanded his work. For further reading, the authors suggest several books, including Game Theory: A Nontechnical Introduction by Morton Davis; Game Theory and Politics by Steven Brams; and The Art and Science of Negotiation by Howard Raiffa (200).


Many of Part 2’s concepts have been simplified, especially by reducing the number of possible strategies in a given game to two: “This was done when the most basic ideas could be conveyed without serious loss of content” (200).


Part 3 will focus on common situations and how game theory applies to them: “These include bargaining, voting, brinkmanship” (202). 

Part 2 Analysis

Part 2 looks at how to resolve situations that arise commonly in various types of games. These include prisoner’s dilemma issues, the problem of trust, strategic moves, and how to randomize tactics to keep an opponent off-base.


Chapter 4 discusses how OPEC nations cooperated to restrict production and raise prices, and how that cooperation failed because it was subject to the hazards of prisoner’s dilemma games. OPEC’s decision to control the price of oil worked brilliantly for a time because the world depended on oil to run its cars and other machines. Customers usually react to a price hike by choosing a cheaper product, but, especially during the 1970s and 1980s, there was no easy substitute for oil.

 

For this reason, demand for oil at first remained pretty much the same, whatever the price. Given enough time, though, customers reduced their need for oil by making more efficient use of it—cutting mileage, buying gas-efficient cars, and the like. But for a few years, OPEC could rake in fortunes simply by keeping production low and prices high.


Regardless of the opportunities, price cartels are always subject to the effects of the prisoner’s dilemma, since cartels are a form of that game, and eventually OPEC’s price strategy collapsed.


A more recent example of the problems with price fixing comes from the 2021 meeting of the G-7, the nations with the largest economies. They agreed to fix the minimum corporate tax rate at 15%: If any participating nation drops its rate below that to attract business, the other members simply tack on the difference so that those companies end up taxed at 15%. Effectively, the G-7 established a price-fixing cartel; they became the OPEC of taxation. At first blush, this seems to solve the problem of enforcement: After all, tax rates are public, and cheaters are known at once.


However, corporations and defecting nations have workarounds: They can redefine net income, increase deductions, use no-cash barter trades, launder payments, and so on. As well, in 2021 the average corporate tax rate stood at 24%, so the 15% rule sets a minimum tax; it’s not an increase. Odder still, tax rates have fallen almost in half over the past 40 years and may drop further still—large-scale world trade involves tax competition between nations—and the 15% rate may become a price floor. That, perhaps, is the main purpose of the agreement: to prevent tax rates from eroding altogether. (Cowen, Tyler. “The New Proposal on Corporate Tax Synchronization.” Marginal Revolution, 5 June 2021, https://marginalrevolution.com/marginalrevolution/2021/06/the-new-proposal-on-corporate-tax-synchronization.html.)


Cartels, then, are hard to enforce, no matter how clever they may seem at first. The prisoner’s dilemma remains one of the hardest games to solve.


Prisoner’s dilemmas occur, not only in business and politics, but in everyday social life. A potluck party will draw attendees who fail to bring food and free-ride on the contributions of others. Marriage, too, involves a prisoner’s dilemma: Spouses promise to be loyal to each other, but cheating while the partner remains loyal is an outcome too tempting to resist for some.


A powerful strategy for beating the prisoner’s dilemma is the tit-for-tat system, which always does what the opponent does but which can lead to a runaway cycle of mutual punishments. The strategy can be modified for tit-for-tat players caught in such a cycle: After several rounds of defecting, the cycle automatically triggers a cooperative overture on the player’s next move, opening the possibility of a cooperative response in return that breaks the logjam. In the real world, “punishment” can take many forms, one of which is to ignore the opponent’s last move. This probably will generate a query from the opponent, which the first player can treat as friendly and respond to in kind. This tactic breaks a cycle of tit-for-tat feuding.


Chapter 4’s Case Study, in which the 1981 US Congress agreed to reduce spending and the Fed agreed to increase the money supply, created a prisoner’s dilemma where both sides could defect. The authors’ solution is to have each side manage the other side’s decision: Congress sets the money target while the Fed sets the spending target. This is a version of the “I cut, you choose” system of dividing up a cake. It forces the two players to reach a 50-50 split: If one side gets too carried away, the other can retaliate. In real life, neither Congress nor the Fed would be in a hurry to set a precedent where the other institution has control over it. Still, the authors’ solution would likely prove useful in many other, similar situations where the two parties involved are less tradition-bound or dug into their positions.


Thinking Strategically was written well before Bitcoin and the theory of the blockchain took the financial world by storm in the years after 2009. The blockchain, which handles trades of Bitcoin and other cryptocurrencies, is nearly impossible to cheat. Its rules require simultaneous, unanimous agreement from a large number of computers, whose consent is effectively unfakeable. It becomes a trusted platform on which anyone—friends and enemies, credible partners and unreliable ones—can carry out trades, sign “smart” contracts, and keep track of invoices and receipts with automated payment ledgers. It’s already being adapted to the needs of banks, construction contractors, and the general public. Blockchains go a long way toward resolving uncertainties and suspicions between partners when prisoner’s dilemma problems crop up between them.


The authors argue that, in political battles, each side should choose its dominant strategy, regardless of whether it comes out better or worse than its opponent. This is true if both sides ignore optics, or how things appear to the voters, but it’s nearly impossible for politicians to resist a public triumph when they can get one. Some options are, indeed, worse for a party’s long-term strategy but provide bragging rights (“We beat them! We won!”) that can sound compelling to constituents. Voters are unlikely to support a team that can’t rack up obvious wins, and, since the complexities of political games aren’t always visible to outsiders, voters will tend to lean toward those who visibly triumph over the other side.


The authors mention appearing irrational as a rational strategy: “Someone who has a reputation for being crazy can make successful threats that would be incredible coming from a saner and cooler person” (148). During the late 20th and early 21st centuries, the leaders of North Korea—a country wedged between China and US-supported South Korea—increased their leverage by adopting eccentric, even “loony” personalities. From time to time, they’d whip up war fever among their population, as if preparing for conflict. Both China and the US had to treat North Korea with kid gloves for fear it might suddenly attack and destroy South Korea’s capital, Seoul, or hurl nuclear weapons in all directions. Thus, for decades, North Korea kept outsiders at bay by acting irrational; there was method in its madness.


Case Study #5 shows that, under certain market conditions, Airbus’s 320 series of airliners wouldn’t ever be developed. In fact, the A320 family of planes entered the market in 1988 and went on to overtake Boeing’s 737 jet as the all-time best-selling airliner. The protectionism hypothesized, for purposes of discussion, didn’t happen in real life, and US airlines own fleets of 320s.


Case Study #6 involves IBM renting its mainframes and whether it thereby created an illegal monopoly. The US Justice Department filed its charges in 1969, and the case finally ended in 1982, when the US dropped its prosecution. Millions of dollars were spent on the case, but by 1982 computers had vastly improved and the marketplace was quite different; the problem had become moot. Today, IBM focuses on cloud computing, research, and computer chip development. The cloud computing service is a form of rental: cheap per month but lucrative in the long term.


The authors argue that even a monopoly corporation has a competitor: its future self. When a manufacturer has made most of its first sales on a new product, it will lower prices to attract more sales. This causes many customers to wait for even better sales prices down the line, causing sales to drag. Such a dilemma also occurs when a company announces that a new version of an older product will ship soon, and buyers stall their purchases of older products until the new item arrives. This has been a perennial problem for the Apple iPhone.


In the Epilogue to Part 2, the authors credit John von Neumann, one of history’s greatest mathematicians, with launching the formal study of game theory. His discoveries also made important contributions to many other fields, especially quantum mechanics and computer theory. Virtually every great scientist who worked with him has said they couldn’t keep up with his remarkably quick and accurate mind. He was a genius among geniuses.


The authors also cite John Nash, another brilliant mathematician, whose work on game theory earned him the Nobel Prize in economics. Fittingly, he also won the von Neumann Theory Prize for his work on equilibria in games, now called Nash equilibria. Nash was the subject of the Oscar-winning film A Beautiful Mind.


Thomas Schelling gets many mentions throughout the book, principally for his theory of brinkmanship (see Chapter 8, below) and his idea about focal points, places where people converge to meet even if they haven’t agreed to meet there. Grand Central Station, the most common answer to questions about where to meet in New York City if you’ve gotten separated from a companion, is an example of a focal point; today, they’re called Schelling points. If finding each other were a game, a Schelling point would be the equilibrium on which the game settles. 
