Philip E. Tetlock, Dan Gardner
In 2011, Philip Tetlock and his research partner Barbara Mellers founded the Good Judgment Project (GJP), a multiyear study that recruits volunteers to forecast world events. The study includes superforecasters like Bill Flack, a Nebraska retiree who consistently makes accurate predictions in areas far outside his expertise, such as the chance of Russia annexing Ukrainian territory in the ensuing three months. After studying a cumulative 20,000 volunteers from various walks of life, Tetlock and his team concluded that what matters is not what people think, nor their professional expertise, but how they think. Superforecasters are intellectually curious and willing to do their research; they also consider conflicting arguments and update their preexisting beliefs. Superforecasters like Bill make up the top 2% of the GJP’s volunteers, and they often outperform professional forecasting bodies at other institutions. The GJP’s initiatives, including its Ten Commandments for Aspiring Superforecasters, improved forecasting accuracy by about 10%, a gain that may seem modest but has a profound effect over time.
The authors argue that we live in a world beset with forecasts. On the news especially, self-styled authorities are routinely called upon to make predictions, many of which are unfounded. When those predictions fail to come true, the forecasters are rarely held accountable, which perpetuates mediocrity in the sphere of prediction.
Tetlock’s research between 1984 and 2004 showed that the average expert made predictions about as accurate as those of a “dart-throwing chimpanzee” (4); expert predictions, in other words, often fare no better than random guesses. Nevertheless, Tetlock opposed the nihilists who used his research to argue that expertise and prediction alike were useless. He believes it is possible to see into the future and that many people can cultivate the skills required to do so.
The authors draw upon American meteorologist Edward Lorenz’s 1972 articulation of the butterfly effect to illustrate why the future is difficult to predict. Lorenz theorized that the flap of a single butterfly’s wings in Brazil could change the movement of the air enough to provoke a tornado in Texas. His work showed that minute differences in the initial data fed into weather models could produce radically different long-term forecasts. Ironically, while today’s scientists have more information than ever, they are less confident about the prospect of making perfect predictions.
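Lorenz’s insight lends itself to a quick numerical demonstration. The sketch below is our illustration, not the authors’: it integrates his famous three-variable weather model (using the conventional parameter values) from two starting points that differ by one part in a billion and prints how fast the trajectories drift apart.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

# Two starting points differing by one part in a billion: the
# numerical equivalent of a butterfly's wing flap.
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])

for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 500 == 0:
        print(f"t = {step * 0.01:4.1f}  separation = {np.linalg.norm(a - b):.2e}")
```

The separation grows exponentially until the two forecasts resemble each other no more than two randomly chosen weather states, which is why even tiny measurement errors doom long-range prediction.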
Still, the authors argue that predictions are paramount to the functioning of daily life, including tasks as mundane as deciding when to commute to work and shopping on Amazon, a platform that automatically presents a variety of options (predictions) based on previous purchases.
The authors also highlight the importance of revision within prediction: Analyzing statistics and learning from the past are essential to this science. While the authors acknowledge that computer algorithms can help mitigate human biases, they propose that algorithms should supplement human judgment rather than replace it, because although computers can retrieve relevant information, they cannot make informed guesses about human intentions, which in turn shape the future.
This chapter explores the mystery of why, in 1956, Archie Cochrane, a pioneer of 20th-century statistical testing in medicine, was so quick to accept a specialist surgeon’s unfounded pronouncement that he had terminal cancer. In the end, a pathologist who examined the biopsy found that Cochrane did not have cancer at all. The authors conclude that the error arose both because Cochrane failed to doubt the specialist and because the specialist failed to doubt his own judgment. They argue that all humans fall into such cognitive errors when we have “been too quick to make up our minds and too slow to change them” (25).
Such cognitive errors dominated medicine for centuries as doctors relied on guesswork and on what had previously seemed to work. Because ignorance and overconfidence were defining features of medical treatment, patients were often better off waiting out their illnesses than seeking professional advice. The 20th-century innovation of randomized controlled trials, which demanded careful measurement and large samples, paved the way for greater accuracy. The physicist Richard Feynman argued that doubt, as opposed to overconfidence, is what advances science: Scientists must be willing to challenge even the hypotheses that seem most obvious to them. This idea came to shape medicine too.
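To make the chapter’s point about careful measurement concrete, here is a small simulation of our own devising (the recovery rates and sample sizes are invented for illustration): randomization lets chance imbalances between groups wash out, and only large samples make a modest treatment effect stand out from noise.

```python
import random

random.seed(42)  # reproducible illustration

def run_trial(n_per_group, base_recovery=0.50, treatment_benefit=0.10):
    """Simulate one randomized trial; return the observed effect.

    Control patients recover with probability `base_recovery`;
    treated patients get an extra `treatment_benefit`.
    """
    treated = sum(random.random() < base_recovery + treatment_benefit
                  for _ in range(n_per_group))
    control = sum(random.random() < base_recovery
                  for _ in range(n_per_group))
    return (treated - control) / n_per_group

# Small trials scatter widely around the true +0.10 effect;
# large trials pin it down.
for n in (20, 200, 20_000):
    estimates = [run_trial(n) for _ in range(5)]
    print(f"n = {n:>6}: " + "  ".join(f"{e:+.3f}" for e in estimates))
```

With 20 patients per group, the observed effect swings from strongly negative to strongly positive purely by chance; with 20,000, every run lands near the true benefit, which is why the trials the authors describe favored large samples.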
Ironically, Cochrane, the very man who fell for his surgeon’s misdiagnosis, was a key proponent of randomized controlled trials in the 1950s, complaining that too many healthcare decisions were made without adequate scientific validation. He observed that physicians were stuck in the illusion of their own expertise and lulled into complacency about their longstanding methods. When it came to his own treatment, however, he took his surgeon’s claims at face value.
The authors’ hunch is that Cochrane fell victim to “System 1 thinking” before he could access the more deliberative mode of “System 2 thinking,” concepts originating with psychologist Daniel Kahneman: While System 2 is focused, conscious, reflective thought (the kind used in complex problem-solving), System 1 is the automatic process that delivers quick conclusions, which are often inaccurate. System 1 thinking is a product of human evolution: Paleolithic hunter-gatherers had no time to assess whether a shadow on the grass might be a lion and had to make a snap judgment, often based on previous experience.
System 1 thinking also takes over when people are uncomfortable with doubt and uncertainty; the quick leap from uncertainty to a satisfying conclusion, however, often produces error. This was the case after the 2011 bombing in Oslo, Norway, when early speculators automatically blamed Islamic terrorist groups that had previously attacked other European cities. In fact, the attack was carried out by Anders Breivik, an anti-Islamic extremist and opponent of multiculturalism. The mistaken conclusion resulted from what psychologists term “confirmation bias”: Humans naturally seek out evidence that confirms their preexisting beliefs, and the early speculators neither sought out nor considered information that might contradict their initial explanation. While “this is a poor way to build an accurate mental model of a complicated world, […] it’s a superb way to satisfy the brain’s desire for order because it yields tidy explanations with no loose ends” (39).
Meditating further on Cochrane’s misdiagnosis, the authors conclude that he fell prey to an unconscious “bait and switch,” replacing the hard question of whether he had cancer with the easier question of whether the specialist was the sort of person who would know whether he had cancer (39). Cochrane’s mistake shows that even experts are susceptible to cognitive errors.
The authors maintain that intuition can be right or wrong in equal measure. Indeed, what seems like intuition on the part of experts is often simply pattern recognition formed over time. Like the 20th-century pioneers of randomized medical trials, all humans need to honor the fact that we know far less than we think we do about the future.
The authors open their book with an effort to make its core ideas feel accessible and relevant to average readers. The topic of forecasting might seem intimidating and elitist, but the authors show how prediction is a key component of daily human life. Using the case study of a fictitious woman in Kansas, they demonstrate the ubiquity of predictions, from the ones she makes daily about choosing the optimal time for her commute, to the ones made by others in distant offices that nevertheless impact her life. Such predictions include the items that companies like Amazon think will interest her based on her previous internet searches, as well as how forecasts of unrest in the North African country of Tunisia, unrest that later spread to neighboring Libya, might lead her air force navigator husband to risk his life “dodging antiaircraft fire over Tripoli” (10). The authors argue that predictions and forecasts enter almost every sphere of human life, meaning that it is worth investing in their accuracy.
To clarify the text’s importance and emphasize the subject matter’s urgency, these early chapters paint a picture of contemporary culture, specifically as it involves potentially incompetent authority figures in the world of forecasting. The authors show that while most people are confident enough forecasting mundane events such as road traffic, when it comes to predictions about global events, they feel out of their depth—especially in areas where they do not have expertise—and therefore delegate forecasts to those with impressive credentials. While Tetlock’s research has shown that such experts typically have about a 50/50 chance of being right, this lack of accuracy has done little to damage their careers. The authors argue that this is because experts’ charisma and eloquence distract from their inaccuracy and because the general public is happy to be entertained and swayed by interesting experts. Superforecasting, then, is not simply an effort in scientific research—it is a project of societal transformation. The authors seek to educate readers and, ultimately, to oust any ersatz forecasters from their positions of cultural authority.
Here, the theme of Forecasting: Between Science and Art becomes relevant, as the authors make a case for ending what they view as the charlatanism that sees most forecasters treating their predictions as individual works of art to be treasured. Tetlock and Gardner call for replacing current forecasting practice with a more scientific process akin to the evidence-based medical testing revolution of the 20th century. The authors argue for a more process-led approach that instead lends authority to people who are genuinely capable of making the best predictions (e.g., superforecaster Bill Flack rather than media legend Tom Friedman). Further, by insisting that forecasting is a form of science, the authors challenge those who maintain that forecasting is too unpredictable and uncontrolled to be a worthwhile investment. If forecasting were viewed as a science, forecasters would be encouraged to remain humble, knowing that they can rely on tested methods and that there is always room for improvement.
Forecasting and the Crucial Ingredient of Doubt also emerges as a theme in the first two chapters. While the authors argue that forecasting is a scientific work in progress, they also advise that forecasters adopt a scientist’s mindset of interrogating their own suppositions and welcoming evidence that contradicts their beliefs. This approach helps prevent the System-1-thinking error of getting carried away with initial assumptions.
Ironically, though the book as a whole highlights individuals’ capacity for superior judgment, it also shows that mistakes are part of the human experience. The emphasis thus falls on scientific processes of testing rather than on individual expertise, encouraging readers to believe that they, too, can learn to make better predictions by developing specific skills and mindsets.
Business & Economics
View Collection
Canadian Literature
View Collection
Common Reads: Freshman Year Reading
View Collection
New York Times Best Sellers
View Collection
Politics & Government
View Collection
Psychology
View Collection
Science & Nature
View Collection
Self-Help Books
View Collection
Teams & Gangs
View Collection
The Best of "Best Book" Lists
View Collection