
Steven D. Levitt, Stephen J. Dubner

Think Like a Freak

Nonfiction | Book | Adult | Published in 2014


Chapters 6-8: Chapter Summaries & Analyses

Chapter 6 Summary: “Like Giving Candy to a Baby”

Levitt and Dubner discuss the power of incentives, highlighting that they are important components of thinking like a Freak. The key is finding the right incentives—that is, those that actually shape people’s behavior as intended. They begin with an anecdote about one of them potty training his young daughter. He devised a scheme to reward her with M&Ms when she used the toilet correctly, and she soon learned the routine perfectly to earn her reward. Before long, however, she also learned to game the system: She would pee only a little while holding the rest, collect her M&Ms, then return to pee a little more and demand more candy. Thus, the author learned a lesson about unintended consequences.

An obvious and common incentive is money. Levitt and Dubner even tie it to the rise in obesity in the United States over the last several decades: Highly processed, fattening food has gotten cheaper, so people now eat more of it. To demonstrate how money can deter even behavior that social norms strongly encourage, they describe an incident in China in which the driver of a van struck a young child in the street. The driver paused, then ran over the child again and drove on; the child died. Because China lacks Good Samaritan laws (which legally protect people who make an effort to help others), he would have been responsible for potentially expensive care for an injured child, but he knew that if she died, his financial responsibility would be much less. (He later turned himself in and confessed.) This is an example of a “perverse financial incentive” that discourages moral behavior (108).

Next the authors examine other kinds of incentives. They emphasize the need to discover what actually makes people act, not what they say motivates them; the two are often not the same. In a California study, a psychologist conducted a phone survey about reasons to conserve energy. Given a list of four reasons, respondents ranked financial, moral, and social benefits as important; the least important reason, they said, was that other people were conserving. The next part of the study involved placing flyers at houses exhorting people to save energy. Four versions of the flyer were distributed randomly, each focusing on one of the four reasons from the phone survey. The researchers then examined utility records showing the energy use of those houses. The group that lowered its energy use the most was the one given flyers about joining neighbors in conserving energy. People often say one thing and do another.

Although it’s natural for people to rely on moral persuasion, Levitt and Dubner conclude that “it’s important to figure out which incentives will actually work, not just what your moral compass tells you should work” (115). Too often, messages that rely on moral incentives actually increase the rate of the undesired behavior. This shows that people are complex and only a clear-eyed understanding of their psychological makeup can lead to changes in behavior.

The story of Brian Mullaney illustrates this well. He worked for an organization called Smile Train, which trained and equipped doctors in developing countries to perform surgery on children with cleft lips. The simple procedure was done routinely in the West but less so in developing countries, despite the profound effect it could have on someone’s life. Part of the organization’s success came from Mullaney’s innovation in fundraising. Instead of repeatedly sending appeals, he gave potential donors the option to give once and never be contacted again; donors chose how often they were contacted. With this strategy, the rate of first-time donations doubled, as did the average amount per donor. Nor did Smile Train lose long-term donors: Only a third of those contacted opted out of future mailings.

The chapter ends with an example of how incentives can go wrong—in a much more significant way than the M&M anecdote. To reduce air pollution the government of Mexico City instituted a limitation on driving: Based on license plate numbers, drivers would have to leave their car at home one day a week. People skirted this by buying second cars to use on the day their primary car was garaged. Even worse, since the extra car was a financial burden, they often opted for older cars that were not only cheaper but also less fuel efficient.

Chapter 7 Summary: “What Do King Solomon and David Lee Roth Have in Common?”

This chapter introduces how Freaks use game theory to induce people to identify themselves. The authors give this a name in the form of a proverb: “Teach Your Garden to Weed Itself” (143). They use King Solomon, king of Israel in the Old Testament, and David Lee Roth, lead singer of the rock group Van Halen, as examples to show how game theory works. The story of King Solomon is well known: when two women claimed to be a baby’s mother, he ordered that the baby be cut in half with a sword and one half given to each woman. One woman objected, saying the other should get the child. Solomon then gave her the baby on the grounds that only the real mother would save the child’s life, even if that meant giving it up.

As for David Lee Roth, the authors describe the very detailed contract Van Halen had with each venue they played. Their show was elaborate and required a great deal of structural support and electric power to pull off. Since some of the venues were outdated, the band feared not getting what they needed to perform their act safely. Roth’s answer was an M&M clause: When the band checked in, there was to be a buffet of junk food waiting for them, including M&Ms—except for brown ones, which were to be removed. Most people thought this was just rock star hubris, but it was actually a clever scheme. If the band arrived and found brown M&Ms, they knew the people running the venue hadn’t paid attention to the details, likely including more important things like power requirements. It told them they needed to do a thorough check themselves to ensure everything was prepared correctly.

Both King Solomon and Van Halen had to discover the guilty party by enticing them to identify themselves inadvertently. The online shoe company Zappos created a version of this for hiring employees. In this case, the party to identify wasn’t so much guilty as simply unlikely to last in the job. When candidates completed the hiring process and a couple of weeks on the job, they were suddenly offered a chance to quit, with full pay for the training period plus one month’s salary. Though it may sound like a waste of money, the offer was intentionally somewhat enticing so as to be self-selecting. The company figured it was a small price to pay to weed out employees who hadn’t completely bought into its mission. An employee who would rather take the easy money and leave would likely not be loyal and would end up costing Zappos more money in the long run.

Levitt and Dubner include another example from their book SuperFreakonomics. There, they describe work they did with a British bank to create an algorithm that would identify likely terrorists from the bank’s customer data. The algorithm had to be nearly foolproof to avoid false positives. One thing stood out from the data: Terrorists virtually never bought life insurance, because if they died in some kind of suicide attack, the policy would not pay out. The scheme with the bank produced a certain number of names, which were handed over to the authorities.

That was the end of the authors’ involvement, but they described this whole dragnet in their book and wrote that one certain way terrorists could avoid detection was by purchasing life insurance from their bank. This resulted in a lot of angry mail and interview questions about revealing this to would-be criminals. However, their book was actually meant to set a trap—to weed the garden, so to speak. Though banks offer life insurance, virtually no one buys it there. Therefore, anyone who did would be identifying themselves as a likely terrorist.

Chapter 8 Summary: “How to Persuade People Who Don’t Want to Be Persuaded”

The topic of this chapter is persuasion—specifically, how difficult it is. The authors review some reasons for this and then give advice for what to try “if you are hell-bent on persuading someone” (167). First, you should know what you’re up against. Both people who are well educated on a topic and people who are uninformed resist changing their opinions. One study asked subjects questions pertaining to math and science, then asked their opinion about climate change. Contrary to what the researchers expected, those who did better on the math and science questions actually saw less of a risk in climate change. The reason might be that educated people are used to thinking they are right and are thus less likely to change their mind about something. However, there is not much evidence to suggest less-informed people are malleable either.

In trying to persuade others, people often focus too much on having foolproof logic. More often than not, this has no bearing on people’s opinions; they rely more on ideology and following others (“herd mentality,” as the California study about energy conservation showed in Chapter 6). Moreover, no argument is perfect, and people too often pretend theirs is. Refusing to admit to any shortcomings only makes others more likely to remain unconvinced. Here the authors use the example of driverless cars. Some have bought into the notion that this technology is a panacea for car-related problems. But playing devil’s advocate, Levitt and Dubner pose potential problems that a debater would be wise to concede. For example, driverless cars might lead to more public drunkenness, since people would no longer have to watch their alcohol intake to drive responsibly. Because we don’t know the future, it’s best to be clear-eyed about the undesirable possibilities a policy might cause, and to admit them.

By the same token, your opponent’s ideas should be taken seriously and acknowledged. Many people not only forget this but let their emotions get the better of them in a debate and say something negative or insulting. This certainly closes the door to persuasion, as the authors note research showing that “bad is stronger than good” (180). In other words, people remember negative things more than they do positive things.

The most powerful method of persuasion is using stories. The authors distinguish stories from anecdotes, which are one-off events of limited value. Stories, on the other hand, use data and cover a certain time period to show either change or stasis. They make relationships between events clear to illustrate causes. By capturing people’s attention, stories can make even the driest subjects memorable. Steve Epstein, for instance, is a Defense Department lawyer who was tasked with reviewing rules for employees and who learned that simply reciting the rules was useless. Instead, he wrote a book of stories for each category based on events that actually happened. Because it was entertaining, it got the point across better.

Chapters 6-8 Analysis

While Chapters 3-5 discuss how to analyze problems, Chapters 6-8 examine how thinking like a Freak can be applied to create policy or solve problems. In a way, it’s the “other side” of research: how to use what you have learned. In that sense, these chapters develop the theme of shaping behavior. Front and center is understanding and using incentives properly, covered in Chapter 6. People should focus on which incentives actually work, not what people say will work or what you think should work. A good example of this is Brian Mullaney’s risky new approach to fundraising for the organization Smile Train. Everyone thought people donated to such causes mostly out of moral incentive, and everyone thought it would be crazy to willingly give up a possible long-term donor. However, the authors explain that Mullaney’s approach worked because it was novel. Donors felt that Smile Train was being more candid, and they gained control over their interaction with the organization. Different incentives were at play here, and they turned out to work better than those long assumed to work best.

Just as important, Mullaney “changed the frame of the relationship” between the donors and the organization (125). As the authors explain, the relationship went from adversarial (the organization chasing after—almost hounding—donors) to cooperative (donors could set the terms). Changing the frame like this can reset a relationship that’s fallen into a predictable rut, often resulting in a big change. Another instance of this approach is Richard Nixon’s China policy in the early 1970s, which shifted from a rigid Cold War mentality to the beginnings of diplomacy.

Incentives fail when people don’t choose the right ones for a given situation; the wrong incentives simply fail to motivate. Too often, those implementing incentives use what works for themselves, not for the group they aim to influence. It’s also common to assume that people will behave the same way in the future as they do today, but changing conditions and rules change behavior too. Moreover, expecting people to do the “right” thing rarely works, since people usually act in self-interest, so you shouldn’t rely on moral incentives. Finally, there will always be individuals who try to game the system; you can’t eliminate this, only minimize it.
