jrsinclair / James Sinclair

I live in Australia and like to write about JavaScript and other things.

There is one person in jrsinclair’s collective.

Huffduffed (856)

  1. AI, Deep Learning, and Machine Learning: A Primer

    "One person, in a literal garage, building a self-driving car." That happened in 2015. Now to put that fact in context, compare this to 2004, when DARPA sponsored the very first driverless car Grand Challenge. Of the 20 entries they received then, the winning entry went 7.2 miles; in 2007, in the Urban Challenge, the winning entries went 60 miles under city-like constraints.

    Things are clearly progressing rapidly when it comes to machine intelligence. But how did we get here, after not one but multiple "A.I. winters"? What’s the breakthrough? And why is Silicon Valley buzzing about artificial intelligence again?

    From types of machine intelligence to a tour of algorithms, a16z Deal and Research team head Frank Chen walks us through the basics (and beyond) of AI and deep learning in this slide presentation.


    —Huffduffed by jrsinclair

  2. New Freakonomics Radio Podcast: The Folly of Prediction - Freakonomics

    Fact: Human beings love to predict the future.

    Fact: Human beings are not very good at predicting the future.

    Fact: Because the incentives to predict are quite imperfect — bad predictions are rarely punished — this situation is unlikely to change.

    But wouldn’t it be nice if it did?

    That is the gist of our latest Freakonomics Radio podcast, called “The Folly of Prediction.” This is the fourth of five hour-long podcasts we’ve been releasing recently. Some of you may have heard them on public-radio stations around the country, but now all the hours are being fed into our podcast stream. (You can download/subscribe at iTunes, get the RSS feed, listen live via the media player above, or read the transcript here.)

    We explore quite a few realms of prediction — most unsuccessful, some more so — and you’ll hear from quite a variety of people, probably more than in any other show. Among them:

    • Vlad Mixich, a reporter in Bucharest, who describes how the Romanian “witch” industry (fortune-tellers, really) has been under attack — including a proposal to fine and imprison witches if their predictions turn out to be false.

    • Steve Levitt (you’ve maybe heard of him?) explains why bad predictions abound:

    LEVITT: So, most predictions we remember are ones which were fabulously, wildly unexpected and then came true. Now, the person who makes that prediction has a strong incentive to remind everyone that they made that crazy prediction which came true. If you look at all the people, the economists, who talked about the financial crisis ahead of time, those guys harp on it constantly. “I was right, I was right, I was right.” But if you’re wrong, there’s no person on the other side of the transaction who draws any real benefit from embarrassing you by bringing up the bad prediction over and over. So there’s nobody who has a strong incentive, usually, to go back and say, Here’s the list of the 118 predictions that were false. … And without any sort of market mechanism or incentive for keeping the prediction makers honest, there’s lots of incentive to go out and to make these wild predictions.

    Phil Tetlock found that expert predictors aren’t very expert at all.

    • Philip Tetlock, a psychology professor at Penn and author of Expert Political Judgment (here’s some info on Tetlock’s latest forecasting project) provides a strong empirical argument for just how bad we are at predicting. He conducted a long-running experiment that asked nearly 300 political experts to make a variety of forecasts about dozens of countries around the world. After tracking the accuracy of about 80,000 predictions over the course of 20 years, Tetlock found …

    TETLOCK: That experts thought they knew more than they knew. That there was a systematic gap between subjective probabilities that experts were assigning to possible futures and the objective likelihoods of those futures materializing …

    With respect to how they did relative to, say, a baseline group of Berkeley undergraduates making predictions, they did somewhat better than that. How did they do relative to purely random guessing strategy? Well, they did a little bit better than that, but not as much as you might hope …

    Christina Fang, whose research offers evidence that the people who correctly predict extreme outcomes are, on average, bad predictors.

    • Christina Fang, a professor of management at NYU’s Stern business school, also gives us a good empirical take on predictive failure. She wanted to know about the people who make bold economic predictions that carry price tags in the many millions or even billions of dollars. Along with co-author Jerker Denrell, Fang gathered data from the Wall Street Journal’s Survey of Economic Forecasts to measure the success of these influential financial experts. (Their resulting paper is called “Predicting the Next Big Thing: Success as a Signal of Poor Judgement.”) The takeaway: the big voices you hear making bold predictions are less trustworthy than average:

    FANG: In the Wall Street Journal survey, if you look at the extreme outcomes, either extremely bad outcomes or extremely good outcomes, you see that those people who correctly predicted either extremely good or extremely bad outcomes, they’re likely to have overall lower level of accuracy. In other words, they’re doing poorer in general. … Our research suggests that for someone who has successfully predicted those events, we are going to predict that they are not likely to repeat their success very often. In other words, their overall capability is likely to be not as impressive as their apparent success seems to be.

    • Hayes Davenport, a Freakonomics researcher, takes a look at the predictive prowess of NFL pundits. (Short answer: not so good.)

    How hard is it to accurately forecast something as simple as corn yield? (Photo by Tim Boyle/Getty Images)

    • Joe Prusacki directs the statistics division at the USDA’s National Agricultural Statistics Service, which means he helps make crop forecasts (read a primer here). He talks us through the process, and how bad forecasts inevitably produce some nasty e-mails:

    PRUSACKI: Okay, the first one is: “Thanks a lot for collapsing the grain market today with your stupid” — and the word is three letters, begins with an “a” and then it has two dollar signs — “USDA report” … “As bad as the stench of dead bodies in Haiti must be, it can’t even compare to the foul stench of corruption emanating from our federal government in Washington, D.C.”

    Nassim Taleb asks: Are you the butcher, or are you the turkey?

    • Our old friend Nassim Taleb (author of Fooled By Randomness and The Black Swan) shares a bit of his substantial wisdom as we ponder the fact that our need for prediction (and our disappointment when it fails) grows ever stronger as the world becomes more rational and routinized.


    • Tim Westergren, a co-founder of Pandora (whom you may remember from this podcast about customized education), talks through Pandora’s ability to predict what kind of music people want to hear based on what we already know we like:

    WESTERGREN: I wouldn’t make the claim that Pandora can map your emotional persona. And I also don’t think frankly that Pandora can predict a hit because I think it is very hard, it’s a bit of a magic, that’s what makes music so fantastic. So, I think that we know our limitations, but within those limitations I think that we make it much, much more likely that you’re going to find that song that just really touches you.

    Robin Hanson, an economist at George Mason University, argues that prediction markets are the way to go.


    • Robin Hanson, an economist at George Mason University and an avowed advocate of prediction markets, argues that such markets address the pesky incentive problems of the old-time prediction industry:


    HANSON: So a prediction market gives people an incentive, a clear personal incentive, to be right and not wrong. Equally important, it gives people an incentive to shut up when they don’t know, which is often a problem with many of our other institutions. So if you as a reporter call up almost any academic and ask them vaguely related questions, they’ll typically try to answer them, just because they want to be heard. But in a prediction market most people don’t speak up. So in most of these prediction markets what we want is the few people who know the best to speak up and everybody else to shut up.

    I hope you enjoy the hour. It was a most interesting exploration from our end. Thanks to the many, many folks who lent a hand and to our roster of truly excellent guests. See you on the radio.

    —Huffduffed by jrsinclair

  3. How to Be Less Terrible at Predicting the Future - Freakonomics

    How did typical Americans with no foreign-policy expertise come to make remarkably accurate predictions for U.S. intelligence officials? Not with Magic 8 balls. (photo: frankieleon)

    Our latest Freakonomics Radio episode is called “How to Be Less Terrible at Predicting the Future.” (You can subscribe to the podcast at iTunes or elsewhere, get the RSS feed, or listen via the media player above.)

    Experts and pundits are notoriously bad at forecasting, in part because they aren’t punished for bad predictions. Also, they tend to be deeply unscientific. The psychologist Philip Tetlock is finally turning prediction into a science — and now even you could become a superforecaster.

    Below is a transcript of the episode, modified for your reading pleasure. For more information on the people and ideas in the episode, see the links at the bottom of this post. And you’ll find credits for the music in the episode noted within the transcript.

    *     *     *

    [MUSIC: Pat Andrews, “Whoa”]

    There’s a website called Fantasy Football Nerd. It aggregates predictions from roughly 40 NFL pundits to produce what it calls “the industry’s most accurate consensus rankings.” Now, how accurate is the consensus? Let me give you an example: Earlier this season, the Carolina Panthers were playing the Seattle Seahawks. Only two of the pundits picked Carolina to win; 36 picked Seattle. And you could see why. Seattle has been one of the best teams in the league for the past several seasons; they won the Super Bowl two years ago, nearly repeated last year. They’d be playing Carolina in Seattle, where the home crowd is famously — almost punishingly — supportive. So even though Seattle had won only two games this season against three losses, and even though Carolina was an undefeated 4-0 at this point, the experts liked Seattle. They liked their pedigree. But Carolina won the game, 27-23.

    MICK MIXON (courtesy of Carolina Panthers Radio Network): It’s the hook and ladder. Lockette has it. Lockette is being tackled. Flips it. Ball’s loose. Recovered by Seattle at the 40. Carolina has won the football game! What an unbelievable, validating, respect-taking road win for the Carolina Panthers!

    Soon afterward, Carolina quarterback Cam Newton faced the media.

    REPORTER: Cam, before the Seattle game, a lot of the national media was down on this team. After you guys won that game, now a lot of the national media says, “This is one of the best teams we’ve seen this year.” Do you ever find it comical, the way that a lot of these people think that “hey, this team is on, this team is not good”?

    CAM NEWTON: I find all media comical at times. Because I think in your guys’ profession, you can easily take back what you say, and you don’t get — there’s no danger, you know, when somebody says it. You know, if there was a pay cut or if there was an incentive, if picking teams each and every week, you may get a raise, I guarantee people would be watching what they say then.

    So, first of all, let’s give Cam Newton a medal. Because he just articulated, in about 10 seconds, a big problem that experts in many fields — along with TV producers and opinion-page editors and government officials — either fail to understand or acknowledge, which is this: when you don’t have skin in the game, and you aren’t held accountable for your predictions, you can say pretty much whatever you want.

    JONATHAN BALES: I completely agree with Cam on that.

    [MUSIC: 40 Watt Hype, “Three and Out” (from Grand Unification Theory)]

    That’s Jonathan Bales.

    BALES: A lot of the beat writers in the NFL or across sports, they just can say what they want and there is no incentive for them to be correct. And I do think that for the most part they are very bad at making predictions.

    Bales can’t afford to be bad, because he plays fantasy sports for a living.

    BALES: People who have something to lose from their opinions or the predictions that they make, are incentivized to make sure that they’re right.

    Bales is 30 years old. He lives in Philadelphia. He’s written a series of books called Fantasy Football for Smart People. In college, he was a philosophy major, but he also loved to analyze sports.

    BALES: Yeah, I was really interested in in-game strategy.  So, why are coaches doing all these things that, even anecdotally, they just seem very wrong.

    Many of the best fantasy-sports players, he says, have a similar mindset.

    BALES: We question things, and we want to improve, and we ask “why?” a lot.  Like, “Why am I making lineups this way? Is this truly the best way?”  Just always questioning everything that we do, taking a very, very data-driven approach to fantasy and adapting and evolving.

    Adapting and evolving. Using data to make better decisions. Challenging the conventional wisdom. That all doesn’t sound so hard, does it? Wouldn’t you think that all experts everywhere would do the same? Or, at the very least, wouldn’t you think that we would pay better attention to all the bad predictions out there — the political and economic and even sports predictions — and then do something about it? Why isn’t that happening?

    PHILIP TETLOCK: That is indeed the $64,000 question: Why very smart people have been content to have so little accountability for accuracy in forecasting.

    Today on Freakonomics Radio: let’s fix that! And while we’re at it, why don’t we all learn to become not just good forecasters but … superforecasters!

    [MUSIC: Tim Besamusca, “Wars Between The Stars Theme”]

    *     *     *

    [MUSIC: Sarah Schachner, “AM Stinger” ]

    If you’re a longtime listener of this program, you’ve met Philip Tetlock before.

    TETLOCK: I’m a professor at the University of Pennsylvania, cross-appointed in Wharton and in the School of Arts and Sciences.

    We spoke with Tetlock years ago, for an episode called “The Folly of Prediction.”

    TETLOCK: I think the most important takeaway would be that the experts think they know more than they do; they were systematically overconfident.

    Which is to say that a lot of the experts that we encounter, in the media and elsewhere, aren’t very good at making forecasts. Not much better, in fact, than a monkey with a dart board.

    TETLOCK: Oh, the monkey with a dartboard comparison — that comes back to haunt me all the time.

    [MUSIC: Danny Massure, “Mama Didn’t Lie” (from What It Is)]

    Back then, I asked Tetlock to name the distinguishing characteristic of a bad, and overconfident, forecaster.

    TETLOCK: Dogmatism.

    DUBNER: It can be summed up that easily?

    TETLOCK: I think so. I think an unwillingness to change one’s mind in a reasonably timely way in response to new evidence. A tendency, when asked to explain one’s predictions, to generate only reasons that favor your preferred prediction and not to generate reasons opposed to it.

    Tetlock knows this because he conducted a remarkable, long-term empirical study, focused on geopolitical predictions, with nearly 300 participants.

    TETLOCK: They were very sophisticated political observers. Virtually all of them had some postgraduate education. Roughly two-thirds of them had Ph.D.s. They were largely political scientists, but there were some economists and a  variety of other professionals as well.

    This study became the basis of a book that Tetlock titled Expert Political Judgment. It was a sly title because the experts’ predictions often weren’t very expert. Which, to Philip Tetlock, is a big problem. Because forecasting is everywhere.

    TETLOCK: People often don’t recognize how pervasive forecasting is in their lives — that they’re doing forecasting every time they make a decision about whether to take a job or whom to marry or whether to take a mortgage or move to another city. We make those decisions based on implicit or explicit expectations about how the future will unfold.  We spend a lot of money on these forecasts. We base important decisions on these forecasts. And we very rarely think about measuring the accuracy of the forecasts.

    Some of us may have been satisfied to merely identify and describe this problem, as Tetlock did. Some of us might have gone a bit further and raised our voices against the problem. But Tetlock went even further than that. He put together a team to participate in one of the biggest forecasting tournaments ever conducted. It was run by a government agency called IARPA.

    TETLOCK: IARPA is Intelligence Advanced Research Projects Activity. And it is modeled somewhat on DARPA. It aspires to fund cutting-edge research that will produce surprising results that have the potential to revolutionize intelligence analysis.

    And Tetlock was at the center of this cutting-edge research. He tells the story in a new book, called Superforecasting, co-authored by the journalist Dan Gardner. The book is both a how-to, if at a rather high level, and a cautionary tale, about all the flaws that lead so many people to make so many bad forecasts: dogmatism, as we mentioned earlier; a lack of understanding of probability; and a reliance on what Tetlock calls “vague verbiage.”

    DUBNER: In the book you mention a couple cases from history where the intelligence community did not do so well. The Bay of Pigs situation with JFK and then later the belief that Saddam Hussein had weapons of mass destruction. In both instances you write that it wasn’t about bad intelligence, it was about how the intelligence was communicated to government officials and to the public. So, what happened in those cases?

    TETLOCK: Well, in the context of the Bay of Pigs, the Kennedy administration had just come into power and they were considering whether to support an effort, by Cuban exiles and CIA operatives and others, to launch an invasion to depose Castro in April ’61. And the Kennedy administration asked the Joint Chiefs of Staff to do an independent review of the plan and offer an assessment of how likely this plan was to succeed. And I believe the vague-verbiage phrase that the Joint Chiefs analysts used was they thought there was a “fair chance of success.” And it was later discovered that by “fair chance of success” they meant about one in three. But the Kennedy administration did not interpret “fair chance” as being one in three. They thought it was considerably higher. So, it’s an interesting question of whether they would have been willing to support that invasion if they thought the probability were as low as one in three.

    DUBNER: As a psychologist, though, you know a lot about how we are predisposed toward interpreting data in a way that confirms our bias or our priors or the decision we want to make, right? So, if I am inclined toward action and I see the words “fair chance of success,” even if attached to that is the probability of 33 percent, I might still interpret it as a move to go forward, yes?

    TETLOCK: Absolutely. That’s one of the ways in which vague-verbiage forecasts can be so mischievous. It’s very easy to hear in them what we want to hear. Whereas I think there’s less room for distortion if you say “one-in-three” or “two-in-three” chance. It’s a big difference between a one in three chance of success and a two in three chance of success.

    DUBNER: A difference of one, if I’m doing my math properly.

    TETLOCK: Right.

    DUBNER: Now, the Bay of Pigs didn’t really change much in the intelligence community, you write. Surprisingly perhaps. But the WMD issue with Saddam Hussein in Iraq was an embarrassment to the point that the government wanted to do something about it. Is that about right, that IARPA was founded in part out of response to that?

    TETLOCK: I’m not sure I understand all of the internal decisions inside the intelligence community but I think that the false-positive judgment on weapons of mass destruction in Iraq did cause a lot of soul-searching inside the U.S. intelligence community and made people more receptive to the creation of something like IARPA, yes.

    IARPA was formed in 2006. One of its major goals is – and I quote — “anticipating surprise.”

    [MUSIC: Dot Dot Dot, “Standing On Top of the World”]

    TETLOCK: I think that’s why they decided to fund these forecasting tournaments.

    These forecasting tournaments would deal with real issues.

    TETLOCK: They all had to be relevant to national security, according to the intelligence community.  

    DUBNER: For instance?

    TETLOCK: So, whether Greece would leave the Eurozone was considered to be an event of national-security relevance.

    Some other questions:

    MARY SIMPSON: Whether the Muslim Brotherhood was going to win the elections in Egypt.

    BILL FLACK: Would the president of Austria remain in office?

    These are a couple of the forecasters on Tetlock’s team.

    SIMPSON: Will Russia’s credit rating decline in the next eight weeks?

    FLACK: There was the notorious China Sea question about whether there would be a violent confrontation around the South China Sea.

    TETLOCK: We were one of five university-based research programs that were competing. And the goal was to generate the most accurate possible probability estimates.

    DUBNER: What was IARPA trying to accomplish? Were they trying to really crowdsource intelligence? Were they trying to figure out how government intelligence could improve itself? Or what?

    TETLOCK: Well, I think crowdsourcing and improvement of probabilistic accuracy they saw as deeply complementary goals.


    TETLOCK: They set up the performance objectives in 2011, very much in the wisdom-of-the-crowd tradition. The idea being that the average forecast derived from a group of forecasters is typically more accurate than the majority, often the vast majority, of the forecasters from whom the average was derived. So they wanted to see whether or not we could do 20 percent better than the average, 30 percent, 40 percent, 50 percent as the tournament went on.
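    The averaging effect Tetlock describes is easy to demonstrate with a small simulation. Everything below is hypothetical (the number of forecasters, the noise level, the events themselves); accuracy is measured with the standard Brier score, where lower is better:

    ```python
    import random

    random.seed(42)

    def brier(forecasts, outcomes):
        """Mean squared error between probability forecasts and 0/1 outcomes (lower is better)."""
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    # Hypothetical setup: 100 events, each with a true probability,
    # and 50 forecasters who each see that truth through independent noise.
    true_probs = [random.random() for _ in range(100)]
    outcomes = [1 if random.random() < p else 0 for p in true_probs]

    def noisy_forecast(p):
        # a noisy read of the true probability, clamped into [0, 1]
        return min(1.0, max(0.0, p + random.gauss(0, 0.25)))

    forecasters = [[noisy_forecast(p) for p in true_probs] for _ in range(50)]

    individual_scores = [brier(f, outcomes) for f in forecasters]

    # The "crowd": the unweighted average forecast for each event.
    crowd_avg = [sum(f[i] for f in forecasters) / len(forecasters)
                 for i in range(len(true_probs))]
    crowd_score = brier(crowd_avg, outcomes)

    beaten = sum(s > crowd_score for s in individual_scores)
    print(f"crowd Brier score: {crowd_score:.3f}")
    print(f"forecasters beaten by the crowd average: {beaten} of {len(forecasters)}")
    ```

    Because each forecaster's noise is independent and roughly zero-mean, averaging cancels much of it out, so the crowd forecast typically beats nearly every individual — which is exactly why IARPA set "beat the unweighted crowd average by X percent" as the benchmark.
    
    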

    DUBNER: OK, so what did you name your team?

    TETLOCK: The Good Judgment Project.

    It was an optimistic name, if nothing else. The team was put together by Tetlock; his research and life partner Barbara Mellers, who also teaches at Wharton; and Don Moore, from the Haas business school at Berkeley. But here’s the thing: you didn’t have to be an academic, or an expert of any kind, to join the Good Judgment Project or any of the other teams in the IARPA tournament. Anyone could sign up online – and tens of thousands of people did, eager to make forecasts about global events.

    TETLOCK: Each of the research programs had its own distinctive philosophy and approach to generating accurate probability judgments. I think we were probably the most eclectic and opportunistic of the research programs and I think that helped. And…

    DUBNER: Eclectic and opportunistic how? What do you mean by that?

    TETLOCK: Well, I think we were ready to roam across disciplines fairly freely. We just didn’t care that much about whether we offended particular academic constituencies by exploring particular hypotheses. So we got a lot of pushback on a lot of the things we considered. There was a big debate, for example, about whether it would be a good idea to have forecasters work in teams. And we didn’t really know what the right answer was. There were some good arguments for using teams. There were some good arguments against using teams. But what we did is we ran an experiment. And it turned out that using teams, in this sort of context, helped quite a bit. There was also a debate about whether it would be feasible to give people training to help reduce some common psychological biases in human cognition and again we didn’t know for sure what the answer would be but we ran experiments and we found out that it was possible to get a surprising degree of improvement by training people, giving people tutorials that warned them against particular biases and offered them some reasoning strategies for improving their accuracy. So, we did a lot of things that some psychologists or other people in the social sciences might have disagreed with, and we went with the experimental results.

    [MUSIC: Nicole Reynolds, “When We Meet Again” (from This Arduous Alchemy)]

    DUBNER: Give me now some summary stats on the Good Judgment Project’s performance overall. First of all, how long did the tournament end up lasting, Phil?

    TETLOCK: The tournament lasted for four years.

    DUBNER: OK. How many questions did IARPA pose?  

    TETLOCK: Roughly 500 questions were posed between 2011 and 2015, inclusive.

    DUBNER: And your team, the Good Judgment Project, gathered approximately how many individual judgments about the future?

    TETLOCK: Let’s see: thousands of forecasters, hundreds of questions, forecasters often making more than one judgment per question because they have opportunities to update their beliefs. I believe it was in excess of one million.

    DUBNER: OK. And how’d you do?

    TETLOCK: Well, we managed to beat IARPA’s performance objectives in the first year. IARPA’s fourth-year objective was doing 50 percent better than the unweighted average of the crowd, and our best forecasters and best algorithms were out-performing that even after year one. And they continued to out-perform in years two, three and four. And the Good Judgment Project was the only project that consistently outperformed IARPA’s year-one and year-two objectives, so IARPA decided to merge teams, essentially. So the Good Judgment Project was able to absorb some really great talent from the other forecasting teams. And each year, at the end of the year, we creamed off the top two percent of forecasters and we called them superforecasters. So the top two percent of roughly 3,000 forecasters would be about 60 people or so. And the next year and the next year and on it would go.

    DUBNER: So, the way you’re describing the success of the Good Judgment Project now in your kind of measured academic tone of voice sounds pretty measured and academic. But let’s be real, you kicked butt, yes?

    TETLOCK: Yep. Fair enough.  

    DUBNER: And what did IARPA do, or how did they respond to the success of your team —in addition to, I assume, “Congratulations,” did they want to, I don’t know, hire a bunch of your superforecasters, or you?

    TETLOCK: I have heard people in the intelligence community express an interest in potentially hiring some superforecasters. I don’t know whether they have or not. Our superforecasters tend to be gainfully employed. But some of them might have been interested in that.

    *     *     *

    [MUSIC: Justin Dodge, “Dextrous” ]

    After several years of overseeing the Good Judgment Project — and, now, its commercial spinoff, Good Judgment Inc. — Philip Tetlock has come to two main conclusions. The first one: “foresight is real.” That’s how he puts it in his book, Superforecasting. The other conclusion has to do with what sets any one forecaster above the crowd. “It’s not really who they are,” Tetlock writes. “It is what they do. Foresight isn’t a mysterious gift bestowed at birth. It is the product of particular ways of thinking, of gathering information, of updating beliefs. These habits of thought can be learned and cultivated by any intelligent, thoughtful, determined person.”

    DUBNER: OK, so you ran this amazing competition, a long series of experiments, in which you identified these people who were better than the rest at predicting, in this case, mostly geo-political events. And what we really want to know is – again, as nice as that is, congratulations Dr. Tetlock, etc. etc. — we want to know what are the characteristics of the superforecasters. Because we all want to become a little bit more of one. So, would you mind walking us through some of these characteristics, Phil? Let’s start with — what about their philosophical outlook? A superforecaster tends to be what, philosophically would you say?

    [MUSIC: Tim Besamusca, “Wars Between The Stars Theme”]

    TETLOCK: They’re less likely than ordinary people, regular mortals, to believe in fate, or destiny. And they’re more likely to believe in chance.  You roll enough dice enough times and improbable coincidences will occur. Our lives are nothing but a quite improbable series of coincidences. Many people find that a somewhat demoralizing philosophy of life. They prefer to think that their lives have deeper meaning. They don’t like to think that the person to whom they’re married, they could have just as easily have wound up happy with 237,000 other people.

    DUBNER: What about their level of, let’s say, confidence or even arrogance. Is a superforecaster arrogant?

    TETLOCK: I think they’re often proud of what they’ve accomplished, but I think they’re really very humble about their judgments. They know that they’re just often very close to forecasting disaster. They need to be very careful. I think it’s very difficult to remain a superforecaster for very long in an arrogant state of mind.

    DUBNER: So would you say that humility is a characteristic that contributes to superforecasting then or do you think it just kind of travels along with it?

    TETLOCK: I think humility is an integral part of being a superforecaster, but that doesn’t mean superforecasters are chickens who hang around the maybe zone and never say anything more than minor shades of maybe. You don’t win a forecasting tournament by saying maybe all the time. You win a forecasting tournament by taking well-considered bets.  

    DUBNER: OK, so let’s talk about now their abilities and thinking styles. A superforecaster will tend to think in what styles?

    TETLOCK: They tend to be more actively open-minded. They tend to treat their beliefs not as sacred possessions to be guarded but rather as testable hypotheses to be discarded when the evidence mounts against them. That’s another way in which they differ from many people. They try not to have too many ideological sacred cows. They’re willing to move fairly quickly in response to changing circumstances.

    DUBNER: What about numeracy? Background in math and/or science and/or engineering? Is that helpful, important?

    TETLOCK: They’re not — there are a few mathematicians and statisticians among the superforecasters, but I wouldn’t say that most superforecasters know a lot of deep math. I think they are pretty good with numbers. They’re pretty comfortable with numbers. And they’re pretty comfortable with the idea that they can quantify states of uncertainty along a scale from 0 to 1.0, or 0 to 100 percent. So they’re comfortable with that.  Superforecasters tend to be more granular in their appraisals of uncertainty.

    DUBNER: And what about  the method of forecasting? Can you talk a little bit about methods that seem to contribute to superforecasters’ success?

    TETLOCK: One of the more distinctive differences between how superforecasters approach a problem and how regular forecasters approach it is that superforecasters are much more likely to use what Danny Kahneman calls the outside view, rather than the inside view. So, if I asked you a question about whether a particular sub-Saharan dictator is likely to survive in power for another year, a regular forecaster might get to the job by looking up facts about that particular dictator in that particular country, whereas the superforecasters might be more likely to sit back and say, “Hmm, well, how likely are sub-Saharan dictators who’ve been in power x years likely to survive another year?” And the answer for that particular question tends to be very high. It’s in the area of 85, 95 percent, depending on the exact numbers at stake. And that means their initial judgment will be based on the base rate of similar occurrences in the world. They’ll start off with that and then they will gradually adjust in response to idiosyncratic inside-view circumstances. So, knowing nothing about the African dictator or the country even, let’s say I’ve never heard of this dictator, I’ve never heard of this country, and I just look at the base rate and I say, “hmm, looks like about 87 percent.” That would be my initial hunch estimate. Then the question is, “What do I do?” Well, then I start to learn something about the country and the dictator. And if I learn that the dictator in question is 91 years old and has advanced prostate cancer, I should adjust my probability. And if I learn that there are riots in the capital city and there are hints of military coups in the offing, I should again adjust my probability. But starting with the base-rate probability is a good way to at least ensure that you’re going to be in the plausibility ballpark initially.
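    The base-rate-then-adjust procedure Tetlock describes can be sketched in a few lines of Python. This is a minimal illustration, not code from the book: the 87 percent anchor comes from the transcript, but the adjustment factors for the dictator's age, illness, and the riots are invented for the example.

    ```python
    def outside_view_anchor(occurrences, total):
        """Base rate: the fraction of comparable past cases with the outcome."""
        return occurrences / total

    def adjust(p, factor):
        """Nudge a probability up or down by multiplying its odds.

        factor > 1 is evidence for the event, factor < 1 is evidence against.
        Working in odds keeps the result a valid probability in (0, 1).
        """
        odds = p / (1 - p)
        odds *= factor
        return odds / (1 + odds)

    # Outside view first: base rate for "sub-Saharan dictator in power
    # x years survives another year" (illustrative figure from the interview).
    p = outside_view_anchor(87, 100)   # 0.87, the initial hunch estimate

    # Inside view second: idiosyncratic evidence moves the estimate.
    p = adjust(p, 0.25)   # 91 years old, advanced prostate cancer
    p = adjust(p, 0.5)    # riots in the capital, hints of a coup
    ```

    Starting from the base rate guarantees the first estimate is in the plausibility ballpark; the inside-view facts then move it, here down to roughly 0.46.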

    DUBNER: What about the work ethic of a superforecaster? How would you characterize that?

    TETLOCK: You don’t win forecasting tournaments by being lazy or apathetic. You have to be willing to do some legwork and learn something about that particular sub-Saharan country. It’s a good opportunity to learn something about a strange place and a strange political system. It helps to be curious. It helps to have a little bit of spare time to be able to do that. So that I guess implies a certain level of socioeconomic status and flexibility.

    DUBNER: And what about I.Q.?

    TETLOCK:  I think it’s fair to say that it helps a lot to be of somewhat above-average intelligence if you want to become a superforecaster. It also helps a lot to know more about politics than most people do. I would say they’re almost necessary conditions for doing well. But they’re not sufficient, because there are plenty of people who are very smart and close-minded. There are plenty of people who are very smart and think that it’s impossible to attach probabilities to unique events. There are plenty of reasons why very smart people don’t ever become superforecasters and plenty of reasons why people who know a ton about politics never become superforecasters.

    It is very hard to become a superforecaster, Tetlock makes clear, unless you have a very good grip on probability.

    TETLOCK: We talk in the book with a great poker player, Aaron Brown, who’s the chief risk officer of AQR.

    AQR is an investment and asset-management firm in Greenwich, Connecticut.

    TETLOCK: He defines the difference between a great poker player, a world-class poker player, and a talented amateur as: the world-class player knows the difference between a 60/40 proposition and a 40/60 proposition. Then he pauses and says, “No, more like 55/45, 45/55.” And of course you can get even more granular than that in principle. Now, when you make that claim in the context of poker, most people nod and say, “Sure, that sounds right,” because in poker you’re sampling from a well-defined universe. You have repeated play. You have clear feedback. It’s a textbook case where the probability theory we learned in basic statistics seems to apply. But if you ask people, “What’s the likelihood of a violent Sino-Japanese clash in the East China Sea in the next 12 months?” Or another outbreak of bird flu somewhere? Or Putin being up to more mischief in Ukraine, or Greece flirting with the idea of exiting the Eurozone? If you ask those types of questions, most people say, “How could you possibly assign probabilities to what seem to be unique historical events?” There just doesn’t seem to be any way to do that. The best we can really do is use vague verbiage, make vague-verbiage forecasts. We can say things like, “Well, this might happen. This could happen. This may happen.” And to say something could happen isn’t to say a lot. We could be struck by an asteroid in the next 24 hours and vaporized: 0.0000001 percent. Or the sun could rise tomorrow: 99.99999 percent. So “could” doesn’t tell us a lot. And it’s impossible to learn to make better probability judgments if you conceal those probability judgments under the cloak of vague verbiage.
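    Aaron Brown's 55/45-versus-50/50 distinction can be made concrete with the Brier score, the squared-error measure commonly used to grade forecasts in these tournaments. The sketch below is illustrative, not from the book, and assumes a binary event whose true probability is 0.55.

    ```python
    def expected_brier(p, q):
        """Expected Brier score, (forecast - outcome)^2 averaged over outcomes,
        when you forecast p and the event's true probability is q.
        Lower is better."""
        return q * (1 - p) ** 2 + (1 - q) * p ** 2

    # A 55/45 proposition: the world-class player says 0.55,
    # the talented amateur rounds it to "50/50".
    granular = expected_brier(0.55, 0.55)   # 0.2475
    coarse = expected_brier(0.50, 0.55)     # 0.25
    # The penalty for rounding is (0.55 - 0.50)**2 = 0.0025 per question,
    # small on any one question but compounding over hundreds of them.
    ```

    This is why granularity pays: each extra distinction you can reliably draw shaves a little off your expected error, and tournaments are scored over many questions.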

    *     *     *

    [MUSIC: Junebug, “Wish It Away” (from Junebug)]

    DUBNER: Let me ask you this: if you were asked to introduce one question into an upcoming presidential debate, let’s say, that you feel would give some insight, via the candidates’ answers, into their views overall on forecasting — our limits and the need for it — what kind of question would you try to ask?

    TETLOCK: What a wonderful question that is. You’ve taken me aback, it’s such a good question. I’m going to have to think hard about that. I don’t have an answer right off the top of my head, but I would love to have the opportunity to draft such a question. It would be something along the lines of: Would it be a good thing for the advisors to the President to make an effort to express uncertainty in numerical terms and to keep a record of how accurate or inaccurate they are over time? Would you like to have presidential daily briefings in which, instead of the documents saying this “could” or “might” or “may” happen, they say, “When we crowdsource our best analysts, the probability seems to range somewhere between .35 and .6”? That’s still a pretty big zone of uncertainty, but it sure is a lot better than “could,” which could mean anything from 0.01 to 0.99.

    DUBNER: Now, can you imagine anyone saying they wouldn’t want that, though? Do you think there are those who would want to show they’re so, whatever, macho that, “No, no, no, no we don’t want to traffic in that.”

    TETLOCK: I think there’s vast variation among politicians in how numerate they are and how open they are to thinking of their beliefs as gradations along an uncertainty continuum rather than expressions of tribal loyalties. We have the story in the book about President Obama making the decision about going after Osama bin Laden and the probability estimates he got about Osama’s location, and how he dealt with those probabilities. The probabilities ranged from about, I don’t know, maybe from about .4 to about .95 with a center of gravity around .75. And the President’s reaction was to shrug and say, “Well, I don’t know what to do with this. It feels like a 50/50 thing, like a coin toss.” Now, that’s an understandable reaction from a president who is about to make an important decision and feels he’s getting somewhat conflicting advice and feels like he doesn’t have closure on a problem. It’s a common way to use the language. But it’s not how the President would have used the language if he’d been sitting in a TV room in the White House with buddies watching March Madness and Duke University is playing, and someone says, “What’s the likelihood of Duke winning this game?” and his friends offer probabilities ranging from 0.5 to about 0.95 with a center of gravity of 0.75 once again. He wouldn’t say, “Sounds like 50/50.” He’d say, “That sounds like three to one.” Now, how much better decisions would politicians make if they achieved that improvement in granularity, accuracy, calibration? We don’t know. I think that if the intelligence community had been more diffident about its ability to assign probability estimates, the term “slam dunk” probably wouldn’t have materialized in the discourse about weapons of mass destruction in Iraq. I think the actual documents themselves would have been written in a more circumspect fashion. I think there were good reasons for thinking Saddam Hussein was doing something suspicious. I’m not saying that the probability would have been less than 50 percent. The probability might have been as high as 85 percent or 80 percent, but it wouldn’t have been 100 percent.
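    The 50/50-versus-three-to-one point is just the conversion between probability and odds: a probability p corresponds to odds of p/(1 − p) to one. A quick sketch, illustrative rather than from the book:

    ```python
    def to_odds(p):
        """Convert a probability to 'x to 1' odds in favor of the event."""
        return p / (1 - p)

    to_odds(0.75)  # 3.0: "three to one", the March Madness reading
    to_odds(0.50)  # 1.0: even odds, the only true "coin toss"
    ```

    Collapsing a 0.75 center of gravity to "a coin toss" throws away exactly this distinction, which is Tetlock's point about granularity.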

    DUBNER: But I wonder how much of this is our fault — “our” meaning the public. Because you know when someone makes a decision that turns out poorly, not wrong necessarily but poorly, even if the odds were very much in his or her favor, we punish them for the way that turned out. I mean, forget about politics, go to something as silly as football. If a head football coach goes for it on fourth down when all the probability is encouraging him to do so, and his team doesn’t make it, we know what happens. All the sports fans come out and say, “This guy was an idiot. What the hell was he doing? He didn’t properly calculate the risk.” Whereas in fact he calculated the risk exactly right and maybe there was an 80 percent probability of success and he happened to hit the 20 percent. So, we don’t respond well to probabilistic choices. And maybe that’s why our leaders don’t abide by them.

    TETLOCK: That’s right. I mean, part of the obstacle is in us. We’ve met the enemy and the enemy is us. We don’t understand how probability works very well. We have a very hard time taking the outside view toward the forecast we make and the forecast other people make. And if we did get in the habit of keeping score more we might gradually become a little more literate.

    *     *     *

    So who are these people – these probability-understanding, humble, open-minded, inside-view people – that have the power of superforecasting?

    BILL FLACK: Until I got into grad school I was used to being the smartest person in the room. And grad school very quickly disabused me of that notion.

    [MUSIC: Mizimo, “The Path of Least Resistance” (from The Path of Least Resistance)]

    That’s one of them. His name is Bill Flack. He’s a 56-year-old retiree in rural Nebraska. And he is a superforecaster with the Good Judgment Project — one of the top two percent. Flack studied physics in college, got a masters in math. And even though he wanted to get his Ph.D. …

    FLACK: I just came to realize that I didn’t have either the mental power or the commitment to the subject to pursue a Ph.D.

    As smart as he is, Flack admits he is not very worldly.

    FLACK: I often don’t read the newspaper at all, and when I do it’s generally the Omaha World-Herald, which isn’t remarkable for its foreign-policy coverage.

    Flack wound up working for the U.S. Department of Agriculture. He was semi-retired when he first read about the Good Judgment Project.

    FLACK: Basically, I thought it sounded kind of interesting, like “might be fun to try.”

    MARY SIMPSON: It’s an area that’s always been interesting to me — how people make decisions.

    And that is Mary Simpson, another of Tetlock’s superforecasters.

    SIMPSON: I grew up in San Antonio, Texas.  And spent my first 18 years there, had a typical suburban family — older brother, younger sister, stay-at-home mom.  Dad was an engineer.  You know, the typical breadwinner.  And I went to college in Dallas at Southern Methodist University and that was the time when a lot of women were discovering that they could do things besides get married and have children. So I sort of broadened my horizons, found economics, and was really interested in it and decided I wanted to do something besides get married and have kids.  I finished a Ph.D. from Claremont Graduate School and I went to work for the big local public utility Southern California Edison as an assistant economist.

    That’s where Simpson was still working when she got involved with the Good Judgment Project. It was just a few years after the financial crash, which Simpson had failed to foresee.

    SIMPSON: I had totally missed the 2007-2008 financial crash.  I had seen bits and pieces. I knew that there was certainly a housing bubble. But I did not connect any of the dots to the underlying financing issues that had really created the major disruption in the financial industry, and you know, the subsequent Great Recession.   

    Simpson didn’t think her forecasts for the Good Judgment Project would be much better.

    SIMPSON: You know, it’s one of those things where I’m a very analytical person, always decent in math, and learned over the years how to kind of assess situations and make predictions.  On the other hand, I’m fairly skeptical of forecasting.  My company spent thousands of dollars every year for the best in the class of economic forecasts — uh, that’s what they were.  We had to forecast. We had to understand where sales would go and be able to make predictions in order to be sure that there was enough power and to assess revenue levels and cost of electricity, and so forth.  So we relied on forecasts, but they were often wrong. So, again, I was hopeful to do a decent job but also very skeptical of the ability of anyone to forecast in certain arenas, especially.

    Simpson, like Bill Flack, got involved in the forecasting tournament mostly for fun.

    SIMPSON: I was only working part-time and felt like I needed to keep my brain engaged.

    It was a volunteer position; they weren’t being paid by the Good Judgment Project. Though they did get …

    FLACK: an Amazon gift certificate.

    What was it worth?

    SIMPSON: A couple hundred dollars. It was not a lot.

    FLACK: If you took the value of the Amazon gift certificate and divide it by the hours we put into it we were getting something like twenty cents an hour.

    [MUSIC: C-Leb the Kettle Black, “The Celebration” (from The Kettle Black)]

    So here were a couple of non-experts in the realm of geopolitics being asked to make a series of geopolitical predictions.

    FLACK: I didn’t have any background and had to learn it all from the start.

    SIMPSON: I really had very little expertise in terms of international events.

    FLACK: Pretty much every single question, I had to dig for background information.

    SIMPSON: You need to understand the facts on the ground, you need to understand the players, what their motives are.

    FLACK: Spent a lot of time with Google News, some time with Wikipedia, which I mostly used as a source of sources basically.

    SIMPSON: You know, I have an analytical bent.  I’m interested in doing research.  

    FLACK:  And, you know, pretty much had to educate myself up on the subject.

    SIMPSON: A lot of it is the work.  You have to do the work, you have to update, you have to really stay engaged.  And if you simply answer the questions once and let them go and don’t look at them again, you’re not going to be a very good forecaster.

    TETLOCK: One of the unusual things about how questions are asked in forecasting tournaments is that they’re asked extremely explicitly. It’s not just, “Will Greece leave the Eurozone?” There are very specific meanings to what leaving the Eurozone means and there’s a very specific timeframe within which this would need to happen.

    SIMPSON: It’s not simply answering “yes” or “no” on a question. The answer had to be, “What is your expectation of this event happening?”  In other words, is it 50 percent or is it 90 percent? So, you know, there was a certain amount of effort to figure out, “Well, what’s a good probability?”

    FLACK: Each of us learned from previous questions whether we were being overconfident or underconfident on specific types of questions. We were getting pretty much constant feedback; every time a question resolved we knew whether we were right or wrong, whether we’d been overconfident or underconfident. And we tried to look back and see — on questions where we’d gone wrong, how we’d gone wrong; on questions where we’d done well, what we’d done right. Were we lucky? Had we followed a very good approach that we should apply to other questions?
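    The feedback loop Flack describes, comparing stated probabilities against how often events actually happened, is a calibration check. A minimal sketch in Python; the forecast history below is made up for illustration, not data from the Good Judgment Project.

    ```python
    from collections import defaultdict

    def calibration_table(forecasts):
        """Group (stated probability, outcome) pairs by stated probability
        and report how often the event occurred in each group.

        A bucket where the stated probability far exceeds the observed
        frequency is evidence of overconfidence on that kind of question.
        """
        buckets = defaultdict(list)
        for p, outcome in forecasts:
            buckets[round(p, 1)].append(outcome)
        return {p: sum(o) / len(o) for p, o in sorted(buckets.items())}

    # Toy resolved questions: (probability stated, 1 = it happened).
    history = [(0.9, 1), (0.9, 1), (0.9, 0), (0.9, 0),   # said 90%, hit 50%
               (0.6, 1), (0.6, 1), (0.6, 0)]             # said 60%, hit ~67%
    calibration_table(history)   # overconfident in the 0.9 bucket
    ```

    Running this over every resolved question is one way to answer Flack's "were we lucky, or was the approach good?" in aggregate rather than anecdote by anecdote.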

    And so these typical Americans with no foreign-policy experience whatsoever wound up making remarkably accurate forecasts about things like the Grexit, or whether there would be conflict in the South China Sea.

    FLACK: One of the things I liked about Good Judgment was it gave me a pretext to learn about these various foreign-policy issues.

    SIMPSON:  I think there’s certain satisfaction in knowing that you’re actually helping research that will hopefully lead to better assessments and better forecasts on the part of government.

    FLACK: Certainly I’ve gotten a good deal less patient with the pundits who issue forecasts where, “Well, this could happen,” but don’t attempt to assign a probability to it, don’t suggest how it could go the other way.  You probably won’t like this answer but I’ve grown much less fond of radio news because in trying to make forecasts I’ve been really looking for details. And it annoys me greatly when the radio starts a story about something that could be interesting and then they go into anecdotes instead.  Public radio is as bad as the rest, I’m afraid.

    [MUSIC: Danny Massure, “Mama Didn’t Lie” (from What It Is)]

    Not today, friend-o! We are all about the details. For instance, here are what Philip Tetlock calls the Ten Commandments for Aspiring Superforecasters:

    1: “Triage. Focus on questions where your hard work is likely to pay off.” Pretty sensible.

    2: “Break seemingly intractable problems into tractable sub-problems.” OK, no problem.

    3: “Strike the right balance between inside views and outside views.”

    4: “Strike the right balance between under- and overreacting to the evidence.”

    5: “Look for the clashing causal forces at work in each problem.” That’s where the homework comes in, apparently.

    6: “Strive to distinguish as many degrees of doubt as the problem permits but no more.” OK, that one just sounds hard.

    7: “Strike the right balance between under- and overconfidence, between prudence and decisiveness.”

    8: “Look for the errors behind your mistakes but beware of rearview-mirror hindsight biases.” Did you get that one? Here, let me read it again: “Look for the errors behind your mistakes but beware of rearview-mirror hindsight biases.”

    9: “Bring out the best in others and let others bring out the best in you.” Not a very Washington, D.C. concept, but what the heck.

    10: “Master the error-balancing bicycle.” Wha? This one needs a bit more explanation:

    “Just as you can’t learn to ride a bicycle by reading a physics textbook,” Tetlock writes, “you can’t become a superforecaster by reading training manuals. Learning requires doing, with good feedback that leaves no ambiguity about whether you are succeeding … or … failing.” Now, if following these commandments sounds like a lot of work – well, that’s the point.

    DUBNER: What your book proves, among a lot of things that are interesting, I think the most fascinating, the most uplifting really is that this is a skill or maybe set of skills that can be acquired or improved upon, right? The people who are better than others at forecasting are not necessarily born that way. Not born that way at all, correct?

    TETLOCK: I think that’s a deep truth, a deep lesson of the research that we conducted. Sometimes I’m asked, “How is it that a group of people, regular citizens who didn’t have access to classified information, working part time, were able to generate probability estimates that were more accurate than those generated by intelligence analysts working full-time jobs and with access to classified information? How is that possible?” And I don’t think it’s because the people we recruited are more intelligent than intelligence analysts. I’m pretty sure that’s not true. I don’t think it’s even because they’re more open-minded. And it’s certainly not because they know more about politics. It’s because our forecasters, unlike many people in Washington, D.C., believe that probability estimation of messy real-world events is a skill that can be cultivated and is worth cultivating. And hence they dedicate real effort to it. But if you shrug your shoulders and say, “Look, there’s no way we can make predictions about unique historical events,” you’re never going to try.

    [MUSIC: Human Factor, “You Know The Feeling” ]

    Philip Tetlock has been running forecasting tournaments for roughly 30 years now. And the success of the Good Judgment Project has dictated his next move.

    TETLOCK: It has led me to decide that I want to dedicate the last part of my career to improving the quality of public debate. And I see forecasting tournaments as a tool that can be used for that purpose. I believe that if partisans in debates felt that they were participating in forecasting tournaments in which their accuracy could be compared against that of their competitors, we would quite quickly observe the depolarization of many polarized political debates. People would become more circumspect, more thoughtful and I think that would on balance be a better thing for our society and for the world. So, I think there are some tangible things in which the forecasting technology can be used to improve the technology of public debate if only we were open to the possibility.

    *     *     *

    Coming up next week on Freakonomics Radio, it happens all the time: Some company or institution, maybe even a country, does something you don’t like. So you and maybe a few million friends of yours decide to start a boycott. This leads to a natural question: do boycotts work?

    *     *     *

    Freakonomics Radio is produced by WNYC Studios and Dubner Productions. This episode was produced by Arwa Gunja. Our staff also includes Jay Cowit, Merritt Jacob, Christopher Werth, Greg Rosalsky, Kasia Mychajlowycz, Alison Hockenberry and Caroline English. Thanks to the Carolina Panthers Radio Network for providing audio for this episode. You can now hear Freakonomics Radio on public-radio stations across the U.S. If you’re one of our many international podcast listeners — well, you should probably just move here. Or at least listen to our recent episode on open borders, called “Is Migration a Basic Human Right?”

    If you want more Freakonomics Radio, you can also find us on Twitter and Facebook and don’t forget to subscribe to this podcast on iTunes or wherever else you get your free, weekly podcasts.


    Here’s where you can learn more about the people and ideas behind this episode:


    Philip E. Tetlock, professor of management and psychology at the University of Pennsylvania, author of “Superforecasting: The Art and Science of Prediction“

    Jonathan Bales, co-founder of Fantasy Labs and author of the “Fantasy Football for Smart People“ book series

    Bill Flack, superforecaster with the Good Judgment Project 

    Mary Simpson, superforecaster with the Good Judgment Project 


    “Superforecasting: The Art and Science of Prediction” by Philip E. Tetlock and Dan Gardner (Crown, September 2015).

    “The Folly of Prediction,” Freakonomics Radio (September 2011).


    Learn more about the Intelligence Advanced Research Projects Activity (IARPA)

    More background on the Good Judgment Project 

    Good Judgment Inc., Tetlock’s commercial spin-off of the Good Judgment Project

    How to become a superforecaster 

    Fantasy Football Nerd

    —Huffduffed by jrsinclair

  4. How to Be Tim Ferriss - Freakonomics Freakonomics

    DUBNER: All right, so, Tim, let’s tackle some of our patented FREAK-quently asked questions. Feel free to give an expansive answer; feel free to give a lightning-round answer. There’s no right or wrong way to do these. Name the handful, or maybe it’s more than a handful, of things that you do — whether it’s rituals, whether it’s diets, sleep, exercise, whatever — things that you do to kind of keep yourself functional and happy and moving forward every day.

    FERRISS: Yeah. I wake up probably somewhere between 8:30 and 10 AM; I tend to stay up late. I sit down and meditate for 20 minutes. Then I brew tea, which is typically Pu-erh tea with turmeric and ginger added to it, to which I add coconut oil, which is high in medium chain triglycerides, which the brain likes very much. I consume that as I sit down and journal. There are two different journals that I’m currently using: the five-minute journal, which was created by a reader of mine, in fact. Really, really helpful for setting the tone and focus for the day. And then morning pages, which is really just a free-association exercise — good way to trap your monkey-mind on paper so it doesn’t distract you and sabotage you for the rest of the day. And between that point and lunch — these days, I’m often skipping breakfast — I will focus on creative, hopefully creative production or synthesis. So writing, recording, exploring, and if I have any type of admin or housekeeping, metaphorically, to deal with, that is done in the afternoon. I’d say that’s generally the routine. I, every night, have a very hot soaking bath. No bubbles, no jets. That’s sacrilegious.

    DUBNER: What is one thing you own that you should throw out but probably never will?

    FERRISS: The wooden shards of the targets that I hit when I was doing the Japanese horseback archery.

    DUBNER: You have them displayed or just stuffed in a drawer?

    FERRISS: They are respectively placed on a shelf, and I have no idea what I’m going to do with them. I think I might just—

    DUBNER: Sounds like you’re gonna keep them.

    FERRISS: I might give them away to people at some point. I just, they have such strong meaning for me. That would definitely be high on the list. I also have notebooks — just bookshelves and bookshelves of notebooks where I’ve recorded, for instance, almost all my workouts since I was about 16. I don’t think I need those.

    DUBNER: What’s your favorite sport to play and favorite sport to watch?

    FERRISS: Favorite sport to play — competitive sport?

    DUBNER: What is an example of a noncompetitive sport? Isn’t it then not a sport?

    FERRISS: That’s fair enough.

    DUBNER: No, I’m curious to know.

    FERRISS: Well, there’s a related—

    DUBNER: Like kite flying. Although we used to have kite fights when I was a kid.

    FERRISS: Yeah. No, acro yoga is something that I’m currently really delving into. It’s a combination of, in effect, yoga, acrobatics, and Cirque Du Soleil-type performances. The sports that I am best at or have been best at are generally those that I enjoy. I don’t like being really bad at things. 

    DUBNER: Welcome to the club.

    FERRISS: Yeah.

    DUBNER: You’re visiting New York now, which you do pretty regularly. It’s not uncommon to run into someone on the street asking for money. So it seems like everybody, over the course of their life, develops some kind of standard strategy for that scenario. What’s yours?

    FERRISS: I do not give money, and I’ll tell you why. I at one point paid a homeless gentleman in San Francisco to give me a tour of the entire sort of homeless underground in San Francisco.

    DUBNER: What did you pay him?

    FERRISS: It was through a service that I think is no longer around*. I think it was called Vayable? V-a-y-a-b-l-e. It was maybe 50, a hundred bucks, something like that? And he was very explicit and he said, “You should never give homeless people money.” And he showed me exactly where they—

    *CLARIFICATION: Vayable is still in business. According to their press team, they are “growing bigger than ever and now operating in more than 100 countries.”

    DUBNER: Right, says the homeless guy who’s getting paid by the agency.

    FERRISS: Who is getting money. So, right, you have to take that into account. But he walked me through the Tenderloin, through all these different areas, and he pointed out where to get clothing, where to get housing, where to get blankets, where to get food, where to get all these resources, and he said, anyone who is asking for money is doing so to buy drugs or alcohol.

    DUBNER: Tim, what is something that you believed for a long time to be true until you found out you were wrong?

    FERRISS: I believed for a very long time as an athlete that low-fat, high-carbohydrate was an optimal diet.

    DUBNER: Not just you, brother.

    FERRISS: Yeah. And I think there’s a decent amount of evidence, circumstantial or direct, to suggest that low-fat diets create a host of issues ranging from joint problems to amenorrhea, like the cessation of menstruation. I mean, it’s, it’s, I think entirely unnatural for sedentary people or for athletes.

    DUBNER: And also when you forbid people or discourage people from consuming a thing, whether in that case it’s fat, or it could be, you know, anything that you can think of, it’s not like most people will instead consume nothing. They’ll consume more of something else. So the complement, right? And in this case, the complement was a lot of carbs and a lot of sugars that — if we believe the science that we’re reading today — contributed to all kinds of chronic and underlying problems.

    FERRISS: Oh, absolutely. I mean, like, rice cakes? Might as well just inject yourself with insulin.

    DUBNER: So, I’m curious, when I read The 4-Hour Chef, it strikes me that you’re a very adventurous chef and eater, but when I hear you talk about your nutrition now, I am curious what you actually would put on a plate and put in your mouth. So if we were to leave this radio studio and say, “hey, let’s go get something to eat,” where would we go and what would you eat?

    FERRISS: I’m not purist about it, because I also know how to biochemically limit the damage that I might create. So if we wanted to go out and have sushi, and eat several pounds of rice, I could do that. It wouldn’t cause me any existential angst. 

    DUBNER: What would be your optimal meal? We’re in New York; there are many choices.

    FERRISS: Yeah. Optimal meal, I would say, would be grass-fed steak with vegetables, maybe some lentils for fiber.

    DUBNER: I’m down with that. No problem.

    FERRISS: And I can go out, and it is not clear to anyone eating with me that I am on a strange or restrictive diet when I order at a restaurant.

    DUBNER: Small question here: what is the best possible future invention or discovery for humankind?

    FERRISS: The first thing that comes to mind is functional safety precautions related to artificial intelligence, which I think is very difficult.

    DUBNER: Yeah, sure is.

    FERRISS: How do you create sort of stop-gap ripcords for an intelligence that is by definition intended to get to the point where it can do several million hours of human computation in the span of minutes or hours?

    DUBNER: I talk to people about it, I read about it, but it’s really hard for me to understand the contours of it. But the catch-22 part, it seems to me, is we want it to be good enough to be so good that we would be secondary. We would be the animals that somehow manage to create a better intelligence and therefore expendable.

    FERRISS: Yeah. I mean, this is a very, very prevalent and intense conversation among technologists right now. And there are those, of course, who believe that it’s summoning the demon and so on. There are those who think it will be a panacea. And there are those who believe it could be both. I tend to fall in that latter group. I mean, I do think that artificial intelligence could solve potentially the greatest dilemmas of our time.

    DUBNER: Which you would name as what? The fact that we die too early? The fact that we do stupid things?

    FERRISS: Yeah. I mean, you name it. I think space colonization or some variant thereof, climate change, world hunger, warfare or elimination thereof. I mean, it’s impossible to conceive of not only the solutions that A.I. would find to known problems, but the problems it would identify that we haven’t even noticed yet.

    DUBNER: I have no idea what even the next five years will bring, though, in A.I., much less 20 years from now. Maybe you do.

    FERRISS: I have some guesses, most of which I probably can’t talk about. But I would say that, imagine—

    DUBNER: What do you mean, you can’t talk about them? Because you know them to be true? You’ve told someone you won’t break the promise?

    FERRISS: That’s right. The latter. Just proprietary information from companies. But I would say this: imagine that a nuclear bomb were bits and bytes that could be transmitted through any broadband connection.

    DUBNER: Meaning replicable and scalable in a way that something physical like that is not?

    FERRISS: That’s right. That is far more uncontainable than a closely tracked amount of uranium or plutonium.

    DUBNER: That’s a very sobering note on which to end. So let’s not end there. All right, last question. If you had a time machine — and it sounds like you may know people who have time machines — when would you travel to and why and what would you do there?

    FERRISS: So I’m tempted to say that I would travel back in time to eliminate some dictator, tyrant, or so on. But I think that—

    DUBNER: Everybody would do that. Other people would take care of that.

    FERRISS: Other people would take care of that. So my knee-jerk response is that I would go back in time and have a lot of drinks with Ben Franklin.

    DUBNER: You do love Ben Franklin, I know, and there’s a lot of reasons to love him. But tell me why him.

    FERRISS: Because he wasn’t afraid to be an amateur, and as an amateur with a beginner’s mind, I think a fresh pair of eyes, he was able to create many, many breakthroughs in multiple fields that have shaped civilization and the world as we know it today. And he was also, though, at the same time, a bit of a merry prankster and a bit of a showman. And I just really enjoy that combination. Being able to accomplish very big, serious objectives while not taking yourself too seriously is something I aspire to.

    DUBNER: Well done, Tim Ferriss. Thanks for coming in.

    FERRISS: Thank you.

    —Huffduffed by jrsinclair

  5. How to Be More Productive (Rebroadcast) - Freakonomics

    Why are we all so obsessed with productivity? (Photo: Ulrich Baumgarten/Getty Images)

    Our latest Freakonomics Radio episode is called “How to Be More Productive (Rebroadcast).” (You can subscribe to the podcast at iTunes or elsewhere, get the RSS feed, or listen via the media player above.)

    In this busy time of year, we could all use some tips on how to get more done in less time.  First, however, a warning: there’s a big difference between being busy and being productive.

    Below is a transcript of the episode, modified for your reading pleasure. For more information on the people and ideas in the episode, see the links at the bottom of this post. And you’ll find credits for the music in the episode noted within the transcript.

    *      *      *

    [MUSIC: Beckah Shae, “Smile” (from Let it Snow)]

    Hello, Freakonomics Radio listeners. The holidays are a lot of fun, of course, but they also bring stress. And, around here, some unrealistic demands. At least in my view they’re unrealistic. The elves who make this podcast have demanded a two-week holiday. I know, right? Anyway – that means that we’re regifting this episode. We first put it out last April and since then, nearly 2 million people have listened to it. It’s called “How to Be More Productive,” so I guess that’s why. Hope you enjoy it, whether this is your first listen or not. Happy holidays.

    [MUSIC: Pat Andrews, “Get Faster”]

    CHARLES DUHIGG: It’s about sitting down and deciding, “I’m in charge about what I do with my time and what my goals are and how I manage my focus and how I control my brain.”

    ANDERS ERICSSON: With the right kind of training, any individual would be able to acquire abilities.

    ANGELA DUCKWORTH: What specifically are gritty people like?  What beliefs do gritty people walk around with in their head?

    LASZLO BOCK: So we were surprised that these things that everybody kind of says matter ended up not mattering.

    STEPHEN J. DUBNER: So, Levitt, I don’t know if you know, but it is Self-Improvement Month at Freakonomics Radio.

    STEVEN LEVITT: Yeah, I thought every month was Self-Improvement Month at Freakonomics Radio.

    DUBNER: You seem to always be working on improving something about yourself. So what is it these days?

    LEVITT: I’ve been working on two things. I am always working on golf and trying to be  better at golf.  And I’ve also been trying to learn German, which is a very different kind of endeavor for me.

    [MUSIC: The Bad Things, “The Longest Bar in the World” (from The Bad Things)]

    Steve Levitt is my Freakonomics friend and co-author; he’s an economist at the University of Chicago. Levitt is recently remarried, and his wife is German — which explains his desire to learn the language.

    DUBNER: Talk about how you learn. Are you self-taught or not?  

    LEVITT: I am primarily self-taught. But, you know, one thing I value very highly is enjoyment and happiness. And I’m definitely willing to sacrifice being a better German speaker in order to actually enjoy the German practice I do. So, in some ways — it’s probably the exact wrong message to send to the people who are listening to this podcast — but I still think there’s some truth to it, right? One of the things that’s overlooked about learning a new skill is that the only people who ever get good are the people who keep on doing it. And most people quit, probably rightly quit, because it looks enticing from the outside and it isn’t that much fun when they actually start trying to learn a new skill. But for me, with German, I definitely have been of the mind that it has to be fun. And if it’s not fun, I won’t do it.

    [MUSIC: Wolfram Gruss “Busy Berlin”]

    So that’s what Levitt’s working on – what are you working on? We asked Freakonomics Radio listeners to let us know.

    AROON PARTHASARATHY: Hi, my name is Aroon Parthasarathy, and I live in Sydney, Australia.

    JUSTIN XAVIER: My name is Justin Xavier. I live in Los Angeles,

    NATALIA: Natalia. I live in Moscow, Russia.

    AMY CORDER: Amy. I live in New Orleans, and I am an opera singer.

    NATALIA:  Teacher of English.

    PATRICIA ROSE: I’m a Ph.D. candidate.

    CARLY: I am an environmental engineer, and I definitely could use some help getting more things done.

    PAUL ORKISZEWSKI: My ambition is to improve my earning potential, learn more about the world, and unleash my inner-math geek.  

    JOHN GRAF: I want to help my oldest child with their science fair project and work with my younger daughter on her comic book.

    SHERNOFF: Learn to knit a scarf and try a new ethnic food.

    JAY-R PATRON:  I most definitely want to up my guitar-playing skills.

    GRAF: I want to read my 2-year-old to sleep every night and also take my wife to the movies from time to time.

    DAN DIRSCHERL: I’d like to become a better American.

    Okay, that’s a pretty big wish list. We’ve got our work cut out for us. Where do we start? Let’s start with … this guy.

    DUHIGG: Okay. My name is Charles Duhigg.

    Duhigg is a reporter and editor at The New York Times …

    DUHIGG: … And the author of The Power of Habit and, more recently, Smarter Faster Better: The Secrets of Being Productive in Life and Business.

    DUBNER: So, Charles when we put out a call-out to Freakonomics Radio listeners and told them that  we were working on self-improvement in several realms — productivity with you, but also expert performance with Anders Ericsson; and we asked people to tell us what they were most wanting to improve in their lives — I think productivity probably outpaced the others maybe 3:1 combined.

    DUHIGG: Wow.

    DUBNER: So plainly, the appetite for this is just off the charts and it got me wondering about why that appetite is so large.

    DUHIGG: I think it’s because our experience matches so poorly with our expectation. Right? We’re living through this age where they keep on telling us, “Look, we have all these devices for you now.” Right. We have email and we have a communications revolution and we have computers in everything that you can possibly touch and the idea should be that life gets easier. And instead, it’s just getting harder and harder.  And that doesn’t seem like how things are supposed to go.

    So how are things supposed to go? We’ll get into that.

    *      *      *

    We made this episode about productivity because that’s what you told us you wanted to hear an episode about.

    PATRICIA ROSE: Hi, I’m Patricia. I’m a PhD candidate in environmental sciences. I’m aiming to work on productivity.

    AARON PATHA: What I’d like to most improve is my productivity.

    JUSTIN XAVIER: The thing I most want to improve about myself is my productivity, for sure.

    BARRY: The thing I would most like to improve on is my productivity.

    NATALIA: The thing I’d like to improve about myself is productivity.

    JOHN GRAF: I want to improve my own productivity.

    CAMERON: Improve my productivity.

    [MUSIC: Russel L. Howard III “Get Busy”]

    When I told Steve Levitt that so many people wanted to hear about productivity, he was not at all surprised.

    LEVITT: Productivity is the key to everything. If you can be 10 percent faster at getting the same thing done, then you got 10 percent of your time to do something you’d  rather do. So, when it comes to economics, if there’s a single measure we should care about it’s productivity of workers. I give a lot of credit to our listeners that they think like economists when it comes to productivity.

    And how does Levitt rank himself on the productivity scale?

    LEVITT: I’m actually a strangely productive person, and I’m not quite sure why. You give me a pile of stuff to do, I get it done quickly. Whether it’s something academic — or when I got a new apartment, for instance. I took my four kids with me, and we did all of the furniture shopping for the entire apartment, for a six-room apartment, in under two hours, including the checking out and buying everything, with four kids!

    DUBNER: Okay, how’d you do that? What are your tricks?

    LEVITT: To tell the kids that everything looks great. “Let’s do it. Perfect. You got 15 more minutes and then we’re leaving. Let’s go.”

    On today’s show, Charles Duhigg will offer many more tricks – and deeper strategies – to help you become more productive, especially in a work environment but in your personal life as well. First, however, a warning:

    DUHIGG: There’s actually a big tension and a difference between efficiency and productivity. There’s actually a big difference between being busy and being productive.

    Duhigg’s most recent book, Smarter Faster Better, combines old-fashioned reporting and a survey of the academic literature to identify best productivity practices.

    His first book, The Power of Habit, did the same for habit formation. I had assumed the second book was sort of a continuation of the first; but Duhigg sees it as the opposite.

    DUHIGG: Because The Power of Habit is all about these decisions that you stop making, right? Choices that become automatic, that I simply stop thinking about. Whereas productivity is about re-grabbing control over the choices instead of simply reacting to what’s in my environment and all the cues around me; it’s about sitting down and deciding, “I’m in charge about what I do with my time and what my goals are and how I manage my focus and how I control my brain.”

    DUBNER: I’m curious, when you talk about productivity, what are you talking about?  Because I think when a lot of economists think about productivity, we think about them thinking about how to squeeze another widget out of that production line.

    DUHIGG: Absolutely. And I think for most people, that’s not productivity, right. I think what’s important to realize is that productivity means different things in different settings. And it’s not necessarily what economists mean when they say “just getting more widgets out of each machine each hour” or more cars off the assembly line for every hour that someone works. Instead, what it means is: helping people figure out how to achieve their goals with less waste and less anxiety and less stress and more opportunity to actually enjoy what they want to enjoy. So for some people, that might mean that I’m able to, like, blow through, you know, 30 emails in 30 minutes and get to inbox zero. But for other people it means I get to take my kids to school without having to rush, and I still feel okay when I get to the office.  But most importantly, what productivity really means is: it means a different way of thinking.

    [MUSIC: j. cowit, “Lazy Susan” (from Metamorphosis World Peace)]

    This is the crux of Duhigg’s book – that the only way you’ll change your outcomes is to think differently about how you’ve been arriving at those outcomes. It’s one of those statements that is obvious in retrospect but weirdly non-obvious to a lot of us caught up in the thrum of daily life. For instance:

    DUHIGG: When electricity was first popularized, there was this huge wave of factories that replaced their steam engines with electrical engines. And almost none of the productivity of those factories rose initially. This has been referred to in economics literature as the productivity paradox. And as researchers went back and they tried to figure out why, what they found is that all the factory managers had arrayed all of the machines, had lined them up on the factory floor, so that they could have these steam pipes that would run from machine to machine. And when they electrified the plants, they left all the machines in the same places; they just replaced the pipes with wires. It took like 20 or 25 years for plant managers to start saying, “Look, the strength of electricity isn’t simply a new power source. It’s that we can move these machines in ways that we can have workers work more efficiently or we can use less people or we can create an assembly line.” And that’s where the productivity increase really came from.  And the same thing is happening today.

    Meaning, it’s not enough to blithely accept the many new tools the digital revolution keeps shoving in our hands. We need to rethink how to best use them and toward what end.

    DUHIGG: There’s this debate about whether the digital revolution is really increasing productivity, and when economists and people with common sense take a step back, what they say is, “Look, it’s not about all these gadgets and apps; it’s about learning new ways to think about possibilities, new ways to think about our capacity for work.” And when that really gets spread through the population,  that’s when productivity really increases.

    That debate — about how much, or even whether, the digital revolution is actually increasing productivity — has been playing out here on Freakonomics Radio. One episode we did was called “Yes, the American Economy Is in a Funk – But Not for the Reasons You Think.” Another was called “How Safe Is Your Job?” Those conversations dealt mostly with the macro view. But hey, we’re all self-interested animals, aren’t we? So you want to know what the digital revolution means for you.

    DUHIGG: All of us only have 24 hours each day, but some people seem to get a lot more done within that 24 hours, and they seem less stressed and sort of worked up about it. And the reason why is not because they’re kind of hacking themselves or they’re pulling strings. They’re not really focused on efficiency, what they’re focused on is trying to figure out what are the right goals that I should be chasing after?

    DUBNER: Now, before we get into the specifics of what leads to a more productive life, whether in work or in the personal sphere, persuade us that the examples you’ll be using and the data that you’ll be presenting aren’t cherry-picked. In other words, persuade us that you’re not just telling success stories and then reverse-engineering them to present seemingly causal factors that might in fact be nothing more than correlation and perhaps even just coincidence.

    DUHIGG: I talked to, I don’t know, four or five hundred people for this book. And I had this basic rule, which was: that when someone told me something that they felt made them more productive, that I wouldn’t include it in the book unless it seemed to be universal. And so if I talked to four or five hundred people, I probably heard 300 different ideas about how to increase productivity. But what I would find is that one set of ideas would work for a group and then another group would say exactly the opposite. So a good example of this is, like, the fanatical devotion to one goal at all costs. When I talked to people in Silicon Valley, they would say, “Here’s the most important thing on being productive, is that you choose, like, one outcome and you just remain persistent.” And then I would talk to people in big companies and they’d say, “Here’s the thing about being productive. You have to be flexible. You can’t commit yourself to one goal.” And this happened again and again and again, except that I did notice that there was this small handful of consistent ideas that kept on coming up. As I boiled through all of these stories and all of these papers that I was reading and all of these experts, there were really only eight things that came up again and again and again.

    [MUSIC: All Good Funk Alliance, “Slingshot Boogie” (from Slingshot Boogie)]

    Okay, as you heard, according to Charles Duhigg and his band of productivity freaks, there are eight key tools or skills. And they are? I believe we need a sound effect here, please.

    Thank you. Number one: motivation.

    DUHIGG: We trigger self-motivation by making choices that make us feel in control. The act of asserting ourselves and taking control helps trigger the parts of our neurology where self-motivation resides.

    DUBNER: Focus.

    DUHIGG: We train ourselves how to pay attention to the right things and ignore distractions by building mental models, which means that we essentially narrate to ourselves what’s going on as it goes on around us.

    DUBNER:  Goal-setting.

    DUHIGG:  Everyone actually needs two different kinds of goals. You need a stretch goal, which is like this big ambition, but then you have to pair that with a specific plan on how to get started tomorrow morning.

    DUBNER:  Decision-making.

    DUHIGG: People who make the best decisions tend to think probabilistically. They envision multiple, often contradictory, futures and then try and figure out which one is more likely to occur.

    DUBNER: Innovation.

    DUHIGG: The most creative environments are ones that allow people to take clichés and mix them together in new ways. And the people who are best at this are known as innovation brokers. They’re people who have their feet in many different worlds and, as a result, they know which ideas can click together in a novel combination.

    DUBNER: Absorbing data.

    DUHIGG: Sometimes the best way to learn is to make information harder to absorb. This is known in psychology as “disfluency.” The harder we have to work to understand an idea or to process a piece of data, the stickier it becomes in our brain.

    DUBNER: Managing others.

    DUHIGG: The best managers put responsibility for solving a problem with the person who’s closest to that problem because that’s how you tap into everyone’s unique expertise.

    DUBNER: Teams.

    DUHIGG: Who is on a team matters much, much less than how a team interacts.

    Okay, got that? Motivation, focus, goal-setting, decision-making, innovation, absorbing data, managing others, and teams. Some of these are obviously more geared toward workplace productivity, but we’ll see if we can’t smuggle them over the border into the personal realm.

    DUBNER: I was really taken  with your first chapter about motivation.  And I wonder if you could talk for a minute about how control plays into motivation. In other words, if I’m a parent wanting to motivate what I think is a lackadaisical teenager in school,  what works, what doesn’t work and so on?

    DUHIGG: So, in many ways, the foundation of motivation is what’s known as the “locus of control” in psychology. And everyone either has an internal locus of control, which means that they believe they control their own fate, or an external locus of control, which means that they think things just happen to them and they’re powerless.

    DUBNER: Now, wait a minute, when you say that everyone has one or the other, it can’t be that black and white, plainly, right? The world is not divided into the external and internal.

    DUHIGG: But people exist along this continuum, right. And we’ve all met people who are one way or the other; we’ve all met people who sort of believe, “If I decide to climb that mountain, I can do anything.” And others who complain all the time, “You know, I wanted to get a better job, but my boss is mean to me, and I’m never lucky, and it doesn’t work out.” And  what’s interesting is that the influences of internal versus external locus of control are kind of surprising. Like, for instance, there’s been experiments that show that when teachers tell kids that they’re really smart, that they did well on a test because they must be really smart — that actually triggers our external locus of control because most people don’t believe that they have any influence over how smart they are. It’s either something you’re born with or it’s not.  Whereas when teachers tell kids, “You did great on this exam, you must have worked really hard” — that reinforces an internal locus of control because we all know, “I choose how hard I work.” And what we’ve found is that self-motivation and motivation in general seems to rely on believing like we’re in control.

    DUBNER: Okay, so the implication is that there’s a certain kind of compliment or praise that is more powerful or that leads to higher productivity, yes?

    DUHIGG: That’s exactly right.  What we know is that you can train people to believe that they’re in control of their own life, and more importantly, to get them addicted to that kind of pleasant sensation that kind of comes from being in control. One of my favorite examples of this is something that Mauricio Delgado, a neurologist, mentioned to me, is driving down the freeway. You know when you’re, like, stuck in traffic on the freeway and you see an exit and you know that it would take just as long to get home by taking that exit, but, like, your brain wants you to, like, turn the wheel and take the exit even though it won’t get you home any faster. That’s because we learn this kind of almost emotional pleasure that comes from taking control.

    You can see how you can practice this as an individual. But institutions are trying to improve as well. Duhigg writes about the U.S. Marine Corps overhauling their basic training a while back …

    DUHIGG: … because they were getting all these recruits who were kind of, like, wet socks. They didn’t know how to self-motivate. And so the guy who was in charge of the Marine Corps, Charles Krulak, who’s a general, said, “We need to start teaching people to have this internal locus of control. We need to teach them how good it feels to take control, to assert themselves, because then they’ll learn how to self-motivate.” And so he instituted a couple of rules and one of them was that you can only compliment people on things that are unexpected. So this drill sergeant told me that he never tells someone who’s a natural athlete that they just ran a good race. He only tells, like, the small kind of wimpy kids that they just did a great job running.  The Corps as a whole never tells anyone that there’s such a thing as natural-born leaders. Because that implies that you don’t have any control over whether you’re a leader or not. Instead what they do is they compliment shy people who take a leadership role. And they say to them, “Look, I know it was hard for you to do that, but you did a great job.”

    [MUSIC: Aubrey Agard, “Mister Sunshine” (from Mister Sunshine)]

    Coming up on Freakonomics Radio: how to make to-do lists that really work. How Google learned to build a better team. And how to define productivity on your own terms – as you’ll hear in a moment from our listener Hayley McCoy, from Bend, Oregon. I love Hayley’s level of self-examination, and I especially love how many times she can say the words “productive” and “productivity” in one sentence and still make perfect sense!

    HAYLEY McCOY: Hi, Freakonomics. I would like to improve my productivity but I think there’s a hidden complexity in the pursuit of productivity. I grapple with the question of, “What is productive?” I like to engage in creative activities like painting, writing, reading, but is that productive? I’m always striving to know how my time would be best spent, which has really made me dive into philosophy lately. I’m also trying to define productivity in my terms. I guess I’ll try to be productive in defining productivity for myself so that I can start being productive.

    *      *      *

    I’m Stephen Dubner, host of Freakonomics Radio, which means this is my show, which means I more or less lead our production team. As I was reading Charles Duhigg’s book about productivity – it’s called Smarter Faster Better — I had a rather unsettling realization, which I told Duhigg about.

    [MUSIC: Quel Bordel, “Aller En Soirees” (from Qui Ne Chanti Pass)]

    DUBNER: In the chapter on teams, you write at some length about the qualities of a good team but particularly the qualities of a leader of that good team.  You write, “Teams need to believe that their work is important. Teams need to feel that their work is personally meaningful. They need to have clear goals and defined roles. Team members need to know that they can depend on one another. But most important, teams need psychological safety.” So, I have to say, when I read that list I realized that  I am the world’s worst leader imaginable, that I don’t do any of that. I don’t think about it.

    DUHIGG: Well, I would actually guess that you’re better than you’re letting on.

    DUBNER: You would guess wrong. I’m going to tell you and you should, uh, you should speak with the other people on my team and they’ll back me up.

    DUHIGG: But I think you hit on something really, really powerful, which is that, that list of things that you just read, they are not efficient. So one of the things that’s really important about creating the right group norms that make a team productive is that everyone has a chance to kind of socialize with each other a little bit, right? ’Cause you want to create this “high-average social sensitivity,” and the only way you do that is to get people to talk about their lives a little bit. Now, we’ve all had the experience where you go into a meeting and, like, for the first five minutes people just, like, talk about their weekend and their kids and who’s sick and they gossip and you think to yourself, “God, can we please just start this meeting? We’ve got business to get done.” And I have that same instinct, which is to say I want to prioritize efficiency. But study after study shows that if we spend a couple of meetings with that, five minutes of getting to know each other, over time, our group will actually be much, much more productive. So sometimes it’s about sacrificing the short-term efficiency for the long-term productivity.

    Duhigg’s view of productive teamwork comes largely from a massive research project at Google.

    DUHIGG:  We are lucky beneficiaries of the fact that Google decided to spend millions of dollars in four years trying to figure out how to build the perfect team.

    Google is consistently named among the best American companies to work for.

    DUHIGG: And they spent a lot of that time thinking that, like, the question was: who do you put together? Do you put introverts and extroverts? Do you want people who are friends away from the conference table or do you want, you know, a flat leadership system or like a really strong leader?

    DUBNER: You write a little bit further, having to do with Google — and I believe it was Laszlo Bock who runs their — what’s that division called — the People …

    DUHIGG: … People Operations.

    BOCK: Hi this is Laszlo.

    [MUSIC: The Harmed Brothers, “Carolina” (from Better Days)]

    Yes: Laszlo Bock, senior vice president of People Operations at Google.

    BOCK: I’m basically in charge of the care and feeding of our Googlers, our employees, and making sure they get here, they are happy, they are productive and they stay a long time.

    Under Bock, Google ran two productivity studies.

    BOCK: Years ago we did something called Project Oxygen and the underlying question behind Oxygen was, “Do managers matter?” And if they do, what can we do to make managers more effective? What can we do to create a place where management is essential and it’s as helpful as oxygen?

    Bock says Project Oxygen was useful, but as important as a manager may or may not be, there’s the rest of the team as well.

    BOCK: So we decided to look at teams as a unit of analysis. And Project Aristotle is all about figuring out how to make groups of people happier and more effective.  

    And Google, being Google, looked at a lot of data:

    BOCK: We looked at 200 different teams across every part of Google — every geography, all around the world, in sales, in marketing, in finance, and in engineering, everything we were doing. And the outcome metrics we looked at were not just performance ratings but measures of what kind of innovation came out of the team? How quickly were they moving? Did people stay on the team? Did they not? Were people happy? Were they not? What did other people outside the team think of that team’s performance? And then we spent a lot of time kind of crunching all those numbers and teasing through the qualitative interview data to try to isolate what actually was making a difference in team performance.

    Google’s findings did not jibe with a lot of earlier academic research, or other conventional wisdoms.

    BOCK: So for example, in the academic research it says consensus-driven decision-making is often better than, sort of, top-down direction.  And academic research says workload matters a lot. Having teams in the same location. We actually found none of those things were in the top five of what mattered in terms of effectiveness for teams.

    Here’s how Charles Duhigg puts it.

    DUHIGG: What matters isn’t who is on the team. What matters is how the team interacts.

    BOCK: So we were surprised that these things that everybody kind of says matter ended up not mattering. For example, the most important attribute of a high-performing team is not who leads it or who’s on it or how many people or where it is. It’s psychological safety.

    DUHIGG: … which means that everyone at the table feels like they have the opportunity to speak up and they all feel like each other is actually listening to them, as demonstrated by the fact that their teammates are sensitive to nonverbal cues.

    BOCK: We ask if the team members feel that they can fail openly or do they feel that they are going to be shunned by failing? We ask, do they feel as if other team members are supporting or undermining them?  

    Bock and his team identified five norms that the best Google teams had in common, beginning with psychological safety.

    BOCK: That unlocks all kinds of goodness.  Another one of our norms is dependability. Dependability is the notion that: you tell me to do something, I’m going to get it done and you can rely on me to get work done. Structure and clarity, actually two things but they sort of relate. Basically the idea is: people should know what everyone’s job is and that should be a shared understanding across the team. Another norm is meaning — that the work should be personally meaningful to every person in the room. And the last thing is impact — that team members need to think and believe that their work matters and actually creates change.

    Bock admits that most of these norms are pretty obvious. But that doesn’t mean everybody uses them. And there are other tricks a good manager should think about. For instance:

    BOCK: “Are you having regular one-on-ones?” — which is obvious, like you should have one-on-ones with your team members. Turns out most people don’t ‘cause they are not that fun, they’re kind of boring, they take time. But when you do them, your team performs better.

    Another example:

    BOCK: Are you making sure everyone in your team feels included?  Obvious, kind of logical, we should do that. But not everyone does, if you think about meetings that you may have been in, there is often somebody sitting off to the side that sits quietly for the whole meeting and never says anything. Rarely does the person leading the meeting say, “Hey you know, we haven’t heard from Frank during this entire meeting. Frank, what do you have to say?” Or “Gail, you’ve been silent this entire conversation, do you have a perspective?”  And so having a checklist that says, “Are you checking on these things, are you calling out the quiet people?” goes a long way to making teams more effective.

    But again, “effective,” like “productive,” isn’t necessarily the same as efficient.

    BOCK: One of the hardest things about looking at team performance is that it’s really hard to figure out what outcomes you care about. We want teams in every way to be more productive and efficient but also happier and stick around longer.

    Because continuity, in the end, can be extremely productive for any institution. But let’s say you work largely on your own, that you’re not a member of any work team, much less a leader — maybe you don’t even have a job per se; maybe you’re a craftsperson or a freelance consultant, or maybe an athlete or musician or a chef. In other words, you are on an island.

    [MUSIC: Israel Nash Gripka, “Let Me Down” (from Israel Nash Gripka)]

    When you don’t have a team leader to keep their eye on you, how do you think about productivity?

    DUHIGG: I think that in general people know when they’re actually productive.

    Charles Duhigg again.

    DUHIGG:  If you sat down with someone and you said to them, “Are you productive?” they would give some anodyne answer, right? Like, “Yeah, I’m pretty productive” or “No, I’m not that productive.” And then you would say, like, “Tell me what you did yesterday. Did you spend your time wisely?” I think that people could go through their day and they would tell you, “Look, I spent a couple of hours, like, watching soccer with my kid and that might not seem productive, but honestly, that’s really important time that me and my son have together. And then I spent another couple of hours working on emails and that might seem productive from the outside, but actually what I should have done was I should have just deleted a bunch more of them or ignored them because  they really won’t matter a week from now whether I sent that response or not.”  That people are very good at actually analyzing whether they’re productive or not. The problem is that very frequently, we don’t stop to analyze. We don’t reflect on what’s actually happened. And that part of this idea of managing your own brain, learning how your brain works so that you can take control of your focus and your motivation and how you manage yourself and others, is that it forces us to really sit back and analyze: am I spending my time the way that is really meaningful to me or am I simply reacting to other people’s priorities and the busyness of life?  

    DUBNER: One thing that I, and I think a lot of people wonder, especially if they are makers, you know, they’re responsible for their own projects, their own income, their own schedules and so on, but even if they are purely managers, even if they’re working within a firm, I think a big question is, how many projects or ideas seem to be the optimal number? You know, too few and we may not get much done, too many and we may never complete any. So what does the science have to say about that?

    DUHIGG: There’s actually a really interesting study that was done, where a couple of MIT academics got access to hundreds of thousands of emails that this one firm had sent. People were corresponding to each other and they could correlate it with data on how many projects people were working on and how much profits they were bringing in. And what they found is that there are some people who don’t work on enough projects. So, they might work on two or three things at the same time and they’re just not maximizing their opportunities. But there are also some people who work on 10 or 12 projects at once and they’re stretched too thin and as a result, they can’t spend enough time actually devoting attention to each project. So what was optimal was the people who were somewhere in the middle. But what was really interesting was that the kinds of projects that they chose were critical to how productive they were. Now you would think that what people would want to do is they would want to find a bunch of similar projects so they’re doing the same thing over and over, faster and faster and faster. It’s exactly the opposite. The people who were most productive were the ones who were seeking out new and different kinds of projects because it taught them something new with each iteration. That’s why they would only work on a handful of projects, you know, four or five at any given time, is because working on something new, it takes a lot of time. To learn takes more time than simply to execute. And yet it turns out that over time, the more you learn, the more value you add, and, as a result, the more productive you are.

    DUBNER: Talk for a minute about writing the perfect to-do list, and I’m curious to know whether you’ve been able to follow your own advice.

    DUHIGG: I have, actually. So it used to be that I wrote to-do lists I think like most people did. I would start by writing at the top of the page a couple of easy tasks because I want something that’s going to kind of like —

    DUBNER: Brush teeth.

    DUHIGG: Yeah, brush teeth or like, you know, read all of my emails or turn on my computer. Sometimes I would actually write at the top of my list things that I had already done because it felt so good to, like, start the day by crossing it off and feeling accomplished and this is actually — within psychology, this is known as using a to-do list for mood-repair. And it’s the exact opposite of productive. Because what happens is that I cross a couple of the easy things off and then I feel like I’ve accomplished something and then I give myself permission to go check Facebook and then 45 minutes later I pay attention to what’s going on again.

    DUBNER: Okay, fair enough, but let’s say, there must be some people who react in exactly the opposite direction, no? Some people who kind of feel like the pump is primed, and I’ve turned on my computer, and I’ve brushed my teeth and I’ve remembered to, you know, breathe, let’s say. Having accomplished all that, now I can, you know, buy that new insurance plan that I need to for my firm, which is the thing that I’ve been dreading.

    DUHIGG: Sure, and the question then becomes how do you remind yourself of the bigger task?  And so what psychologists recommend is that on your to-do list you have two types of goals. At the top of the page you write a stretch goal. And the stretch goal should be that big ambition. And then underneath that,  you should write something that makes that stretch goal tangible and into a plan.  And one of the systems for doing this is this thing called SMART goals, right?

    SMART – that stands for “specific,” “measurable,” “achievable,” “realistic,” and based on a “timeline.” The SMART-goal system was developed years ago within General Electric. Unlike the big, stretch goal, with a SMART goal …

    DUHIGG: … you take a component of this big ambition and you say specifically what you want to get done and how you’re going to measure it and is that achievable? Like if you want to do something, do you have to clear your schedule in the morning? Is it realistic? If you’re going to clear your schedule, does that mean that you don’t turn on your email at all because you know it’s going to distract you? And what’s the timeline for getting the sub-goal done? And it’s very easy to do this with a to-do list. I do it every single morning and it takes me 45 seconds to figure out what my stretch goal for the day is, what my SMART goal is for when I get to my desk, what exactly I’m going to do right away. It’s almost a habit, but it transforms how much I get done because if you sit down and at the top of your list it says, “Go buy the insurance plan for my entire company,” and then underneath you have this, like, very distinct plan, like, specifically what you’re going to do when you first sit down, how you’re going to measure it, what you need to do to make it achievable and realistic, how much time it’s going to take, it’s really easy to start. You’ve basically gotten over the hump.
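    Duhigg’s stretch-goal-plus-SMART-goal list is easy to make concrete. Here is a minimal sketch in Python; the class names, field names, and example entries are all illustrative, not from the episode:

```python
# A minimal sketch of Duhigg's to-do-list structure: one stretch goal
# at the top of the page, backed by SMART sub-goals that make it
# actionable. All names and example values here are illustrative.
from dataclasses import dataclass, field


@dataclass
class SmartGoal:
    specific: str    # exactly what you will do
    measurable: str  # how you will know it's done
    achievable: str  # what you must clear to make it possible
    realistic: str   # the constraint you accept (e.g. no email)
    timeline: str    # when it will be done


@dataclass
class TodoList:
    stretch_goal: str                       # the big ambition at the top
    smart_goals: list = field(default_factory=list)


today = TodoList(
    stretch_goal="Buy the insurance plan for the company",
    smart_goals=[
        SmartGoal(
            specific="Compare the three shortlisted insurance quotes",
            measurable="A one-page summary with a recommendation",
            achievable="Clear the 9-11am block on the calendar",
            realistic="Keep email closed for those two hours",
            timeline="Done before lunch today",
        )
    ],
)

# The list leads with the ambition, then the concrete first step.
print(today.stretch_goal)
print(today.smart_goals[0].specific)
```

    The point of the structure is the ordering: the ambition is written first, but the very next line tells you exactly what to do when you sit down.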

    DUBNER: That’s such a good example. So, Charles, it’s a little after 5:30pm in the east where you and I are talking. And I’m just curious about your goal for the evening, maybe even your stretch goal for the evening. Some people look at 5:30 and they think “cocktail hour,” some people look at 5:30 and they think, “Let me get a little more work done.” What’s your stretch goal for the night?  

    DUHIGG: My stretch goal is to make it home for family dinner. I have to say my wife and I have established a family stretch goal of having family dinner twice — at least twice a week during the weekdays and we’ve worked backwards to reorganize our days so that we can do that.

    DUBNER: Way to go. Bon appetit.

    DUHIGG: Thanks.

    [MUSIC: Eric Hastings, “Dollar in My Pocket”]

    And thanks to Charles Duhigg for trying to make all of us a bit more productive. Will it work? I hope so – for me, and for you. Let us know if this episode indeed feeds your productivity beast in any meaningful way. You can send your feedback via Twitter or Facebook or the iTunes podcast store. You can also send an e-mail, though I can’t promise to reply. In fact, there’s a very, very good chance I won’t. Because one way I stay productive is by saying “no” to just about everything I possibly can — and that includes replying to about 99 percent of my email. But I do read all of them, so please do let us know if you are getting more productive. And next week on Freakonomics Radio, we asked what talents or skills you wanted to get better at:

    LAUREN MADRONICH: I would like to get better at asking critical questions, both scientifically and interpersonally.

    SARAH CATE PFISTER: I would really love the ability to become an expert performer.

    JON BALL: I would like to not hate to work out and exercise.

    CHERYL MECKLEY: My goal is to propel myself into pretty good improviser status one day.

    SACHIN SHAH: I would love to be able to sing a sappy, romantic song for my wife while accompanying myself on the guitar.

    KEN RYAN: In the spirit of the return of golf season, I would most like to shoot below 90 for the first time and then build upon that success.

    We talk to the pioneering research psychologist Anders Ericsson about deliberate practice, the 10,000-hour rule, and how to get really, really good at just about anything:

    ANDERS ERICSSON: We find that with the right kind of training, any individual will be able to acquire abilities that were previously viewed as only attainable if you had the right kind of genetic talent.

    We also try to sort out a little disagreement between Ericsson and Malcolm Gladwell over the 10,000-hour rule:

    GLADWELL: I come from a very musical family. I know what musical talent looks like. I know that I don’t have it.

    Ten thousand hours, Malcolm Gladwell, a singing psychotherapist from Denmark – and more. That’s next week on Freakonomics Radio.

    *      *      *

    Freakonomics Radio is produced by WNYC Studios and Dubner Productions. Today’s episode was produced by Arwa Gunja. Our staff includes Shelley Lewis, Jay Cowit, Merritt Jacob, Christopher Werth, Greg Rosalsky, Alison Hockenberry, Emma Morgenstern, Harry Huggins and Brian Gutierrez. If you want more Freakonomics Radio, you can also find us on Twitter and Facebook and don’t forget to subscribe to this podcast on iTunes or wherever else you get your free, weekly podcasts.

    Here’s where you can learn more about the people and ideas in this episode:


    Charles Duhigg, reporter at The New York Times and author of Smarter Faster Better: The Secrets of Being Productive in Life and Business 

    Laszlo Bock, Senior Vice President of People Operations at Google, Inc.


    Smarter Faster Better: The Secrets of Being Productive in Life and Business by Charles Duhigg (Random House, 2016).

    The Power of Habit by Charles Duhigg (Random House, 2012).

    Work Rules!: Insights from Inside Google That Will Transform How You Live and Lead by Laszlo Bock (Twelve, 2015).

    “Praise for Intelligence Can Undermine Children’s Motivation and Performance,” Claudia M. Mueller and Carol S. Dweck (1998).

    “Google’s Project Oxygen: Do Managers Matter?,” by David A. Garvin, Alison Berkley Wagonfeld and Liz Kind (2013).

    “What Google Learned From Its Quest to Build the Perfect Team,” by Charles Duhigg, The New York Times, (February 25, 2016).

    “Managing the risk of learning: Psychological safety in work teams,” by Amy C. Edmondson (2002).

    —Huffduffed by jrsinclair

  6. How to Become Great at Just About Anything (Rebroadcast) - Freakonomics

    Sure, practice makes perfect — but how you practice matters even more than how much. (Photo: Harry Engels/Getty Images)

    Our latest Freakonomics Radio episode is called “How to Become Great at Just About Anything (Rebroadcast).” (You can subscribe to the podcast at iTunes or elsewhere, get the RSS feed, or listen via the media player above.)

    What if the thing we call “talent” is grotesquely overrated? And what if deliberate practice is the secret to excellence? Those are the claims of the research psychologist Anders Ericsson, who has been studying the science of expertise for decades. He tells us everything he’s learned.

    Below is a transcript of the episode, modified for your reading pleasure. For more information on the people and ideas in the episode, see the links at the bottom of this post. And you’ll find credits for the music in the episode noted within the transcript.

    *      *      *

    [MUSIC: Tobias Gebb and Trio West, “Auld Lang Syne” (from Plays Holiday Songs)]

    Before we get on with today’s show – an encore presentation of one of our most popular episodes ever, “How to Become Great at Just About Anything” — I’ve got a quick favor to ask. As you may know, Freakonomics Radio is produced by the public-radio station WNYC. And a big part of public-radio funding comes from listener donations. From people like … you! So please go to our website, click the donate button, and do your thing. You can also donate by texting – just send the word “freak” to the number 698-66 and you’ll get sent a link to the donation page. This is, of course, the best time of year to donate. Not only because it’s the season for generosity. Not only because it’s your last chance to qualify for a 2016 charitable-gift tax break. But also because your donation, right now, will be tripled. How’s that? The Tow Foundation, a generous supporter of WNYC, has offered to triple donations to Freakonomics Radio; they’ll contribute up to $50,000. So if you give $100, that’s a $300 donation to our program. And if you want to use up the Tow Foundation’s generosity in one shot, just send us $16,666.67, and that’ll land their $50,000. All you have to do is go to our website, click on donate, or text “freak” to 698-66. We have some great Freakonomics Radio swag to choose from, including Titleist golf balls marked with the Freakonomics Radio logo. Which are good for golfing but also, I’ve discovered, small dogs love to play with them. Probably cats too, but I don’t have any cats. In any case – happy holidays, thanks for your support, and thanks especially for listening to Freakonomics Radio.

    [MUSIC: Pat Andrews, “Quirky Get Faster”]

    Last week, we offered some advice on how to become more productive.

    CHARLES DUHIGG: There’s actually a big difference between being busy and being productive.

    Now that you’ve all mastered productivity, we’re moving on to something a bit more ambitious. How to become great at just about anything. Because that’s what you told us you wanted.

    [MUSIC: The Sometime Boys, “The Butterfly” (from Ice and Blood)]

    SARAH CATE PFISTER: I would really love the ability to become an expert performer.

    CHAD HYDRO: I compete in the sport of powerlifting and so if I could better perform in that sport, that would certainly be what I would most like to accomplish.

    ELENI GALATA: I would like to improve and excel at presenting my work in front of an audience.

    KEN RYAN: I would most like to shoot below 90 for the first time and then build upon that success.

    J.R. PATRON: Hi, Stephen and the rest of the Freakonomics team. This is J.R. Patron from Metro Manila, Philippines. I most definitely want to up my guitar-playing skills. So how do I do it?

    How do you do it? How do you attain excellence in anything? Is it all about the genes, the natural-born talent? Or: is there an actual science of expertise?

    SUSANNE BARGMANN: So, my name is Susanne Bargmann, and I am a psychologist. And I work as a teacher and a supervisor here in Denmark.

    Bargmann lives …

    BARGMANN: … a bit north of Copenhagen, which is the capital of Denmark.

    Bargmann is 42, married, with two kids. About eight years ago, she and an American colleague were studying what they saw as a lack of progress in their profession.

    BARGMANN: And what we can see when we look at the research is that the outcome of psychotherapy hasn’t really improved over the last 40 years. And that had us puzzled. So we started looking in other directions to try and figure out why, or what would make us improve. And then we came across K. Anders Ericsson’s work on deliberate practice.

    STEPHEN DUBNER: Hello, Anders?

    ANDERS ERICSSON: Hi, Stephen!

    DUBNER: How are you?

    ERICSSON: I’m doing very well.

    And that is K. Anders Ericsson.

    ERICSSON: … and I’m a professor of psychology at Florida State University in Tallahassee, Florida.

    Ericsson is the man of the hour on today’s show; we’ll get back to him soon. It was his research on something called deliberate practice that got the Danish psychologist Susanne Bargmann excited.

    BARGMANN: I’d been plowing through all the literature on deliberate practice, but it still seems a bit abstract when you read it. It was hard for me to really understand what it felt like so we started talking about how we could try this out on ourselves. And after discussing this for a while, we decided if we are going to study the process it needs to be not our work, because we’re too close to our work to be able to see it. So we decided to pick up something else outside of our work and then apply the principles of deliberate practice.

    So Bargmann wanted to use deliberate practice to try to improve at something, but something personal, not her profession. What should she do?

    BARGMANN: When I was a kid, I had this dream of becoming a famous singer.

    Her favorite singer?

    BARGMANN: It was Whitney Houston. Oh, she was amazing.

    But the dream got deferred, and then …

    BARGMANN: Life took over, so instead, I became a psychologist and had a family and had a job.

    Now, however, many years later, as part of her job, Bargmann thought that maybe …

    BARGMANN: … I should give it a go and see if it was actually possible to improve my singing, improve my voice.

    So she got back to it. The first thing to do was record herself to see what she sounded like.

    BARGMANN: I started using this karaoke program, and I started singing. And then I started listening, and it was really horrible.  

    So did that mean that Susanne Bargmann just didn’t have the tools, or maybe the natural talent, to be good at what she wanted to be good at? Or was there a way to become less horrible? Maybe to become … even … great?

    *      *      *

    [MUSIC: The Society of Rockets, “Olivia Odyssey” (from Olivia Odyssey)]

    The research psychologist Anders Ericsson recently published, along with co-author Robert Pool, a book called Peak: Secrets from the New Science of Expertise.

    DUBNER: So, let’s pretend for a moment that I’m skeptical off the bat and I say, “Well, Professor Ericsson, is there a science of expertise? That sounds like a bit of an overreach, perhaps.” How do you respond to that?

    ANDERS ERICSSON: Well, I think this is what is exciting here about our work is that, for the first time, we really have been studying in more objective ways, pinpointing what it is that some people are able to do much better than other individuals.

    Among the many and diverse expert performers that Ericsson and his colleagues have studied:

    ERICSSON: Ballet dancers, gymnasts, and all sorts of athletes, a lot of coaches; we’ve looked at chess experts, surgeons, doctors, teachers, musicians, taxi drivers, recreational activities like golf, and even, there’s some research on scientists.

    Let me admit that I’ve been fascinated for years by Ericsson’s research. I was introduced to it by this guy:

    STEVE LEVITT: Dubner, how are you doing?

    Steve Levitt is my Freakonomics friend and co-author; he is an economist at the University of Chicago.

    DUBNER: So, Levitt, I still remember very well the day — it was maybe 10 years ago — when you called me up, and you said you had a great idea for a column that we were writing. You said it was this big, Swedish psychologist that you had met while you were on sabbatical at Stanford, I think. A fellow named Anders Ericsson. What was it about Anders and those conversations you had with him, and his research, that got you so excited?

    LEVITT: He was infectious. His ideas and his enthusiasm just set me on fire. It was interesting because he studied topics I hadn’t really thought could be studied, like expertise and learning. The beauty of Anders — he’s really an amazing academic in the sense that he just was so interested in what he did and also so interested in the truth and willing to be challenged. I do remember. I remember I had lunch with him, and I immediately came back and called you on the phone and said, “We’ve got to write about this guy. He’s amazing.”

    We did write about him, in a Freakonomics column for The New York Times Magazine. It was called “A Star Is Made.” It became one of the most popular things we ever wrote, I think, because it asked a very basic question: is the thing that we all call talent perhaps grotesquely overrated?

    LEVITT: The part that really resonated with me is the idea that absent hard work, no one is really great at anything — because it’s an interesting insight. We’d like to think that Wayne Gretzky or Michael Jordan or Taylor Swift just emerge as savants, but they don’t. If you start with someone with talent, and another person who has no talent, if the person with talent works just as hard as the person without talent, almost for certain they’re going to have a better outcome. So, if our measure is true virtuosity, true expertise, it seems unlikely to me that this populist version of “oh, you don’t have to be good; you just have to try hard,” I think that’s probably a fallacy. But I firmly believe the other direction, which is that: if you don’t try hard, no matter how much talent you have, there’s always going to be someone else who has a similar amount of talent who outworks you, and therefore outperforms you.

    ERICSSON: Exactly. 

    [MUSIC: Rudy Pusateri, “Hot Springy Bass”]

    Here’s Anders Ericsson again.

    ERICSSON: We actually find that with the right kind of training, any individual will be able to acquire abilities that were previously viewed as only attainable if you had the right kind of genetic talent.

    DUBNER: Would it be fair to say that the kind of overarching thesis of your work is that this thing that we tend to call talent, is in fact more of an accumulation of ability that is caused by what you’ve labeled “deliberate practice”?

    ERICSSON: I think that, that is a nice summary here of what we’re finding.

    [MUSIC: Wolfgang Amadeus Mozart, “Alla Turca”]

    For more than 30 years, Ericsson and his colleagues around the world have studied people who stand out in their field. They’ve conducted lab experiments and interviews; they’ve collected data of every sort, all in service of answering a simple question: when someone is very good at something, how did they get so good? If you can figure that out, the thinking goes, then any of us can use those strategies to also get much better at whatever we’re trying to do. You don’t necessarily need to have been born with a special talent, a special ability. Something like perfect pitch, or absolute pitch — that’s the ability to identify or produce a particular musical note, with no reference point. It’s an incredibly rare ability; roughly one in 10,000 people are thought to have it. And while having perfect pitch doesn’t guarantee that you’ll become a great musician or composer, it can be a big help. Consider one of the most acclaimed composers in history: Wolfgang Amadeus Mozart.

    ERICSSON: Mozart is famous for his ability to actually listen to any kind of sound and actually tell you what kind of note that sound corresponded to. That seemed like a magical ability that was linked to his ability to be outstanding in composing and playing music.

    But Ericsson has three points to make about Mozart. The first is that perfect pitch does not necessarily seem to be innate; it’s teachable, although it helps to start early. As evidence, Ericsson points to research showing that perfect pitch is much more common in countries like China.

    ERICSSON: In those countries where you’re actually speaking tonal languages, where the tone influences the meaning of words, it’s going to be much more frequent.

    DUBNER: Meaning people are trained from a very early age to identify pitch, yeah?

    ERICSSON: Well, that’s the only way you can identify the meaning of the words, because in Mandarin, the difference between different words is just the difference in their tone. So you actually need to be able to acquire that general ability, and what people have found is that you have a very high degree of individuals who exhibit perfect pitch in those countries. It’s becoming increasingly clear that, that is actually something that any individual, seemingly, with the right kind of training situation, can actually acquire, as long as they get the training early on, basically between four and six.

    DUBNER: So, rather than perfect pitch being this incredibly rare innate ability, it is a teachable ability, if you know how to teach it.

    ERICSSON: Exactly.

    [MUSIC: Wolfgang Amadeus Mozart, “Symphony No 14 K 114”]

    A second point about Mozart. Ericsson argues that as great as he was — having nothing to do with perfect pitch — that he wasn’t necessarily born that way; Mozart became Mozart by starting very young and training long and hard. We may think of him today as a freak of nature. But, Ericsson says:

    ERICSSON: If you compare the kind of music pieces that Mozart can play at various ages to today’s Suzuki-trained children, he is not exceptional. If anything, he’s relatively average.

    Did you catch that? Mozart as a young musician, compared to today’s good young musicians, would be relatively average. How can this be? This relates to the third point about Mozart. For his time, he was excellent. But over time, we humans generally become more excellent. Standards of excellence have risen, often a lot. In the book Peak, Ericsson writes of a more recent musical example: “In the early 1930s Alfred Cortot was one of the best-known classical musicians in the world, and his recordings of Chopin’s ‘24 Études’ were considered the definitive interpretation. Today teachers offer those same performances — sloppy and marred by missed notes — as an example of how not to play Chopin, with critics complaining about Cortot’s careless technique, and any professional pianist is expected to be able to perform the études with far greater technical skill and élan than Cortot. Indeed, Anthony Tommasini, the music critic at the New York Times, once commented that musical ability has increased so much since Cortot’s time that Cortot would probably not be admitted to Juilliard now.”

    ERICSSON: We have similar developments in any of the sports. In order to qualify to the Boston Marathon, if you could produce that kind of time, you would be competitive at the early Olympics.

    [MUSIC: Judson Lee Music, “Cheesy Race”]

    That’s right. In order to just qualify to run the Boston Marathon today, a male in the 18- to 34-year-old group has to have run a 3-hour, 5-minute marathon. That’s only about six minutes slower than the winner of the marathon in the first modern Olympics, in 1896. The current marathon world record? Two hours, two minutes, and fifty-seven seconds. That’s nearly 56 minutes faster than the Olympic gold medalist in 1896. Or consider the improvements in golf, which this year is returning to the Olympics after more than a century. In the 1900 Summer Olympics, the men played two 18-hole rounds; the American golfer Charles Sands won the gold medal with scores of 82 and 85, which, these days, wouldn’t get you on a good high school team in some parts of the country. Yeah, the equipment and ball have changed, a lot. But still: the undeniable fact — whether it’s golf or running the marathon or playing the piano — is that as a species we have improved a lot at just about everything. How? Have we been selectively breeding for talent? Perhaps.
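    The arithmetic in that comparison checks out. A quick sketch, assuming an 1896 winning time of 2 hours, 58 minutes, 50 seconds (a figure from the record books, not stated in the episode):

```python
# Checking the narration's marathon arithmetic. The 1896 winning time
# (2:58:50) is an outside assumption; the other figures are from the
# episode.
def to_seconds(h, m, s):
    """Convert an h:m:s finishing time to total seconds."""
    return h * 3600 + m * 60 + s

boston_qualifier = to_seconds(3, 5, 0)   # men 18-34 qualifying time
winner_1896 = to_seconds(2, 58, 50)      # first modern Olympic marathon
world_record = to_seconds(2, 2, 57)      # record cited in the episode

# "only about six minutes slower than the winner"
gap_qualifier = boston_qualifier - winner_1896
# "nearly 56 minutes faster than the Olympic gold medalist in 1896"
gap_record = winner_1896 - world_record

print(divmod(gap_qualifier, 60))  # → (6, 10): 6 min 10 sec
print(divmod(gap_record, 60))     # → (55, 53): 55 min 53 sec
```

    So a merely qualifying runner today would have beaten the 1896 field, and the current record holder would have lapped nearly the whole hour off that winning time.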

    But, that is not what Anders Ericsson thinks is largely responsible. He thinks we’ve gotten so much better primarily because we’ve learned how to learn. And that if you study the people who have learned the best, and if you codify the techniques and strategies that they use, then we can all radically improve. But let me warn you: there’s no magic bullet. Improvement comes only with practice — lots and lots and lots of practice. You may have heard of the “10,000-hour rule”? The idea that you need to practice for 10,000 hours to become great at something? That idea originates from the research of Anders Ericsson and his colleagues. They were studying the most accomplished young musicians at a German academy.

    ERICSSON: We found that the average of that elite group was over 10,000 hours by the time they reach 20.

    [MUSIC: KP Devlin, “Shampoo Party Zone” (from Occidental Taurus)]

    The secrets really boil down to one word: practice. Not just volume of practice — although we’ll get into that later. But the quality and the nature of the practice. There’s “purposeful practice,” for instance.

    ERICSSON: Purposeful practice is when you actually pick a target — something that you want to improve — and you find a training activity that would allow you to actually improve that particular aspect. Purposeful practice is very different from playing a tennis game or if you’re playing basketball scrimmages. Because when you’re playing, there’s really no target where you’re actually trying to change something specifically and where you have the opportunity of repeating it and actually refine it so you can assure that you will improve that particular aspect.

    And then there’s deliberate practice.

    ERICSSON: We think of deliberate practice requiring a teacher that actually has had experience of how to help individuals reach very high levels of performance.

    DUBNER: I want to go through one by one the components of deliberate practice and have you explain a little bit more if necessary, or acknowledge why they are important. So you write that “deliberate practice develops skills that other people have already figured out how to do and for which effective training techniques have been established.”

    ERICSSON: And I think that’s key.

    DUBNER: Which I guess helps us explain why a pianist from 80 or 100 years ago who was considered the gold standard is now considered not very good, because the instruction is built on top of itself to get people better faster, yeah?

    ERICSSON: Exactly, and I think the same thing in sports, where new techniques will allow individuals to reach kind of a higher level and practice more effectively than previous generations.

    DUBNER: You write that “deliberate practice involves well-defined, specific goals, and often involves improving some aspect of the target performance. It is not aimed at some vague, overall improvement.” Do you think that is a mistake that many people make when they’re trying to “get better at something”? A “vague, overall improvement”?

    ERICSSON: I think that is one of the most important pieces that we’re advocating, because you need feedback in order to be able to tell what kind of adjustments you should be making. If you don’t have a clear criterion here for what it is that you were doing, then it’s unclear how you actually are going to improve if you get subsequent opportunities to do the same thing. So anytime you can focus your performance on improving one aspect, that is the most effective way of improving performance.

    DUBNER: Here’s another component. You write: “Deliberate practice takes place outside one’s comfort zone and requires a student to constantly try things that are just beyond his or her current abilities.” That sounds horrible, first of all. You write, further, that it “demands near-maximal effort, which is generally not enjoyable.” So you just discouraged everyone from ever wanting to do deliberate practice. But why is that important? Do you want to get out of what’s comfortable because that enables you to try harder in a way that you otherwise can’t?

    ERICSSON: Well, I think this has to do with the body. If you’re just doing things that feel comfortable and go out and jog, the body basically won’t change. In order to actually change your aerobic ability, people now know that the only way you can do that is if you practice now at a heart rate that is above 70 percent of your maximal heart rate. So it would be maybe around 140 for a young adult. And you have to do that for about 30 minutes at least two or three times a week. If you practice at a lower intensity, the body will actually not develop this difficult, challenging biochemical situation, which will elicit now genes to create physiological adaptations. 

    DUBNER: Let’s say I’m a crummy piano player, and I want to become a good piano player. For something like that, or for something like writing, or for something like selling insurance, what does it mean to get outside of one’s comfort zone and why does that improve my ability to get good?

    ERICSSON: Deliberate practice relies on this fact that if you make errors, you’re going to find ways to eliminate those errors. So if you’re not actually stretching yourself outside of what you already can do, you’re probably not engaging in deliberate practice.

    •      *      *

    BOB FISHER: The thing which really enabled me to do all this was Ericsson’s deliberate-practice model.  

    Bob Fisher is a soil-conservation technician for the Natural Resource Conservation Service in Seneca, Kansas. Fisher has a number of world records.

    [MUSIC: Pat Andrews, “Basketball Boys”]

    FISHER: I currently hold 14.

    All the records are in free-throw shooting.

    FISHER: The first one is the one-minute record; I hold it with 52 currently. Most basketball free throws in one minute by a pair using a limited number of balls; most free throws in two minutes while alternating hands; most free throws in one minute by a pair using two basketballs; most free throws in one minute while alternating hands; most free throws while standing on one leg; most blindfolded free throws in one minute; most underhanded free throws in one minute; most basketball free throws in one minute by a mixed pair; and this one I am proud of: most basketball free throws in one hour — 2,371.

    Fisher is 58 years old, six feet tall. He’s been playing basketball a long time.

    FISHER: In high school, I started as a senior for a very small school and, no accolades, didn’t make any area teams or all-star teams or anything like that, at all. And I never considered going on and playing college ball because, quite frankly, I wasn’t good enough.

    So how did he become one of the most accomplished free-throw shooters on the planet? By devising a physics-based approach to shooting, augmented by Anders Ericsson’s gospel of deliberate practice.

    FISHER: And what he said was that people who continue to get better never allow themselves to go on automatic pilot; they’re continually breaking down the element they are trying to do and working on pieces and then putting it back together — which is nothing new. But I made a concerted effort to do that, and I think that was a large part of the reason for my success.

    And when Anders Ericsson talks about getting out of your comfort zone as a component of deliberate practice, Bob Fisher very much knows what he means.

    FISHER: Instead of just practicing, you are focused; you’re engaged; it’s like a rubber band. You are constantly stretching the rubber band, and you don’t want to stretch it to the point that it breaks, but you want it to have continual pressure. In other words, you want to try and do things that you are not able to do at the present time.

    This leads to one of the most compelling angles of deliberate practice – the neuroscientific angle. The idea that the brain not only steers our practice, but is also shaped by it.

    ERICSSON: I think this is one of the areas where we know the most.

    That’s Ericsson again. In Peak, he writes about a fascinating study by Eleanor Maguire, a neuroscientist at University College London. Maguire used MRIs to compare the brain growth of London taxi drivers and London bus drivers.

    ERICSSON: In London, taxi drivers have to memorize all the routes in the London area, and this is a process that takes a lot of training, and it basically takes years to master that body of knowledge.

    Bus drivers, meanwhile, with a set route, spend a lot less time pushing their brains to master new material.

    ERICSSON: And when you compare now these taxi drivers with bus drivers, you find this big difference in their brains. So, the process of encoding and mastering all these maps is associated with a change in the brains.

    So, you might have the most experienced bus driver in the world. But experience of that sort – driving the same route over and over and over again – doesn’t seem to lead to growth. Which, if you move the conversation out of transportation and into something like medicine … well, I asked Ericsson about that.

    DUBNER: There’s a scary part of your book about how people in many professions, as they do it longer, get more experienced, and there’s an assumption that they’re getting better and better. But you write that once a person reaches a level of “acceptable performance and automaticity,” the additional years of “practice” don’t lead to improvement. Can you talk for a moment about the value of experience for doctors, let’s say?

    ERICSSON: I think this points out that difference between deliberate practice and experience. If you’re just doing the same thing over and over, you’re not going to prepare yourself for dealing with a complicated situation. When we analyze the outcomes of medical procedures, just the mere number of procedures that you completed is not related now to the outcome. It turns out that surgery is a little bit different, because there, you often get very immediate feedback, especially about failures.

    DUBNER: But, you’re saying that it could be that a doctor who’s freshly out of medical school might be on some dimensions, at least, maybe some important dimensions, better than a doctor with 20 years experience?

    ERICSSON: Well, it’s interesting. When it comes to actually diagnosing heart sounds, when you test people with recordings of heart sounds, it turns out that general practitioners’ ability to diagnose decreases as a function of the number of years in their practice. And it sort of makes sense. How would you be able to know that you’re making mistakes? Even if you realize that a patient was incorrectly diagnosed, you won’t remember exactly what the heart sound sounded like. And what’s kind of nice is that they’ve now developed courses where, within a weekend of training in which you try to diagnose particular heart sounds, you can get back up to the level you were at when you graduated from medical school.

    DUBNER: Many people listening to this are, I’m sure, familiar with the 10,000-hour rule, which you had a hand in defining. First of all, what is the 10,000-hour rule, if there is such a thing, as you understand it?

    ERICSSON: Our research showed, to the surprise of a lot of people, that even the most talented musicians at a music academy in Germany, that they actually had spent more time practicing by themselves than less-accomplished musicians. And we basically found that the average of that elite group was over 10,000 hours by the time they reach 20.

    [MUSIC: Judson Lee Music, “Wanna Be Spy”]

    Most people who have heard of the 10,000-hour rule, heard of it via the book Outliers, by Malcolm Gladwell. Outliers looked at how extraordinarily accomplished people accomplished what they did.

    ERICSSON: Now, right. Gladwell basically thought that was kind of an interesting magical number and suggested that the key here is to reach that 10,000 hours. I think he’s really done something very important, helping people see the necessity of this extended training period before you reach high levels of performance. But I think there’s really nothing magical about 10,000 hours. Just the amount of experience performing may in fact have very limited chances to improve your performance. The key seems to be that deliberate practice, where you’re actually working on improving your own performance — that is the key process, and that’s what you need to try to maximize.

    DUBNER: You write that this rule, or the number, really — 10,000, nice, big round number — is “irresistibly appealing.” “Unfortunately,” you write, “this rule, which is the only thing many people today know about the effects of practice, is wrong in several ways.” One example that you give, which Malcolm Gladwell writes about in Outliers and which you say looks good at first glance, maybe to a layperson, but falls apart upon inspection, is the Beatles playing all those nights at clubs in Hamburg. Can you talk about why that example doesn’t illustrate what you mean by deliberate practice?

    ERICSSON: So to us, the Beatles — and I think a lot of other people agree — what really made them outstanding was their composing of a new type of music. So it wasn’t like they excelled as being exceptional instrumentalists. So if we want to explain here their ability to compose this really important music, deliberate practice should now be linked to activities that allowed them to basically improve their compositional skills and basically get feedback on their compositions. So counting up the number of hours that they performed together wouldn’t really enhance the ability here to write really innovative music.

    DUBNER: So the very popularized version of one big piece of your research gets a lot of things wrong, according to you. How much does that bother you?

    ERICSSON: Well, the thing that I’m mostly concerned about is this: I’ve met a lot of people who are counting the hours that they’re doing something and then assuming that accumulating enough hours will eventually make them experts. I think that is a fundamentally incorrect view that is so different from what we’re proposing — namely, that you intentionally have to increase your performance, and you have to be guided, ideally by a teacher, in a way that allows you to incrementally improve. So the idea that people actually think they’re going to get better when they’re not — that I find to be the most troubling.

    DUBNER: Have you talked with Malcolm about what you feel he got wrong?

    ERICSSON: I have never spoken to Malcolm Gladwell. I think a conversation could have avoided some of his summaries of that work in Outliers, but I never interacted with him.

    DUBNER: All right, so if I run into him anytime soon, would you like me to pass along a message of some kind?

    ERICSSON: I’m really impressed with his books, and I think that they’ve caught a large audience. And if we were able now to channel that interest in improving yourself by now suggesting how you really need to invest the time to improve your performance — I think that would be terrific. If he doesn’t agree with our analysis here, I think it would be important that he explains why he views that basically it’s not so important exactly what you do, but it’s more important with the hours.

    [MUSIC: House of Trees, “I am a Clown”]

    MALCOLM GLADWELL: The 10,000-hour stuff that I put in Outliers was really only intended to perform a very specific narrative function — or not narrative function, but argumentative function. 

    And that is Malcolm Gladwell.

    GLADWELL: To me the point of 10,000 hours is: if it takes that long to be good, you can’t do it by yourself. If you have to play chess for 10 years in order to be a great chess player, then that means that you can’t have a job, or maybe if you have a job it can’t be a job that takes most of your time. It means you can’t come home, do the dishes, mow the lawn, take care of your kids. Someone has to do that stuff for you. That was my argument, that if there’s a kind of incredibly prolonged period that is necessary for the incubation of genius, high-performance, elite status of one sort or another, then that means there always has to be a group of people behind the elite performer making that kind of practice possible. And that’s what I wanted to say.

    DUBNER: So there’s a sentence in, I believe, the chapter called “The 10,000-Hour Rule” in Outliers where you write that “10,000 hours is the magic number of greatness.” I understand that was one sentence within many paragraphs within many chapters that’s trying to prove your larger point, and yet, I’ve heard from a lot of people — and I’m guessing for every one I’ve heard from, you’ve heard 50 — who’ve embarked on these trajectories, where “I want to be a ballerina, a golfer, a whatever, whatever, whatever, and if I can get to 10,000 hours, that will make me great.” So that seems to be a causal relationship. How do you feel about people drawing that conclusion and taking action on it?

    GLADWELL: Well, elsewhere in that same chapter, there is a very explicit moment where I say that you also have to have talent. That, what we’re talking about with 10,000 hours is: how long does it take to bring talent to fruition? To take some baseline level of ability and allow it to properly express itself and flourish. Ten thousand hours is meaningless in the absence of that baseline level of ability. I could play music for 20,000 hours. I am not becoming Mozart — never, ever, ever. I can play chess for 50,000 hours, and I am not becoming a grandmaster — ever, ever, ever.

    DUBNER: You wrote about the Beatles and how one of the key reasons why they became the Beatles was because of the huge amount of time they spent in Hamburg and playing in clubs. This is distilled best by one sentence in Outliers on page 50: “The Hamburg crucible is one of the things that set the Beatles apart.” So Anders, in his book, Peak, and in the interview, took exception with the Beatles example and I’d be curious to run this scenario past you. So he said, I’ll just quote Anders a bit: “So to us” — he and his fellow researchers – “the Beatles, and I think a lot of people would agree, what made them outstanding was their composing of a new type of music. It wasn’t like they excelled at being exceptional instrumentalists. So if we want to explain here their ability to compose this really important music, deliberate practice should now be linked to activities that allow them to basically improve their compositional skills and basically get new feedback on their composition. So counting up the number of hours they perform together wouldn’t really enhance the ability here to write really innovative music.”

    GLADWELL: Oh, I disagree — again, respectfully. I’m understanding I’m disagreeing with someone who knows more about this than me. My sense is that — here I am about to commit a kind of casual obscenity, but — as someone who is also in the creative business, I think that playing in loud, crowded strip bars for hours on end, starting out with covers of other people’s music and moving slowly to your own music, is an extraordinary way to learn about composition. I know from my own writing: I began as a writer trying to write like William F. Buckley, my childhood hero. And if you read my early writing, it was insanely derivative. All I was doing was looking for models and copying them. Out of years of doing that emerges my own style. So I would say, to the contrary: when you absorb on a deep level the lessons of your musical elders and betters, in many cases, that’s what makes the next step, the next creative step, possible. I would have a very different interpretation of where creativity comes from than he does. And the other thing I would point out is that the Beatles literature predates Ericsson. He’s not the first to make arguments about practice. This literature goes back to the ’60s and ’70s. So a lot of what I was reading when I was writing that chapter was not Ericsson; it was rather a generation of people in this field that came before him. And they had pointed out, I think, very, very accurately, that the Beatles experience is really unusual. People always say, “Well, lots of bands in Liverpool played a lot together.” Actually, they had played together 1,200 times — played live 1,200 times by the time they came to America in 1964. Twelve hundred live performances is a, I’m sorry, absolutely staggering number.

    DUBNER: But the idea may be, presumably, that there could have been another group of four guys, even from Liverpool, who went to Hamburg and played for many, many hours — and played as many hours, but never got good. That’s the kind of hair that I think I’m trying to help you and Anders split.  Because I don’t hear as much disagreement as either of you hear, frankly. What I hear is that you’re more focused on the holistic creation of expertise, and he’s focused more on, I guess, what I would call the more technical version, which has to do with deliberate practice and what it is. And it sounds like he’s saying that 10,000 hours of something isn’t necessarily deliberate practice. And you’re saying 10,000 hours of practice isn’t necessarily deliberate practice, but there are things that happen in that process that you can’t get to without the 10,000 hours anyway.

    GLADWELL: Yeah, and particularly when the four guys who are playing together 1,200 times under very, very trying circumstances are themselves insanely talented, right? So it’s not four schmoes — it’s, for goodness sake, it’s Lennon, McCartney, and Harrison. (I’m not going to mention Ringo Starr.) Each one of whom individually could have had an extraordinary career as a rock-and-roll musician. We had three of them in the same room for years playing together. So there you have this kind of recipe for something extraordinary.

    So this, in the end, is the central puzzle. The talent puzzle – just as puzzling as “which came first, the chicken or the egg?” When we encounter someone who does something extraordinarily well, is it because they are “insanely talented,” as Malcolm Gladwell puts it? Or is it because they had, yes, an adequate measure of baseline ability and then found a way to convert that ability into something extraordinary? And if it’s the latter, can that conversion process be reliably emulated? By people like you and me? By people like the Danish psychologist Susanne Bargmann?

    BARGMANN: I decided to pick up singing because it’s something I really loved to do. I practiced at home. But I mean, I would have to negotiate with my kids how much time they would let me sing, because it was really not very nice to listen to. At that point, I was really fascinated by Christina Aguilera. So I decided to start recording myself singing a Christina Aguilera song. My biggest problem in the beginning was that I couldn’t make, for lack of better words, the big sound that she makes. She has this amazing big, loud sound when she sings. And that wasn’t part of what my voice could do. I could make a very soft sound, or I could make a really sharp sound. That’s all I was able to do.

    Bargmann had by now bought into Anders Ericsson’s deliberate-practice model. Which, she acknowledged, required a certain commitment.

    BARGMANN: I decided that if I wanted to be serious about the project, I would need the best coach available. So I went online and then I started searching for the person I thought would be the best coach in Denmark.

    The coach she found was initially reluctant to work with her. But Bargmann explained she wasn’t just pursuing a personal dream; she was exploring the science of expertise.

    BARGMANN: So that was the start. And then, I committed to practicing an hour a day, because I knew the practice was important.

    For a year and a half, Bargmann worked hard, practiced a lot, under the guidance of her coach. She seemed to be making progress, but it was slow.

    BARGMANN: I felt that I wasn’t really improving enough because I didn’t get that big sound that I wanted. And my coach would be cheering for me, and he said, “It’s right around the corner. Just continue.” And then I remember it was summer, and suddenly, as I was singing, the sound actually came. I was able to make the big sound in a song. And that was a huge jump for me and really, really motivating.

    Bargmann kept at it, practicing every day, focusing on improvement.

    BARGMANN: So the next step was to stand in front of others and sing. And that was tough as well. But it was still a big step to move out of the practice room into performing in front of others and creating music.

    Meaning: writing her own songs.

    BARGMANN: That I worked on for quite a while.

    She started training with other singers.

    BARGMANN: And I think in that process I realized that the next step would be to start recording.

    This phase was also bumpy, but she worked through it.

    BARGMANN: And then I started working with the producer on what is now the music that I’ve released.

    That’s right. Susanne Bargmann finally realized her childhood dream, and she released a record.

    BARGMANN: It’s just called Sus B, which is my artist name.

    In Denmark, she’s gotten a lot of radio play.

    BARGMANN: So actually, the reception has been quite phenomenal.

    Most of the songs are love songs.

    BARGMANN: I don’t know why all good music is about love. And then there’s one song that more embodies the whole project of having the courage to start releasing music. It’s called “Fall Up,” where the message is more, “If you have something that you dream about, then do it, don’t hesitate.”

    Bargmann wants her accomplishment to inspire others.

    BARGMANN: I really believe that it can inspire people to: instead of limiting themselves to what they think they can, to actually choose something they dream of or they have a passion for, and then experience how they can improve.

    [MUSIC: Jessie Torrisi, “Cannonball” (from Shake a Little Harder)]

    Coming up next week on Freakonomics Radio – we’re back with a brand-new episode. It’s a great conversation with Michael Lewis, the author of great non-fiction books that often get turned into great non-fiction movies, including The Big Short, Moneyball, and The Blind Side. His latest book, though, this one is special – at least for me, and probably for a lot of you too. It’s an unbelievably vibrant portrait of Danny Kahneman and Amos Tversky, the two Israeli psychologists whose amazingly creative research led to the field of behavioral economics.

    MICHAEL LEWIS: One of their great discoveries is that people don’t make clean, clear choices between things.

    We talk about their research – and how Michael Lewis writes his books …

    LEWIS: I write with headphones on that just plays on a loop the same playlist that I’ve built for whatever book I’m writing. And apparently I’m sitting there laughing the whole time.

    Michael Lewis on Kahneman and Tversky – that’s next time, on Freakonomics Radio.

    •      *      *

    Freakonomics Radio is produced by WNYC Studios and Dubner Productions. Today’s episode was produced by Greg Rosalsky. Our staff includes Shelley Lewis, Jay Cowit, Merritt Jacob, Christopher Werth, Alison Hockenberry, Emma Morgenstern, Harry Huggins and Brian Gutierrez. If you want more Freakonomics Radio, you can also find us on Twitter and Facebook and don’t forget to subscribe to this podcast on iTunes or wherever else you get your free, weekly podcasts.

    Here’s where you can learn more about the people and ideas in this episode:


    K. Anders Ericsson, Conradi Eminent Scholar and Professor of Psychology at Florida State University

    Steve Levitt, Freakonomics co-author and William B. Ogden Distinguished Service Professor of Economics at the University of Chicago.

    Malcolm Gladwell, author and staff writer at The New Yorker

    Susanne Bargmann, psychologist and musician

    Bob Fisher, soil conservationist, coach, and world-record-holding free-throw shooter


    Peak: Secrets from the New Science of Expertise by Anders Ericsson and Robert Pool, Eamon Dolan/Houghton Mifflin Harcourt (2016)

    Outliers: The Story of Success by Malcolm Gladwell, Little, Brown and Company (2008)

    “Absolute Pitch” by Diana Deutsch, in The Psychology of Music (2013)

    “A Star Is Made” by Stephen J. Dubner and Steven D. Levitt, New York Times Magazine (May 7, 2006)


    Susanne Bargmann’s music website

    Bob Fisher’s “Secrets of Shooting”

    —Huffduffed by jrsinclair

  7. Big Returns from Thinking Small - Freakonomics Freakonomics


    Our latest Freakonomics Radio episode is called “Big Returns from Thinking Small.” (You can subscribe to the podcast at iTunes or elsewhere, get the RSS feed, or listen via the media player above.)

    By day, two leaders of Britain’s famous Nudge Unit use behavioral tricks to make better government policy. By night, they repurpose those tricks to improve their personal lives. They want to help you do the same.

    Below is a transcript of the episode, modified for your reading pleasure. For more information on the people and ideas in the episode, see the links at the bottom of this post. And you’ll find credits for the music in the episode noted within the transcript.

    •      *      *

    Owain SERVICE: Actually, one area where I failed to apply these very same principles was when I was learning to drive.

    [MUSIC: Dorian Charnis, “Snap Jazz”]

    Allow me to introduce … Owain Service.

    SERVICE: I’m the managing director of the Behavioral Insights Team and one of the co-authors of Think Small.

    Think Small is a new book — details to come later. And the Behavioral Insights Team is a quasi-governmental unit in Britain more casually known as the Nudge Unit — again, details later. For now, let’s stick to Service learning to drive.

    SERVICE: I learned to drive in the traditional way. I had an instructor who knew the basics and then I took the driving test and I failed.

    So I lined up another test. I did a bit of practicing in between and then failed again. I actually couldn’t tell you how many times I failed to pass my driving test. I almost blanked it out of my memory, but probably seven or eight times before I eventually passed.

    Stephen DUBNER: Wow.

    SERVICE: Impressive, eh?

    DUBNER: Yeah, yeah.

    SERVICE: And what I should have done, and would have done if I’d read our book before, was to step back and break down the process of passing a test into a series of different steps. I think one of the things that we often do when we fail at something is to just go gung-ho into trying again. But one of the things that the literature shows — and I know that you’ve explored on this podcast — is that you should actually break down that process and then focus on the things which need the most attention and the most work.

    [MUSIC: Jack Wyles, “Thank U Ornette”]


    That is, rather than focusing on the big goal — learning to drive — you should, in essence, think small.

    SERVICE: And that’s exactly what I didn’t do and was behind the fact that I ended up taking that test many times.

    DUBNER: That’s a very useful story, especially now I know that if you ever offer me a lift I should probably turn it down because, even though you ultimately passed, plainly you have no natural skill for driving.

    Today on Freakonomics Radio: Owain Service and his co-author Rory Gallagher, a fellow Nudge-ist, get small on a variety of topics.

    SERVICE: I wanted to buy what I considered to be a very frivolous gift for myself.

    Rory GALLAGHER: I wasn’t morbidly obese but I was definitely packing a few pounds.

    SERVICE: “Nine out of 10 people pay their tax on time.”

    GALLAGHER: And then when I got home, I found out that the board didn’t actually fit in the lift in my apartment.

    SERVICE: What could possibly go wrong?

    And they go big, too — as in government big:


    One of the dirty secrets of government is actually that we don’t know whether what we’re doing works a lot of the time.

    •      *      *

    We’ve just met Owain Service.

    DUBNER: Owain Service is a nice aptonym for someone in public service. Do you think your name played a role in your destiny?

    SERVICE: It’s a good question from one of the authors of Freakonomics. It’s something I thought about a lot, and I’ve tended to be at the end of many people’s amusing jokes and anecdotes. But the reality is that you can get the word “service” into so many different contexts. I could have been pre-ordained to take a number of different routes, the National Health Service or Her Majesty’s Secret Service. The list is a long one. But ultimately public service is where I found my place.

    Service is managing director of the Behavioral Insights Team, based in London. As for his co-author, Rory Gallagher …

    GALLAGHER: I head up the Behavioral Insights Team’s operations in the Asia-Pacific and I’m based in Sydney.

    The team’s mission is to design policy interventions based on a scientific understanding of human behavior. Recently, for instance, hoping to fight antimicrobial resistance, the team tried to persuade some prescription-happy doctors to go easy on the antibiotics.

    GALLAGHER: We had a very simple intervention, which was to write to those doctors who were prescribing the highest amounts of antibiotics and just let them know that they were in that top cohort, along with things that they could do to avoid over-prescription. It was primarily about feedback, about where they sat compared to their peers, but also a set of specific actions that they then could take.

    DUBNER: And how effective was it?

    GALLAGHER: It was very effective. Over the six-month period of the trial, GPs [general practitioners] who received that specific letter prescribed an estimated 73,000 fewer antibiotic items than those who didn’t receive the letter.

    Gallagher and Service — behavioral-science investigators by day — became behavioral-science practitioners by night; at home, with their families, trying to work out their own issues. They became their own guinea pigs, distilling the insights from big government policies into a self-help manual called Think Small.

    SERVICE: This is all about taking a long-term goal and then breaking it down into a series of manageable steps.

    GALLAGHER: And unless you get those details right, being very clear about what it is you’re trying to achieve, by when and how, you actually won’t get over those initial hurdles.

    Gallagher and Service both came from academic backgrounds …

    GALLAGHER: I did a Ph.D. in the social sciences, which led to behavior change and health promotion in Southeast Asia.

    SERVICE: I studied subjects called “social and political sciences” at Cambridge. At the time, there wasn’t really anything like it in the U.K.

    GALLAGHER: I got a bit frustrated that I probably wouldn’t be able to have the social impact that I wanted.

    SERVICE: It blended them together in a relatively unique way. And strangely enough, David Halpern taught social psych on that program too. That’s where we originally met.

    David HALPERN: I used to teach psychology at Cambridge.

    And that is David Halpern.

    HALPERN: I was lifetime-tenured, in fact, at Cambridge.

    SERVICE: What ultimately happened was that David had been pursuing an academic career but then became a bit frustrated in terms of the policy applications of academia.

    Halpern didn’t want to be just another academic writing papers that stayed in academia. He wanted to see smart behavioral research applied to actual policy.

    HALPERN: You might think, “Well, why wouldn’t it be that? A more realistic model of human behavior should surely be at the heart of thinking about policy in government.” Traditionally, that hasn’t been true.

    In 2001, Halpern left Cambridge.

    SERVICE: He then joined a unit that was set up in central government called the Prime Minister’s Strategy Unit.

    Owain Service again.

    SERVICE: Which worked on long-term strategic thinking for the U.K. government. I often describe it as a bit of an in-house think tank or management consultancy for government. And that’s where I found my first role in public service.

    Rory Gallagher also joined the Strategy Unit. Its remit wasn’t specifically geared towards behavioral science, but David Halpern began pushing his cause. Rather than trying to legislate pro-social behaviors — like saving for retirement or eating healthier — he argued that finding a way to nudge people toward these behaviors would be less punitive and more cost-effective. This idea, popular in academia, was quite new to government. Halpern left the Strategy Unit in 2007, but returned to government three years later. Now there was a new prime minister, David Cameron. Owing in part to Cameron’s personal enthusiasm, the Behavioral Insights Team was born, and set up shop right there at No. 10 Downing Street. It came to be called the Nudge Unit after the book of that name by the American academics Richard Thaler and Cass Sunstein.

    SERVICE: What happened in 2010 was that there was a realization that there was really something about the world of behavioral economics and psychology that could be usefully brought to bear in public policy-making. So it was a coming together of a number of different routes that at the time started to enable us to say: We could create an institution out of these ideas rather than just think about them in application to ad hoc policy.

    DUBNER: How would you describe overall the mission or missions of the Behavioral Insights Team?

    GALLAGHER: The Behavioral Insights Team was created to spread a more nuanced understanding of human behavior into government policy. David Halpern says one of the dirty secrets of government is actually that we don’t know whether what we’re doing works a lot of the time. That, for me, was a real revelation.

    This admission — that the government really had no idea whether its programs worked, whether its spending was justified — suggested a new approach to policy-making. An approach that leaned heavily on running randomized controlled trials — real experiments — on a small scale before running off to spend big piles of money.

    SERVICE: I remember one early paper that I coauthored called “Test, Learn, Adapt,” which is all around running randomized controlled trials as part of the public-policy-making process. And at the time this was published, it was almost unheard of for a central government to be publishing a paper along those lines. But it ended up being one of the most widely read publications that the Cabinet Office — which is the central government department that coordinates the actions of other departments — had put out that year. It did really feel like there was this major sea change in thinking that took place, symbolized in the institution that was the Behavioral Insights Team.

    DUBNER: Given the potential for controversy of something called the Nudge Unit within the government, with its nod toward — as Richard Thaler, one of your intellectual patrons, calls it — “libertarian paternalism,” people could definitely get a little bit nervous about that. I gather that you were interested in some good early wins. Solid victories, as small as they may be, that would indicate to the public and the press and the rest of the government that what you were doing was not some strange Big Brother subterfuge.

    SERVICE: You’re absolutely right. It was quite important for us to be able to demonstrate in those early days that the principles that we might be applying could actually have an effect in practice. And the reality was that, back in 2010, we didn’t really know ourselves how effective they might be at scale in a policy context. We made this conscious effort to pick off a few areas that would be relatively uncontroversial, where you could show that small changes to the way that you run a policy or an intervention or a public service could have a disproportionate impact. The earliest example of this, which ended up becoming the archetypal example of our work, was the work that we began with the tax authority in the U.K.

    While most people in the U.K., as in the U.S., have their taxes withheld automatically, that’s not the case for others: the self-employed, company directors, and so on.

    SERVICE: There are about nine million of them. And very often these people fail to meet their deadlines. The particular group of people in some of these earliest trials numbered around 40 to 45,000 individuals. And each of them owed on average around £5,000. So we’re talking about fairly substantial sums of money.

    To those delinquent taxpayers, the Nudge Unit invoked what psychologists call “social norms.”

    SERVICE: We are influenced by what we see other people around us doing. And very often we underestimate the good behavior of others, and drawing attention to what positive behaviors other people are actually doing, as opposed to what we perceive them to be doing, can have a strong positive impact.

    For instance, the fact that most people pay their taxes on time is a social norm. So the Nudge Unit sent out a number of letters, with different wordings, to see which one would best persuade the tax delinquents to pay up. The winner? No threats, no cajoling. It simply read: “Nine out of ten people in the U.K. pay their taxes on time. You are currently in the very small minority of people who have not paid us yet.”

    DUBNER: Talk for a moment about the magnitude of the effect of that first trial?

    SERVICE: The effects actually were surprising, even for us. The highest performing letter gets you about a five percentage point increase in payment rates within the deadline.

    DUBNER: Five percentage points translating to what?

    SERVICE: To about a 15 percent increase in payment within, I think, 23 days of sending a letter.

    DUBNER: Wow, at the cost of close to zero?

    SERVICE: Yeah, close to zero. That was what really started to pique the interest of other government departments.

    Indeed, the Nudge Unit was beset by requests for help — from within the British government and elsewhere. It began to consult with a number of governments, and set up some satellite offices — in New York, Singapore, and Sydney. It inspired a similar unit in the States — we covered that in an earlier Freakonomics Radio episode called “The White House Gets Into the Nudge Business” — although, we should note, that was the Obama White House. The Trump White House isn’t very into the nudge business, at least not yet. The original, British Nudge Unit, meanwhile, has ensured its longevity by going quasi-private:

    GALLAGHER: The unit is co-owned by the U.K. government, the cabinet office; Nesta, a social-innovation charity; and the employees.

    And the Nudge Unit’s success led Rory Gallagher and Owain Service to want to spread the gospel via their book, Think Small.

    SERVICE: It’s not just about public policy, but it’s about what you can do in your personal lives and your work lives. And that we felt was a missing space.

    Coming up after the break: the kind of personal goals Service and Gallagher pursued:

    GALLAGHER: I moved to Bondi Beach, a beautiful part of Sydney. And like all ex-pat Brits, I had the dream of waking up and going surfing every morning before work.

    And what they learned about such pursuits.

    SERVICE: It takes some effort. And it’s particularly tricky to do that once you’ve had a glass of wine.

    *      *      *

    (Photo Credit: what’sthefrequency via Visualhunt/CC BY-NC)

    Owain Service was by all appearances a most respectable citizen. Educated at Cambridge, now a senior official with the British government’s Behavioral Insights Team. He was, however, drinking a bit too much …

    SERVICE: I realized that I had slipped into a bit of a routine in which I would come home from work, and I would generally be quite tired. And I’d feel like I deserved a glass of wine. So I’d maybe crack open a bottle of wine. Have a small glass. Prepare some dinner with my wife. Pour her a glass, maybe pour myself another glass over dinner. And over time I realized that I was slipping into what in the U.K. is referred to as “middle class drinking.” Which is when drinking becomes more of a habit than a treat.

    The U.K.’s chief medical officer had released new guidelines recommending that adult men drink an average of no more than 3 to 4 units of alcohol per day.

    SERVICE: But the reality is that a rule of that kind is actually quite difficult for somebody to apply in practice. What is a unit of alcohol? You can work it out, but it takes some effort. It’s particularly tricky to do that once you’ve had a glass of wine. I did think twice about including this in the book. I think it’s actually a really nice illustration of the practical effects that this thinking-small approach can have.

    The “thinking-small approach,” as codified by Service and Rory Gallagher, calls for a seven-step path to problem-solving. The first is to set a goal. In this case — middle class drinking — that was pretty simple. Service wanted to cut back. The next step? Make a plan.

    SERVICE: One of the key rules in the “plan” chapter is that if you really want to achieve something, you need to think about how you can make it simple for yourself to do the thing that you want yourself to do.

    The Chief Medical Officer’s advice on drinking — no more than 3 to 4 units per day — was anything but simple. So Service’s plan included what’s called a bright line.

    SERVICE: The principle of a bright line is that you set yourself a very clear rule that is obvious if you have transgressed. If your rule is, “Don’t drink more than three to four units of alcohol on average per day,” it’s quite difficult to know whether you stepped over that rule. The bright line that I set myself was no drinking during the week at home. It would be really obvious to me if I then stepped over that rule. And it resulted in me actually almost completely cutting out alcohol for my weekly routine.

    DUBNER: And that was a positive effect for you, yes?

    SERVICE: It was, actually. I estimated that I’d drunk about 80 fewer bottles of wine, although it’s probably a bit more now.

    [MUSIC: Disk Eyes, “By The Slice”]

    (Photo Credit: fancycrave1)

    After the goal and the plan comes step three: making a commitment. Let’s say, for instance, you’ve made a commitment to exercise. Rory Gallagher, before he moved to Australia, had set a goal of keeping fit. He made a plan by joining a gym near his home in London. But he almost never went to the gym.

    GALLAGHER: I was often working late in the office and by the time I commuted back, I was too tired to go to the gym. In this case, I thought the simple fix would be to move that gym membership from near where I lived to near where I worked. And actually I found that didn’t quite have the effect that I had hoped it’d have. Because it was right on the doorstep of my work, I could always put it off for the next day.

    Then, Gallagher had an idea. He commandeered the white board in the Nudge Unit’s office.

    GALLAGHER: And I wrote up very clearly for all of the team to see that I would go to the gym twice a week for the next three months.

    Voila: a commitment, made in public, which increased the stakes. He didn’t want his colleagues to see him fail. But there was another component: a “commitment referee,” someone willing to keep you on task. Gallagher says that significant others are terrible referees. When you don’t feel like following through, they’ll often conspire with you to let you off the hook. Instead, he turned to a colleague: Owain Service.

    GALLAGHER: I asked Owain to be my commitment referee, to see if I actually followed through on this. But I also wanted to use a sort of reward to help give me that turbo boost or an incentive.

    A reward. That’s the fourth step in thinking small.

    SERVICE: Reward is about putting something meaningful at stake, using small rewards to build good habits.

    For instance: if you go to the gym, as planned, you’ll allow yourself to binge-watch your favorite TV show while working out. We explored this pairing in an earlier episode — it’s called “temptation bundling.”

    Katherine MILKMAN: So what if you only let yourself get a pedicure while catching up on overdue emails for work?

    That’s Katie Milkman, from the University of Pennsylvania, who coined the phrase.

    MILKMAN: Or only let yourself go to your very favorite restaurant whose hamburgers you crave while spending time with a difficult relative who you should see more of. Those would all be examples of temptation bundling.

    In these cases, there’s a positive reward attached to a behavior that’s hard to commit to. You could, of course, also attach a punishment instead of a reward. That’s what Rory Gallagher decided to do with his exercise commitment. And he picked the worst possible punishment he could imagine:

    GALLAGHER: I said that if I didn’t follow through on that commitment, if I didn’t meet that goal, that I would wear the shirt of the enemy team, Arsenal, to the office. Owain happened to support them at the time.

    DUBNER: You’re a Spurs supporter, I understand?

    GALLAGHER: I am, that’s correct.

    DUBNER: And so you hate Arsenal with every fiber in your body?

    GALLAGHER: I wouldn’t go that far, but they’re certainly not our closest friends.

    DUBNER: Do you think the punishment of possibly wearing the Arsenal jersey was what actually was most successful in urging you to follow through on your commitment?

    GALLAGHER: I actually think it was a combination of things. First was actually being very specific about what my goal was. Before that, it was to get fit or go to the gym. But I didn’t have a specific goal. And in this case, this was to go twice a week for three months. I then made it very public and accountable, so rather than just thinking it in my own head, it was up in the middle of the office for everyone to see. And each week I had to tick off which days I’d been to the gym. So there was a public accountability element to it. And then finally there was the punishment waiting for me at the end, which I couldn’t bear to face.

    DUBNER: And how often did you then go to the gym during those three months?

    GALLAGHER: I did manage to stick to that. In fact, often more than twice.

    So Gallagher never had to put on that Arsenal jersey — which, as a Spurs supporter, was its own reward. But, he says, it’s important to note that rewards can often backfire. Even financial rewards.

    Consider a study by the economists Bruno Frey and Felix Oberholzer-Gee. It concerned the Swiss government’s plan to build a facility to store nuclear waste. In surveys, roughly half of Swiss respondents said that despite the risk, they’d be okay with having the facility in their community. When asked why, they articulated a sense of civic duty. But then, when the economists reframed the survey, attaching a reward of several thousand dollars per person, per year, for living near such a nuclear-waste facility, they got a surprising result.

    GALLAGHER: What they found is when they offered financial compensation with that, people’s willingness to accept that actually went down. And we see that in lots of other areas as well. There’s really interesting work going on at the minute to try to understand what motivates people, for example, to give blood. And the common hypothesis and some of the evidence seem to be that actually you need to appeal to people’s sense of reciprocity and social good in order to encourage people to give blood, and actually putting any sort of financial reward or prize around that would actually squeeze out those intrinsic motivations.

    So the “reward” step of thinking small can obviously be tricky. The next step — less so. It calls for creating leverage by sharing your goal with others.

    SERVICE: Share is about asking for help, tapping into your social networks and then using group power.

    Step six? Feedback.

    SERVICE: Feedback is about knowing where you stand in relation to your goal. It’s about making it timely and focused on effort and comparing your performance to those of other people. It’s no good to just say, “How am I doing?” You need specific, actionable feedback that enables you to do something with that feedback.

    And finally, once you’ve mastered the first six steps, comes No. 7: “stick.”

    SERVICE: Stick is about practicing with focus and effort. It’s about testing and learning and it’s about reflecting and celebrating success.

    [MUSIC: Steve Rice, “Broad Street Bebop”]

    This notion is probably familiar to most of you. It too showed up in a couple of our previous podcasts — specifically “How to Become Great at Just About Anything,” which featured the research of Anders Ericsson, and “How to Get More Grit in Your Life,” which featured the work of Angela Duckworth.

    Angela DUCKWORTH: That’s right. I want to redefine genius, if you will. I want to define genius as greatness that isn’t necessarily effortless, but in fact greatness that is earned however you do earn it.

    In Duckworth’s reckoning, that means finding a way to be resilient, to push through the inevitable failure that accompanies experimentation and growth. Rory Gallagher had one such failure when he first moved to Australia and found himself living next to a great surfing spot.

    GALLAGHER: I moved to Bondi Beach, a beautiful part of Sydney. And like all ex-pat Brits, I had the dream of waking up and going surfing every morning before work. But goals are hard to achieve. And even with these tools, it’s not absolutely guaranteed that you’re going to get it right. You’re not gonna reach those goals unless you get the details right. In this case, I got the details wrong. First I said, “I’m going to commit to going surfing once a week. I’m going to buy a board and I’m going to see if I can go with a friend.” But I got crucial details wrong in each of those aspects.

    First was, although I made a commitment, I didn’t say specifically what day. And partly because surfing depends on the conditions, how big the waves or how small, it was very easy for me to say, “Well, the conditions aren’t quite right today. I’ll go another day.” Because I was doing other things like going for beach runs and cycling, I was able to sort of give myself the fallback that, “I’m doing other fitness work so it doesn’t really matter.”

    Second of all, I didn’t test and learn. So what I really should have done is try to find out what type of board would work for me. But I dove in straight away and got this huge 10-foot board that I bought off Gumtree. Very excited to get going. And then when I got home, I found out that the board didn’t actually fit in the lift in my apartment.

    So in order to even get out of the building, I required my partner to help me down this very tight stairwell, which somewhat took the spontaneity out of waking up and going surfing, when you require a hand just to get out of the building.

    And third of all, I actually went surfing with a friend of mine, Jack, who was actually a pretty decent surfer. When we got down there, he just swam out to the back of the lineup and started on these big waves, and I’d be floundering around in the wash. In this case, I really should have just joined a surf school or found someone else who was learning to surf and learned with them.

    DUBNER: I have to say, it’s very comforting that someone as smart and experienced as yourself can fail so badly.

    GALLAGHER: Of course. Achieving goals is a difficult thing. And it’s important to recognize when things aren’t working. And in this case, I recognized, look, there were other things going on at the moment, and this wasn’t going to be something I could spend the time and effort to do. And actually I should focus on other parts of my life, which I felt were going to make me happier ultimately.

    DUBNER: Look, I love the work that you’re involved in and I find it exciting and I think it’s revolutionary, frankly. That said, a lot of the solutions that have been proposed — that social science researchers come up with and that you then integrate — many of them strike me as essentially common sense. If you want to accomplish this behavior, you need to make some kind of commitment device, or if you want to tackle a big, broad, complex, abstract problem, you need to think small and take small steps. Talk about the degree to which you’re not merely doing what a lot of academia does: canonizing, or making formal, what people in the real world have known for millennia.

    GALLAGHER: I think you’re right. We see this as applied common sense, but unfortunately it isn’t applied anywhere near commonly enough. Many people recognize these sorts of tools. But what this tries to do is help systematize that so they can apply it routinely in everyday life. And to take one of your examples, around commitment devices — I’m not sure people do realize how powerful they can be, and often they get them wrong.

    So people might see just telling someone that you want to do something as a commitment device. “I want to recycle more.” “I want to write a novel.” And just saying that publicly is my form of commitment device to the fact I’m going to follow through. If you do it in that very vague and open and public way, actually that has no effect at all and can actually backfire, because you get a bit of a warm glow just by telling people of your good intentions. So in order for a commitment device to be effective, you need to make it specific. You need to write it down and make it accountable with a referee. So those small details can make a world of difference between people thinking they’re using these tools, and potentially having them backfire, and using them in the way that they’re intended and having the outcomes that we want.

    [MUSIC: John Swanson, “The Cougar That Got Away” (from Delta Blues 3)]

    That was Rory Gallagher and his Nudge Unit colleague and Think Small coauthor Owain Service. Coming up next week on Freakonomics Radio — we continue this conversation — and expand it. Really, really expand it. We’ll hear from grit champion Angela Duckworth again.

    Angela DUCKWORTH: The one problem that really confronts humanity in the 21st century is humanity itself.

    And the temptation bundler Katie Milkman too:

    Katherine MILKMAN: The biggest problem that needed solving was figuring out how to make behavior change stick.

    Milkman and Duckworth are putting together a massive project with all kinds of scholars and all kinds of partners — banks and schools and fitness centers and drugstores. Their mission: to take everything that’s been learned so far in places like the Nudge Unit, in academic research departments, and apply it to one problem.

    MILKMAN: A problem that, if we fixed it, could truly solve every social problem we could think of.

    That’s next time, on Freakonomics Radio.

    Finally, a quick reminder: If you donate to support the creation of more episodes of Freakonomics Radio before April 16th, you’ll be entered to win an all-expenses-paid trip for two to New York City. You and a friend will come take a tour of the station and have lunch with me and some of the team here at Freakonomics Radio, at a restaurant of your choosing. You don’t have to pledge to enter, but we really hope you will. Just visit our website and click on donate. Or text the word “Freak” to 6-9-8-6-6 to get started. And remember to do it before April 16th. Thanks!

    *      *      *

    Freakonomics Radio is produced by WNYC Studios and Dubner Productions. This episode was produced by Christopher Werth. Our staff also includes Shelley Lewis, Merritt Jacob, Greg Rosalsky, Stephanie Tam, Eliza Lambert, Alison Hockenberry, Emma Morgenstern, Harry Huggins, and Brian Gutierrez. You can subscribe to Freakonomics Radio on iTunes, Stitcher, or wherever you get your podcasts.

    Here’s where you can learn more about the people and ideas in this episode:


    Angela Duckworth, professor of psychology at the University of Pennsylvania; founder and CEO of Character Lab.

    Dr. Rory Gallagher, managing director of the Behavioral Insights Team’s international programs in the Asia-Pacific; co-author of Think Small (Michael O’Mara, 2017).

    Dr. David Halpern, chief executive of the Behavioral Insights Team.

    Katherine Milkman, associate professor of Operations, Information and Decisions at the Wharton School of the University of Pennsylvania.

    Owain Service, managing director of the Behavioral Insights Team; co-author of Think Small (Michael O’Mara, 2017).


    “How to Become Great at Just About Anything,” Freakonomics Radio (2016).

    “How to Get More Grit in Your Life,” Freakonomics Radio (2016).

    Think Small by Dr. Rory Gallagher and Owain Service (Michael O’Mara 2017).

    “Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials,” by Ben Goldacre, Laura Haynes, Owain Service, and David Torgerson (2012).

    “The White House Gets Into the Nudge Business,” Freakonomics Radio (2016).


    “The Cost of Price Incentives: An Empirical Analysis of Motivation Crowding-Out,” by Bruno S. Frey and Felix Oberholzer-Gee (1997).

    Inside the Nudge Unit: How Small Changes Can Make a Big Difference by David Halpern (WH Allen, 2015).

    Nudge by Richard H. Thaler and Cass R. Sunstein (Penguin Books 2009).

    —Huffduffed by jrsinclair

  8. Geza Mihala - Codefin, or the fine art of knowing what to do and when and why

    Don’t you hate it when people always give the same advice, no matter the circumstances? Don’t you observe this way too often in software development? Don’t you want to know why this happens and what you can do about it? Then join us on a journey, starting with our strategies for dealing with cognitive load, through an in-depth look at some of the key practices in programming, and arriving at applying the Cynefin sense-making framework to the various aspects of software development. So that you’ll always know what to do and when and why.

    Original video:
    Downloaded by on Thu, 25 May 2017 05:29:00 GMT Available for 30 days after download

    —Huffduffed by jrsinclair

  9. Complexity, context and collaboration from manufacturing software… by Dave Snowden

    Manufacturing practices abound at Agile conferences, but is software something that comes off a production line (albeit in small batch sizes)? Should we instead think of software development as a service? Can we combine unarticulated customer needs with novel technology to shift the business forward rather than simply respond to demand? Should we see applications as emergent properties of complex interactions between people and software objects over time? What sort of ecosystem (rather than architecture) is needed for such an approach? Using the popular Cynefin framework as a starting point, its creator will explore these questions.

    Dave Snowden’s slides can be found at

    Original video:
    Downloaded by on Thu, 25 May 2017 04:56:59 GMT Available for 30 days after download

    —Huffduffed by jrsinclair

  10. Agile Maine Day - Dave Snowden: New Theory of Change

    Agile Maine Day - Dave Snowden: New Theory of Change

    Most approaches to cultural change, mind change, strategic alignment and the like start with goals and then seek to motivate individuals and teams to achieve set goals. Based on over fifteen years of work and practice, originating in DARPA programmes before and after 9/11, this session will make attendees aware of the potential to map patterns of day-to-day observations and water-cooler stories to measure the realities of organisational culture. Fractal engagement then uses landscape representations of those narratives to allow intervention to take place in context at all levels, from board to team. The question “What can I do tomorrow to create more stories like these, fewer like those?” is a transforming and empowering question that avoids the prescription and elitism of abstract goals and values.

    Original video:
    Downloaded by on Thu, 25 May 2017 04:57:06 GMT Available for 30 days after download

    —Huffduffed by jrsinclair
