How Managers Can Curb Overconfidence
September 25, 2018
Daniel Walters
Taking the time to consider unknowns helps executives make better decisions.
In Thinking, Fast and Slow, Nobel Prize in Economics winner Daniel Kahneman called overconfidence “the most significant of the cognitive biases” because it is ubiquitous and has many negative consequences. For example, overconfidence has been implicated in a wide range of errors in decision making, from medical misdiagnoses to overtrading in the stock market to misallocation of corporate investment to economic recessions. Overconfidence has also been suggested as a contributing factor to many infamous disasters, such as the Chernobyl meltdown, the sinking of the Titanic and the Deepwater Horizon oil spill.
Understanding the sources of overconfidence and developing effective techniques to improve calibration – the match between one’s confidence and one’s actual accuracy – have been the subject of a great deal of research. In work I’ve conducted with Philip Fernbach (University of Colorado Boulder), Craig Fox (UCLA) and Steven Sloman (Brown University), we have developed a new technique to reduce overconfidence by prompting people to explicitly consider the pieces of information missing from a judgment. Our paper, “Known Unknowns: A Critical Determinant of Confidence and Calibration”, was published in Management Science.
Reining in overconfidence
In the first of our three studies, participants answered ten general-knowledge, multiple-choice questions and indicated their level of confidence for each. They were also asked to list the reasons that made them more confident about their answers, as well as the reasons that made them less confident. These reasons were later rated on the degree to which they concerned known or unknown evidence. For example, if the question was “Does a Subway meatball sandwich or McDonald’s Quarter Pounder with cheese have more calories?”, not knowing the number or size of the meatballs in the Subway sandwich would count as a reason for lower confidence based on unknown evidence.
The average confidence rating of participants (67 percent) overshot their accuracy (62 percent). However, those participants who paid more attention to unknown evidence when rating their confidence were better calibrated in their assessment, and no less accurate.
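To make this gap concrete, the sketch below computes overconfidence as mean stated confidence minus actual accuracy, the measure implied by the comparison above. The function name and all numbers are hypothetical placeholders, not data from the study.

```python
# Minimal sketch (hypothetical numbers): overconfidence measured as the gap
# between average stated confidence and actual accuracy, in percentage points.

def overconfidence(confidences, correct):
    """Return mean confidence (%) minus accuracy (%)."""
    mean_confidence = sum(confidences) / len(confidences)
    accuracy = 100 * sum(correct) / len(correct)
    return mean_confidence - accuracy

# One participant's ten answers: stated confidence and whether each was right.
confidences = [80, 70, 60, 90, 55, 65, 75, 60, 70, 45]  # mean = 67 percent
correct = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]                # 6 of 10 correct

print(overconfidence(confidences, correct))  # 67 - 60 = 7 points overconfident
```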
Our second study involved participants who answered multiple-choice questions. We asked a group of them to specify two pieces of missing information that would have helped them determine the correct answer to each question. We instructed another group to write down two reasons why an answer they didn’t select could, in fact, have been the correct one. In other words, the first group considered the unknowns, while the second one considered the alternative, a technique also known as playing “devil’s advocate”. Control participants merely answered the questions, stating their level of confidence for each answer.
While both considering the unknowns and playing devil’s advocate reduced overconfidence, considering the unknowns was more effective. It cut overconfidence by 8 percentage points relative to the control group (16 percent vs. 24 percent), whereas considering the alternative produced only a 6 percentage point decrease from the control group.
The third study allowed us to test whether considering the unknowns merely reduced confidence across the board or genuinely improved calibration. In many domains, people demonstrate underconfidence and are overly cautious. A true improvement in calibration would mean that considering the unknowns reduces confidence when people are overconfident, but not when they are well calibrated or underconfident. In this study, participants answered two sets of general-knowledge questions. The questions were divided into nine knowledge domains (e.g. state populations, calorie counts), for which participants varied in their level of overconfidence versus underconfidence. As in the second study, participants either considered the unknowns or considered the alternative (the devil’s advocate technique). Both interventions were compared with a group that received no prompt to ponder additional information.
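To illustrate the logic of this test, the sketch below contrasts a blanket drop in confidence with a genuine calibration improvement across two invented domains. The domain labels echo the examples above, but every figure is hypothetical rather than a result from the paper.

```python
# Hypothetical sketch of the third study's logic: a genuine calibration
# improvement lowers confidence mainly where the control group is
# overconfident, not where it is underconfident. All figures are invented.

control = {   # domain: (mean confidence %, accuracy %) with no prompt
    "state populations": (75, 60),   # overconfident domain
    "calorie counts": (55, 65),      # underconfident domain
}
after_unknowns = {   # same domains after the consider-the-unknowns prompt
    "state populations": (66, 60),
    "calorie counts": (54, 65),
}

for domain, (conf_control, accuracy) in control.items():
    conf_prompted, _ = after_unknowns[domain]
    baseline_gap = conf_control - accuracy      # positive means overconfident
    confidence_drop = conf_control - conf_prompted
    print(f"{domain}: baseline gap {baseline_gap:+d}, drop {confidence_drop}")

# A blanket confidence reduction would show similar drops in both domains;
# improved calibration shows a large drop only where the baseline gap is positive.
```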
As we predicted, considering the unknowns only reduced confidence when it was misplaced (in overconfident domains), whereas playing devil’s advocate reduced confidence equally in subject areas that encouraged overconfident and underconfident responses. The figure below shows confidence and overconfidence for each of the two techniques across different domains.
Striking the right balance
A great deal of research on overconfidence has attributed this phenomenon to confirmation bias, the systematic tendency to seek or overweight evidence for a preferred hypothesis over its alternatives. However, our research shows that the classic devil’s advocate technique can be a blunt instrument: When people start considering all the reasons they could be wrong, some lose confidence unnecessarily. If their assessments were well calibrated to begin with, prompting people to second-guess themselves can push them into underconfidence. For instance, consider a CFO evaluating a potential acquisition. While no shareholder wants the CFO to be overconfident, underconfidence may be equally costly and result in missed opportunities.
In our view, overconfidence often arises when people neglect to consider the information they lack. Our suggestion for managers is simple. When judging the likelihood of an event, take a pen and paper and ask yourself: “What is it that I don’t know?” Even if you don’t write out a list, the mere act of mulling the unknowns can be useful. And too few people do it. Often, they are afraid to appear ignorant and to be penalised for it. But any organisation that allows managerial overconfidence to run amok can expect to pay a hefty price, sooner or later.