Schedule - Parallel Session 1 - Confidence - IDL Boardroom - 11:00 - 12:30
Uncertainty Regulation Theory: How the Feeling of Uncertainty Shapes Decision Making
Daniel Navarro-Martinez; Jordi Quoidbach
We present a theory of decision-making behavior based on the idea that when people find themselves in a situation in which they have to make a choice, they experience, to some degree, a feeling of uncertainty. We propose that regulating that feeling is a fundamental motivation in decision-making processes. We show how such an uncertainty-regulation account can explain and unify a large number of well-known decision-making phenomena that have been given diverse explanations, such as risk aversion, temporal discounting, loss aversion, ambiguity aversion, and social-influence effects. Our theory also makes unique predictions about how those phenomena are affected by the feeling of uncertainty that people experience in decision situations. Those predictions provide guidelines for modifying decision-making patterns (sharpening or mitigating them) by influencing feelings of uncertainty. Such guidelines have direct applications in a variety of environments in which people make choices, such as household decisions, consumer decisions, and decisions made in organizations. In particular, according to our theory, making consumers or members of an organization feel more uncertain when they have to decide is likely to result in more risk aversion, more present-biased choices, more loss aversion, and greater susceptibility to the influence of other people; making them feel less uncertain is likely to mitigate those patterns. We also present three experiments that test some of the main ideas behind our theory and demonstrate that feelings of uncertainty do indeed play a significant role in decision-making processes. The first experiment shows that the potential of the available alternatives to generate feelings of uncertainty, as well as people’s sensitivity to that uncertainty, predict decision behavior in line with the theory.
The other two experiments demonstrate that manipulating the feeling of uncertainty that people experience significantly affects their decisions, as predicted. The experiments touch on a variety of areas of decision making, which also demonstrates that feelings of uncertainty are a relevant explanatory factor across different types of choices. Overall, the theory and evidence presented here suggest that behind many of the decision-making phenomena and biases uncovered in recent decades may lie a fundamental human drive: the desire to avoid feeling uncertain. Our work provides a foundation for analyzing decision making from that point of view.
Does Overprecision Correlate with Trading Tendency? Alternative Approach
Doron Sonsino; Amir Levkowitz
The experimental evidence regarding the link between the tendency for overprecision (Moore and Healy, 2008) and excessive trading is mixed. Glaser and Weber (2007), for instance, show that overplacement (“better than average”) significantly correlates with trading frequency, while various measures of miscalibration exhibit no such link. Deaves et al. (2009), on the contrary, find a significant correlation between task-relevant miscalibration scores and trading frequency in experimental asset markets, and also claim a negative effect of overconfidence on trading performance. Fellner-Röhling and Krügel (2014) recently illustrated that the tendency to overweight private information in signal detection tasks predicts trading intensity in experimental asset markets. This intriguing result motivates further examination of the link between overprecision and trading. The proposed paper introduces an innovative overprecision task and tests its predictive power for personal trading in framed-field experimental tasks. MBA students and traders recruited in financial Web forums were first instructed to submit median predictions for the future performance of leading stocks (r). Subjects subsequently assessed the likelihood that the uncertain future return would be higher than r plus some constant k, and similarly assessed the probability that the uncertain return would be smaller than r minus k. The likelihood assessment tasks were incentivized using binary quadratic scoring rules, which have been shown to prevent the bias that the standard QSR may induce. The tail-event likelihood assessments were normalized to capture the personal tendency for informational confidence (Sonsino and Regev, 2013). The experiment was run in two phases, a few weeks apart. In the first phase, subjects submitted point predictions and tail likelihood assessments for 3 familiar stocks. This was followed by a second questionnaire consisting of 3 framed-field stock trading assignments.
Again, the tasks were carefully designed to neutralize the effect that personal risk preferences or diversification concerns may exert on the tendency to trade. The results of the experiment, in terms of the correlation between informational confidence and trading, were weak, but these disappointing findings may stem from overly ambitious aspects of the design (random drawing of the stocks for the assignments, on an individual basis, to enhance ecological validity). The hypotheses will be tested again in a less ambitious follow-up experiment that attempts to control noise and gain more power for testing the link between overprecision and the inclination to trade stocks.
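As background to the incentivization scheme mentioned above, a quadratic (Brier-type) scoring rule for a binary event can be sketched as follows. This is a generic illustration, not the authors’ exact rule; the payoff scale (`stake`) is an assumption for the example.

```python
def binary_qsr_payoff(reported_prob: float, event_occurred: bool,
                      stake: float = 10.0) -> float:
    """Quadratic payoff for a reported probability of a binary event.

    The rule is proper: a judge maximizes expected payoff by reporting
    their true subjective probability of the event.
    """
    outcome = 1.0 if event_occurred else 0.0
    return stake * (1.0 - (outcome - reported_prob) ** 2)

# A judge who believes the event has probability 0.7 does best, in
# expectation, by reporting exactly 0.7 rather than shading up or down:
expected = lambda q: 0.7 * binary_qsr_payoff(q, True) + 0.3 * binary_qsr_payoff(q, False)
```

Propriety is the point: `expected(0.7)` exceeds both `expected(0.5)` and `expected(0.9)`, so truthful reporting is the optimal strategy.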
Confidence Biases and Learning Among Intuitive Bayesians
Louis Levy-Garboua; Muniza Askari; Marco Gazel
In many circumstances, people appear to be “overconfident” in their own abilities. We are concerned with how people overestimate, or sometimes underestimate, their own ability to perform a task in isolation. We reconcile the cognitive-bias interpretation of overestimation with recent literature on learning to be over- (under-)confident. To this end, we design an incentivized real-effort experiment that is an experimental analog of the popular “double-or-quits” game. A total of 410 subjects performed a task that becomes increasingly difficult – i.e., risky – over time. “Doublers” could substantially increase their gains if successful, but would lose part of their earnings and step out of the game if they failed. By comparing, for three levels of difficulty, the subjective probability of success with the objective frequency at three moments before and during the task, we examine the speed of learning one’s ability for this task and the persistence of overconfidence with experience. We conjecture that subjects will first be underconfident when the task is easy and become overconfident as the task gets difficult, which is the hard-easy effect. However, a task that a low-ability individual finds difficult may look easy to a high-ability person. Thus, we should observe that overconfidence declines with ability, which is the Dunning-Kruger effect. The gradient of task difficulty was manipulated after completion of level 1, defining two different tracks with the same requirement to reach the highest level. A third treatment was also considered, in which subjects could choose their preferred track. We find that people on average learn to be overconfident faster than they learn their true ability. We present a new “intuitive-Bayesian” model of confidence which, while resting solely on a Bayesian representation of the cognitive process, describes the behavior of subjects who are myopic, discriminate poorly, and thus make measurement errors.
Above all, a persistent doubt about their true ability is responsible for their perception of (available) illusory contrarian signals that make them believe either that they will fail when they should succeed, or that they will succeed when they should fail. We show that limited discrimination of objective differences and myopia can be responsible for large prediction errors, which learning should reduce. However, if subjects act as Bayesian learners, the fundamental uncertainty about one’s true ability causes systematic and robust confidence biases, namely the hard-easy effect, the Dunning-Kruger effect, conservative learning from experience, and the overprecision phenomenon. Moreover, these biases are likely to persist, since the Bayesian aggregation of past information consolidates the accumulation of errors, and the perception of illusory signals generates conservatism and under-reaction to events. Taken together, these two features may explain why intuitive Bayesians make systematically wrong predictions of their own performance.
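The mechanism by which illusory contrarian signals distort Bayesian learning can be illustrated with a minimal Beta-Bernoulli sketch. This is our own illustrative construction, not the authors’ model: a learner updates a belief about their success probability, but each observed outcome is flipped with probability `illusion_rate`, standing in for the perceived contrarian signals.

```python
import random

def bayesian_ability_learning(true_p: float, n_trials: int,
                              illusion_rate: float = 0.2,
                              seed: int = 0) -> float:
    """Posterior mean of one's success probability under a Beta(1, 1)
    prior, when each Bernoulli(true_p) outcome is misperceived
    (flipped) with probability `illusion_rate`."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_trials):
        outcome = rng.random() < true_p      # actual success or failure
        if rng.random() < illusion_rate:     # illusory contrarian signal
            outcome = not outcome            # success seen as failure, or vice versa
        if outcome:
            successes += 1
    return (1 + successes) / (2 + n_trials)  # Beta posterior mean
```

With `illusion_rate > 0` the posterior converges not to the true ability but to a value pulled toward 0.5, so even an ideal Bayesian aggregator consolidates the perceptual errors rather than correcting them.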
Miscalibration of Probability Intervals for Events Involving Aleatory and Epistemic Uncertainties
Saemi Park; David Budescu
When judges are asked to estimate probability intervals (PIs) for unknown quantities, they often do not adjust their estimates to match the prescribed probability levels. Typically, 90% subjective PIs are too narrow. This has been interpreted as evidence of overconfidence, but recent research has shown that 50% and 90% PIs are indistinguishable, and concluded that such PIs are not a proper way to diagnose over- or under-confidence. Most studies employed items involving epistemic (internal) uncertainty, which reflects the incomplete and imperfect knowledge of the judges, and asked for only one PI for every item. Budescu and Du (2007) asked for multiple PIs from every judge and found better calibration. Teigen & Jorgenson (2005) predicted that judges would be better calibrated when generating intervals that evoke aleatory (external) uncertainty. We examine whether the (in)sensitivity to the target confidence level (90% or 50%) varies across the two types of uncertainty (aleatory and epistemic), and whether elicitation procedures that require multiple judgments are superior to one-shot elicitations. Participants were randomly assigned to aleatory or epistemic uncertainty conditions and were presented with yoked items. For example, in the aleatory condition judges generated 50% or 90% PIs for life expectancy across all 193 countries in the UN, while in the epistemic condition they judged PIs for the life expectancy in Brazil (the median value in this distribution). Judges provided two PIs for 20 different items: in the experimental conditions they judged 90% and 50% PIs, and in the control conditions they judged 90% or 50% PIs twice. In each case they estimated the upper and lower bounds as well as a best estimate of the target quantity. The intervals were evaluated in terms of their hit (coverage) rate, their width, and their location.
The analysis of the first-period PIs confirmed the judges’ insensitivity to the target confidence level (hit rates are 86% for 90% PIs and 82% for 50% PIs under epistemic uncertainty, and coverage rates are 76% and 73%, respectively, under aleatory uncertainty), so external uncertainty does not make judges more sensitive to the desired probability level. Results from the joint analysis of the two periods mimic the case where judges are asked to provide multiple fractiles from which several PIs can be inferred simultaneously (Alpert & Raiffa, 1982). In the presence of aleatory uncertainty, judges differentiate between 90% and 50% PIs (the coverage rate of the 90% PIs is higher than that of the 50% PIs) compared to the control conditions. Under epistemic uncertainty, we also find that the 90% PIs are wider and have higher hit rates than their 50% counterparts. Finally, we find that both the 90% and 50% PIs are wider under aleatory uncertainty than under epistemic uncertainty. We recommend avoiding single-shot estimates and relying instead on multiple estimates, to obtain better calibrated intervals and allow valid inferences regarding the judges’ calibration.
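The two evaluation criteria used throughout this abstract, the hit (coverage) rate and the interval width, are straightforward to compute. A minimal sketch follows; the function is generic and the four intervals and true values are hypothetical data for illustration, not from the study.

```python
def evaluate_intervals(intervals, true_values):
    """Hit (coverage) rate and mean width of a set of probability intervals.

    intervals: list of (lower, upper) bounds, one per judged item.
    true_values: the realized value of each target quantity.
    """
    hits = sum(lo <= v <= hi for (lo, hi), v in zip(intervals, true_values))
    widths = [hi - lo for lo, hi in intervals]
    return hits / len(intervals), sum(widths) / len(widths)

# Hypothetical 90% PIs for four quantities, and the realized values:
pis = [(70.0, 85.0), (60.0, 75.0), (65.0, 80.0), (72.0, 78.0)]
truth = [74.2, 81.0, 70.5, 75.1]
hit_rate, mean_width = evaluate_intervals(pis, truth)
```

A well-calibrated judge's 90% PIs should achieve a hit rate near 0.90; here the second interval misses, so the hit rate is 0.75, which in a larger sample would signal intervals that are too narrow.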