Schedule - Parallel Session 2 - Noise and Imprecision
IMC Auditorium - 14:00 - 15:30
Imprecise Preferences: A Model and a Measurement
Imprecision in preferences has been used to explain a broad range of anomalies. Surprisingly, there are no measures with proper material incentives, and formal models with a rigorous structure are rare. In this paper I propose a model of imprecise preferences with an axiomatic foundation. Imprecise preferences are captured by individuals having not a single utility function but a set of them. Individuals perform standard expected utility calculations given any specific utility function and then take a subjective expectation, a concave transformation of the standard expected utilities, over the set of utility functions. Based on the model, an incentive-compatible mechanism to measure imprecision in preferences is developed. It can be shown that two empirical puzzles – the willingness-to-pay versus willingness-to-accept (WTP-WTA) gap as well as the present bias – are natural consequences of imprecise preferences.
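The valuation rule described above can be sketched in a few lines. This is an illustrative reading, not the paper's formal model: the utility set, the uniform subjective expectation, and the choice of square root as the concave transformation are all assumptions made here for concreteness.

```python
import math

def expected_utility(lottery, u):
    """Standard EU of a lottery given one specific utility function.
    lottery: list of (outcome, probability) pairs."""
    return sum(p * u(x) for x, p in lottery)

def imprecise_value(lottery, utility_set, phi=math.sqrt):
    """Apply a concave transform phi to each EU, then take the
    subjective expectation (here: a uniform average) over the set."""
    values = [phi(expected_utility(lottery, u)) for u in utility_set]
    return sum(values) / len(values)

# Imprecision modeled as a SET of power-utility functions (illustrative
# exponents), rather than a single utility function.
utility_set = [lambda x, r=r: x ** r for r in (0.4, 0.6, 0.8)]

lottery = [(100, 0.5), (0, 0.5)]
print(imprecise_value(lottery, utility_set))
```

Because phi is concave, spreading valuation across the utility set pulls the subjective value below what a single "average" utility function would deliver, which is the kind of asymmetry that can generate a WTP-WTA gap.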
Intrinsic and Extraneous Noise in Risky Choice Experiments
Graham Loomes and Lukasz Walasek
Participants’ responses in decision experiments are ‘noisy’: when presented with exactly the same choice at different moments within the same experiment, many people are liable to answer differently from one moment to another. Some of this may be due to intrinsic variability in the way people generate their decisions; but the experimental environment may also have an impact – e.g. the complexity of the task, the workload, the (lack of) incentives. Moreover, in principle, extraneous and intrinsic factors may interact, and may operate to different degrees for different individuals, making it harder to identify core preferences. Can we identify, separate, and measure such effects? We present some results that may shed light on these issues.
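As a toy illustration of the distinction (not the authors' design), one can simulate a respondent whose answers flip with a rate that combines an intrinsic component and an extraneous, environment-dependent one, and then look at how often two presentations of the identical choice agree. The additive noise structure and the rates below are assumptions for the sketch.

```python
import random

def respond(true_pref, intrinsic, extraneous, rng):
    """One binary response: the true preference, flipped with a
    probability that adds intrinsic and extraneous noise."""
    flip = intrinsic + extraneous  # additivity is an assumption here
    return true_pref if rng.random() > flip else 1 - true_pref

def consistency_rate(intrinsic, extraneous, n=10_000, seed=1):
    """Share of simulated respondents who give the SAME answer to the
    identical choice presented twice."""
    rng = random.Random(seed)
    same = sum(
        respond(1, intrinsic, extraneous, rng)
        == respond(1, intrinsic, extraneous, rng)
        for _ in range(n)
    )
    return same / n

print(consistency_rate(0.10, 0.00))  # simple task: intrinsic noise only
print(consistency_rate(0.10, 0.15))  # complex task: extraneous noise added
```

With flip rate f, the expected agreement is (1-f)^2 + f^2, so even purely intrinsic noise of 0.10 leaves observable inconsistency, and extraneous factors push it further; the empirical problem is that only the combined rate is directly observed.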
True and Error Models of Response Variation
Michael H. Birnbaum
Theories of decision-making are based on axioms or imply theorems that can be tested empirically. For example, expected utility (EU) theory assumes transitive preferences: if A is preferred to B and B is preferred to C, then A should be preferred to C. However, human behavior shows variability; the same person may make different choice responses when presented with the same choice problem. A person may choose A over B on one trial, and B over A on another trial. How do we decide whether a given rate of violation exceeds what would be expected from the inherent variability of responses? A standard approach has been to test stochastic properties such as weak stochastic transitivity or the triangle inequality on the binary choice proportions. It will be shown that this method can lead to systematically wrong conclusions regarding transitivity when the data contain random error. Similarly, EU theory also implies independence properties whose violations are known as Allais paradoxes. For example, EU implies R = ($98, .1; $2, .9) is preferred to S = ($47, .2; $2, .8) if and only if R’ = ($98, .9; $2, .1) is preferred to S’ = ($98, .8; $48, .2). But some people switch. Are these reversals due to variability of responses, or are they “real”? A standard approach has been to compare the number of cases that switch from R to S’ with the number that switch in the opposite direction, using the test of correlated proportions. It will be shown that this statistical test is not diagnostic and easily leads to wrong conclusions. This talk will present a family of true and error (TE) models that can be applied to individual or group data. The TE models require the experimenter to present the same choice problems at least twice to each person in each session. Variability of responses by the same person to the same choice problem in the same session is used to estimate the error variability.
The TE models are themselves testable, and they contain special cases representing the properties to be tested. This means that there are at least two statistical tests in any given application: first, one tests the TE model; second, one tests the special case (assuming the formal property such as transitivity or branch independence). TE models are more general than the “tremble” model. They also include the transitive Thurstone and Luce models, as well as certain random preference models, as special cases. They are generic, like the Analysis of Variance: whereas in ANOVA the total variance is broken down into components representing main effects, interactions, and errors, in TE, response variability is decomposed into probabilities of true response patterns and error rates. This talk will present both hypothetical and real data to illustrate how TE analysis works and how it can lead to different theoretical conclusions from commonly applied methods that make unnecessary assumptions.
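The repeated-presentation logic can be made concrete with the simplest possible case: one choice problem shown twice in a session. This is a minimal sketch of that special case, not the full family of TE models from the talk; equal, independent error rates across the two presentations are assumptions of the sketch.

```python
def pattern_probs(p, e):
    """TE probabilities of the four response patterns over two
    presentations of one choice. p = probability the TRUE preference
    is A; e = error rate (same on both presentations, independent)."""
    return {
        "AA": p * (1 - e) ** 2 + (1 - p) * e ** 2,
        "AB": e * (1 - e),  # one error, regardless of true preference
        "BA": e * (1 - e),
        "BB": p * e ** 2 + (1 - p) * (1 - e) ** 2,
    }

def estimate_error(n_inconsistent, n_total):
    """Invert P(AB or BA) = 2e(1-e) for the root with e <= 1/2,
    using the within-session inconsistency rate."""
    q = n_inconsistent / n_total
    return 0.5 * (1 - (1 - 2 * q) ** 0.5)

print(pattern_probs(p=0.8, e=0.1))
print(estimate_error(18, 100))  # 18 of 100 within-session reversals
```

The key point the sketch illustrates: the off-diagonal patterns AB and BA depend only on the error rate, so within-session repetition identifies e separately from the true-preference probability p.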
Random Utility Without Regularity
Michel Regenwetter and Johannes Mueller-Trede
Classical random utility models (Falmagne, 1978; Barbera and Pattanaik, 1985) imply a consistency property called regularity. Decision makers who satisfy regularity are at least as likely to choose an option x from a set X of available options as from any larger set Y that contains X. In light of ample empirical evidence for context-dependent choice (e.g., Huber, Payne and Puto, 1982) that violates regularity, some researchers have questioned the descriptive validity of all random utility models. We show that not all random utility models imply regularity. We propose a general framework for random utility models that accommodate context dependence and may violate regularity. Our framework’s mathematical foundations lie in polyhedral combinatorics. The virtues of a geometric perspective on decision-theoretic models have long been recognized (e.g., Iverson and Falmagne, 1985). Only recently has the geometry of decision theory garnered increased attention, however, as mathematical advances have widened the scope for its applications (e.g., Cavagnaro and Davis-Stober, 2014; Fiorini, 2004; McCausland and Marley, 2013, 2014; Myung, Karabatsos and Iverson, 2005). Our treatment shows that, viewed through the lens of polyhedral combinatorics, classical and context-dependent random utility models are virtually indistinguishable: both are characterized by convex polytopes. Their descriptive performance in empirical settings may thus be assessed and compared using methods of order-constrained inference (Davis-Stober, 2009; Klugkist and Hoijtink, 2007).
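Why classical random utility implies regularity can be seen in a small numerical sketch: if choice probabilities are induced by a distribution over preference rankings, the set of rankings under which x is best in a set can only shrink when the set grows. The rankings and weights below are arbitrary illustrative numbers, not from the paper.

```python
from itertools import permutations

# A probability distribution over the six rankings of three options
# (classical random utility in the Falmagne sense; weights illustrative).
options = ("x", "y", "z")
weights = (0.3, 0.2, 0.15, 0.15, 0.1, 0.1)
ranking_probs = dict(zip(permutations(options), weights))

def choice_prob(option, choice_set):
    """P(option chosen from choice_set) = total probability of the
    rankings in which option is the best-ranked member of the set."""
    return sum(
        w for ranking, w in ranking_probs.items()
        if min(choice_set, key=ranking.index) == option
    )

# Regularity: adding z to {x, y} cannot raise x's choice probability.
p_small = choice_prob("x", {"x", "y"})
p_large = choice_prob("x", {"x", "y", "z"})
assert p_large <= p_small
print(p_small, p_large)
```

Context-dependent models of the kind the abstract describes drop exactly this fixed-distribution structure, which is why they can violate the inequality while still, as the authors show, being characterized by convex polytopes.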