Identifying Differences in Preferences Due to Dimensionality in Stated Choice Experiments: a Cross Cultural Analysis


J Rose, D Hensher, University of Sydney, AU; S Caussade, J de D Ortúzar, Pontificia Universidad Catolica de Chile, CL; J Rong-Chang, National Chinan International University, TW


Three SP data sets were collected using the same survey instrument in three different countries. Each data set uses 16 designs, allowing an analysis of whether different SP design dimensionality has differential influences across cultures.


The generation of Stated Choice (SC) experiments has evolved to become an increasingly significant but complex component of SC studies. Typically, SC experiments present sampled respondents with a number of hypothetical scenarios consisting of a universal but finite number of alternatives that differ on a number of attribute dimensions and require that these respondents specify their preferred alternative. These responses are then pooled before being used to estimate parameter weights for each of the design attributes (or in some cases, even attribute levels).
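By way of illustration, the pooling-and-estimation step described above can be sketched as a simple multinomial logit fit on synthetic pooled responses. Everything below is hypothetical (data, dimensions, parameter values); the models estimated in the paper itself are considerably richer (mixed logit with error components):

```python
# Minimal sketch only: fitting taste weights to pooled SC responses
# with a multinomial logit (MNL). All data here are synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Hypothetical design: 200 choice tasks, 3 alternatives, 2 attributes
n_tasks, n_alts = 200, 3
X = rng.uniform(0, 1, size=(n_tasks, n_alts, 2))  # attribute levels
true_beta = np.array([-1.5, -2.0])                # "true" taste weights

# Simulate choices: systematic utility plus Gumbel noise implies MNL
utility = X @ true_beta + rng.gumbel(size=(n_tasks, n_alts))
choice = utility.argmax(axis=1)

def neg_log_lik(beta):
    v = X @ beta                                  # systematic utilities
    v = v - v.max(axis=1, keepdims=True)          # numerical stability
    p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(n_tasks), choice]).sum()

res = minimize(neg_log_lik, x0=np.zeros(2), method="BFGS")
beta_hat = res.x  # pooled estimates of the attribute weights
```

The pooled estimates recover the signs of the underlying taste weights; attribute-level-specific weights would simply add dummy-coded columns to `X`.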

The need for respondents to repeatedly process information on the attributes and attribute levels of alternatives within SC surveys has led to continual questioning of respondents' ability to undertake such tasks accurately. Of concern is the cognitive load placed upon respondents in answering SC surveys, as well as possible fatigue effects induced by repeated questioning. Research efforts have tended to focus upon the impact that various design characteristics have upon respondents' ability to respond to choice tasks. Specific issues examined have included the number of alternatives within the task, the number of attributes, the number of attributes and alternatives jointly, the range of attribute levels, and the number of choice profiles shown to respondents. More recently, Hensher (2004, 2006, 2006a) and Caussade et al. (2005) examined all of the above effects simultaneously.

Of particular concern is the fact that research examining the impact of design dimensionality upon SC studies has often yielded contradictory evidence. For example, early research found that respondents often use the first few choice situations of an SC task to adapt to the task and develop a decision strategy. In this vein, Meyer (1977) demonstrated that individuals' decision calculi stabilise within three choice situations given a three-attribute choice task. A number of researchers have examined the impact that the number of profiles has upon the behavioural responses of respondents completing SC tasks. In each instance, these researchers found no evidence that the number of choice tasks has more than a marginal impact upon behavioural responses. Nevertheless, Bradley and Daly (1994) found contradictory evidence in the form of increased unexplained variance as the number of choice task replications increased. A similar finding was reported by Caussade et al. (2005), although they concluded that other design dimensions have a larger impact upon error variance. As such, the number of choice profiles respondents are capable of handling remains contentious within the literature, and there persists a perception that respondents can handle no more than a small number of choice tasks (usually anywhere between three and 16).

One problem in forming a clear understanding of the exact influence different design dimensions have upon SC results is that different researchers often use different experiments, conducted at different times and in different cultural settings. This makes drawing concrete conclusions difficult. Interestingly, Caussade et al. (2005) collected data in Chile using the same survey instrument developed by Hensher (2004a,b,c) in Australia. A third survey using the same instrument was conducted in Taiwan in 2006. Combined, these three data sets offer a unique opportunity to examine whether design dimensionality has a similar impact upon results obtained from SC studies in three very different cultures (unfortunately, temporal effects cannot be examined, as the data were collected three years apart).

This paper examines combining the three data sets and testing whether the influences of design dimensionality are culturally biased or constant across the three cultures. The research starts by examining simple questions, such as whether respondents from the three countries devote the same amount of time to SC questions (data that were passively collected), before moving to a more advanced and scientifically rigorous examination of differences in the population moments of random parameter distributions, as well as influences upon the error components of the models.
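As a hedged sketch of the kind of pooled comparison involved, differences in error variance between two data sources can be captured by estimating a relative scale parameter on one source while pooling (in the spirit of standard data-pooling scale tests). The data below are synthetic and all names hypothetical; the paper's actual tests cover three data sets and richer specifications:

```python
# Illustrative sketch: pooling two synthetic SC data sets with a relative
# scale parameter mu on the second source. A smaller mu corresponds to a
# larger error variance in that source. Everything here is synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n, n_alts, beta = 300, 3, np.array([-1.5, -2.0])

def simulate(scale):
    X = rng.uniform(0, 1, size=(n, n_alts, 2))
    u = scale * (X @ beta) + rng.gumbel(size=(n, n_alts))
    return X, u.argmax(axis=1)

X1, y1 = simulate(scale=1.0)   # reference source (scale fixed at 1)
X2, y2 = simulate(scale=0.5)   # noisier source (larger error variance)

def nll(theta):
    b, mu = theta[:2], np.exp(theta[2])   # exp keeps the scale positive
    total = 0.0
    for X, y, m in ((X1, y1, 1.0), (X2, y2, mu)):
        v = m * (X @ b)
        v = v - v.max(axis=1, keepdims=True)
        p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
        total -= np.log(p[np.arange(len(y)), y]).sum()
    return total

res = minimize(nll, x0=np.zeros(3), method="BFGS")
b_hat, mu_hat = res.x[:2], np.exp(res.x[2])  # mu_hat below 1 flags extra noise
```

A likelihood-ratio test of this pooled model against separate models would then indicate whether preferences, or only error variances, differ between the sources.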


Association for European Transport