Conjoint studies are renowned for yielding a great deal of strategic insight. A single conjoint study can address price optimization relative to profit, revenue, unit sales, or market share; the optimal product feature set; cannibalization patterns; market response to competitor actions or new product introductions; the dollar value of brand equity; and many other issues.
However, conjoint studies capable of providing such rich findings tend to be lengthy and often time-consuming to field. A typical choice-based conjoint study, for example, will include 12-20 choice tasks in addition to any other questions in the survey, and if the number of attributes is large, even more choice tasks may be desired. Conjoint exercises can therefore be confusing and fatiguing experiences for respondents.
The development of Hierarchical Bayes (HB) techniques in the late 1990s has allowed not only the estimation of individual-level utilities for choice-based conjoint but also more accurate individual-level utility estimation for ratings-based conjoint. What has been generally ignored by the commercial research community is that the efficiency of HB also allows a reduction in the number of choice tasks required to support individual-level utility estimation. Current practice is to design choice-based conjoint studies as if HB did not exist and then to apply HB to the resulting data. This is safe but inefficient.
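As a hypothetical illustration (not drawn from this paper), the individual-level utilities that HB estimates are part-worths: one value per attribute level per respondent. Under the multinomial logit model standard in choice-based conjoint, these part-worths yield the probability that a respondent chooses each alternative in a choice task. The sketch below uses invented part-worth values purely for demonstration.

```python
import math

# Hypothetical part-worth utilities for one respondent, of the kind
# estimated at the individual level by Hierarchical Bayes.
# (Illustrative values only -- not from the paper.)
partworths = {
    ("brand", "A"): 0.8, ("brand", "B"): -0.4,
    ("price", "$10"): 0.5, ("price", "$15"): -0.5,
}

# One choice task: each alternative is a bundle of attribute levels.
task = [
    [("brand", "A"), ("price", "$15")],
    [("brand", "B"), ("price", "$10")],
]

def choice_probabilities(task, partworths):
    """Multinomial logit: P(i) = exp(U_i) / sum_j exp(U_j),
    where U_i is the sum of the alternative's part-worths."""
    utilities = [sum(partworths[level] for level in alt) for alt in task]
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

probs = choice_probabilities(task, partworths)
```

Each observed choice task constrains these part-worths, which is why reducing the number of tasks per respondent trades off against the precision of the individual-level estimates.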
The introduction of web-based surveys into common practice has given the practitioner a relatively inexpensive method for fielding large-sample studies. The combination of large samples with the efficiency of HB may create an opportunity to dramatically reduce the number of choice tasks per respondent needed to estimate acceptably accurate disaggregate models.
Reducing the number of choice tasks or conjoint ratings shown per respondent offers several advantages, but it also carries several potential disadvantages.
Potential applications for the abbreviated task set approach include any study that would benefit from a very brief interview but can generate a large sample: trade show and conference floor intercepts; web or telephone surveys; studies that combine conjoint with other issues, such as segmentation, brand positioning, or attitude and usage, and would otherwise result in an excessively long interview; and realistic-environment studies such as laboratory simulations or control store tests.
The purpose of this paper is to empirically assess the net effect of a reduced task set on model error, rather than to address each potential factor separately.