
Choice Experimental Designs Are Different

Laptop vendors need to know which features are valued in a laptop and how much customers are willing to pay for them. Manufacturers could learn this through a market research technique known as a choice experiment. This post covers the elements of experimental design for choice experiments using JMP 8.


But first, I've got to give credit where it's due. Both the R&D and examples used here are the work of Bradley Jones and Chris Gotwalt, who implemented the techniques in JMP 8.


So let's design a choice experiment to figure out how valuable a number of features are to customers. In particular, we focus on the following:


Factor         Levels
Speed          Fast, Slow
Disk Size      Big, Small
Battery Life   Long, Short
Price          Cheap, Expensive
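To keep the examples concrete, here is a minimal Python sketch (Python stands in for JMP here; the factor names and levels come from the table, everything else is illustrative) that enumerates all candidate laptop profiles:

```python
# A minimal sketch in plain Python (standing in for JMP) of the factor
# structure above. The factor names and levels come from the table; the
# code itself is illustrative.
from itertools import product

FACTORS = {
    "Speed":        ["Fast", "Slow"],
    "Disk Size":    ["Big", "Small"],
    "Battery Life": ["Long", "Short"],
    "Price":        ["Cheap", "Expensive"],
}

# Each candidate profile maps factor name -> level; 2^4 = 16 in all.
profiles = [dict(zip(FACTORS, levels)) for levels in product(*FACTORS.values())]
print(len(profiles))  # 16
```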


In the experiment, a subject has to choose between two configurations, selecting the configuration that is most appealing. For example, one choice might be between an expensive but full-featured laptop, and a cheap but feature-compromised one.


Choice   Speed   Disk Size   Battery Life   Price
a        Fast    Big         Long           Expensive
b        Slow    Small       Short          Cheap


Suppose we used an ordinary experimental design for a choice experiment, with each choice set specified like a block in a traditional design. Would that be a good idea?


Consider the following choice between a and b:


Choice   Speed   Disk Size   Battery Life   Price
a        Fast    Small       Long           Expensive
b        Fast    Small       Long           Expensive


These are runs in which the factors have the same values. There is no real choice here; there is no information to be gained because the choice is arbitrary. Thus, we have our first rule of choice experiments:


Guideline: In choice experiments, there are no within-block replicates, i.e., the alternatives tested have to be different in order to learn something.
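A design-screening script could enforce this rule mechanically. Here is a hedged Python sketch, building on the profile dicts above; the function and variable names are illustrative:

```python
# Hedged sketch of a screening check for this rule: reject any choice set
# whose alternatives include duplicates, since such a "choice" is arbitrary.
# A choice set here is a list of profile dicts like those built earlier.
def has_replicates(choice_set):
    seen = set()
    for profile in choice_set:
        key = tuple(sorted(profile.items()))
        if key in seen:
            return True  # two identical alternatives: no information
        seen.add(key)
    return False

bad_set = [
    {"Speed": "Fast", "Disk Size": "Small", "Battery Life": "Long", "Price": "Expensive"},
    {"Speed": "Fast", "Disk Size": "Small", "Battery Life": "Long", "Price": "Expensive"},
]
print(has_replicates(bad_set))  # True -> discard this choice set
```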


This is quite different from, say, an industrial response surface design, where replicates are valuable in estimating effects with greater precision, in getting a more precise estimate of experimental error, and in getting an estimate of pure error for a lack-of-fit test. So choice designs differ from traditional experimental designs.


Now let's consider a situation in which one factor outweighs all the others. This can easily happen if the levels of one factor are spread much farther apart than the levels of another. For example, suppose the high price was so high that no one would ever choose it, no matter what the other factor values were. Now consider running the typical classical design, in which all the factors vary within blocks. Suppose each subject is given three choice questions, each with two choices.


Set   Choice   Speed   Disk Size   Battery Life   Price
1     a        Slow    Big         Long           Cheap
1     b        Fast    Small       Short          Expensive
2     a        Slow    Small       Long           Expensive
2     b        Fast    Big         Short          Cheap
3     a        Fast    Small       Short          Expensive
3     b        Slow    Big         Long           Cheap



This design can tell us a lot about a dominant factor, like price. Can it show anything else? If price dominates the decision, the subject doesn't even have to look at the other factor values to make a decision. The effects of the other factors become unmeasurable, beyond learning that they are smaller than the price effect. You sacrifice learning about the other factors. To fix this, we need to keep some factors constant within a choice set for some of the trials.
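A toy simulation makes the problem vivid. The part-worth utilities below are invented for illustration; the point is only that an overwhelming Price weight leaves nothing to learn about the other factors:

```python
# Toy simulation of the dominant-factor problem. The part-worth utilities
# are invented for illustration: Price is given an overwhelming weight, so
# every simulated choice is decided by Price alone and the data carry no
# information about the other factors.
import random

random.seed(1)
PART_WORTHS = {
    ("Speed", "Fast"): 1.0,
    ("Disk Size", "Big"): 0.8,
    ("Battery Life", "Long"): 0.6,
    ("Price", "Cheap"): 50.0,  # dominant: swamps all other effects
}

def utility(profile):
    # Deterministic part-worths plus random taste variation.
    return sum(PART_WORTHS.get(fl, 0.0) for fl in profile.items()) + random.gauss(0, 1)

choice_set = [
    {"Speed": "Slow", "Disk Size": "Big", "Battery Life": "Long", "Price": "Cheap"},
    {"Speed": "Fast", "Disk Size": "Small", "Battery Life": "Short", "Price": "Expensive"},
]
picks = [max(choice_set, key=utility)["Price"] for _ in range(1000)]
print(picks.count("Cheap"))  # ~1000: Price decides every single choice
```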


Guideline: For each factor, include some choice sets that hold that factor constant across the alternatives.
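This guideline is also mechanically checkable. A hedged sketch, with profiles trimmed to two factors for brevity:

```python
# Sketch of a design-level check for this guideline: for each factor, verify
# that at least one choice set in the design holds it constant across the
# alternatives. `design` is a list of choice sets (lists of profile dicts).
def constant_somewhere(design, factor_names):
    return {f: any(len({p[f] for p in cs}) == 1 for cs in design)
            for f in factor_names}

design = [
    [{"Speed": "Fast", "Price": "Cheap"}, {"Speed": "Fast", "Price": "Expensive"}],
    [{"Speed": "Fast", "Price": "Cheap"}, {"Speed": "Slow", "Price": "Cheap"}],
]
print(constant_somewhere(design, ["Speed", "Price"]))
# {'Speed': True, 'Price': True} -- each factor is held constant somewhere
```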


There is another reason that you shouldn't vary too many factors across a choice set: Subjects get confused and fatigued when there are too many differences for them to evaluate the trade-offs. If the two choices are very different, the choice will tend to look like a choice between two very different things; we say it is like comparing apples and oranges -- they are each good or bad in their own way, and they don't really compare against each other.


Guideline: Never vary more than three or four factors at most across a choice set.
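An illustrative helper for this guideline, in the same hedged style as the checks above:

```python
# Count how many factors actually differ across a choice set and flag
# sets that would overwhelm the subject. The limit of 4 follows the
# guideline above.
def n_varying_factors(choice_set):
    return sum(1 for f in choice_set[0]
               if len({p[f] for p in choice_set}) > 1)

def too_many_differences(choice_set, limit=4):
    return n_varying_factors(choice_set) > limit
```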


There is another problem with ordinary designs. Consider the following choice set for a laptop:


Choice   Speed   Disk Size   Battery Life   Price
a        Fast    Big         Long           Cheap
b        Slow    Small       Short          Expensive


This choice is going to be easy for the subject. There are no trade-offs to make. The laptop experiment was built for trade-offs because all the factors have a naturally preferred level. Faster is always preferred to slower. Larger disk is always preferred to smaller. Longer battery life is always preferred to shorter. Cheaper price is always preferred to more expensive, other things being equal. This experiment has all polar factors. Thus, this choice set doesn't tell us anything we don't already know. This choice set is an insult to the subject. Yet traditional experimental designs will produce runs like this.


Guideline: Choice sets should mix the polarity of polar factors. No choice set should have all the polar factors set in the same direction, favoring one alternative on every factor.
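A hedged sketch of screening out these "no trade-off" questions. The PREFERRED map simply restates the naturally preferred levels from the text; the dominance logic is my illustration:

```python
# An alternative that is at least as good on every polar factor and strictly
# better on one dominates the other, and the question teaches us nothing.
PREFERRED = {"Speed": "Fast", "Disk Size": "Big",
             "Battery Life": "Long", "Price": "Cheap"}

def dominates(a, b):
    at_least_as_good = all(a[f] == b[f] or a[f] == PREFERRED[f] for f in a)
    strictly_better = any(a[f] == PREFERRED[f] != b[f] for f in a)
    return at_least_as_good and strictly_better

def has_dominated_alternative(choice_set):
    return any(dominates(a, b)
               for a in choice_set for b in choice_set if a is not b)

easy = [
    {"Speed": "Fast", "Disk Size": "Big", "Battery Life": "Long", "Price": "Cheap"},
    {"Speed": "Slow", "Disk Size": "Small", "Battery Life": "Short", "Price": "Expensive"},
]
print(has_dominated_alternative(easy))  # True -> insultingly easy, discard
```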


There are other issues with some surveys:



  • Some surveys ask too many questions, aiming for "full profile" detail that allows you to estimate a different model for each person. But people tire of long surveys; they quit partway through any survey that tries their patience, unless you actually pay them. About 15 questions is the most we can expect from volunteer surveys.

  • Some surveys don't vary enough factors. At an extreme, suppose that only one factor is varied across any choice set. In that case, only main effects are estimable; no interactions can be estimated. You can't evaluate trade-offs very well because the relative trade-off across factors is never tested.

  • If you balance the need for estimating trade-offs and interactions against the need to avoid asking too many questions, you realize that the survey needs to give different sets of questions to different people to accomplish both aims. Though this makes administering the survey more complex, it is not a real problem because the surveys are computer-generated (see the sketch after this list).
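As a small illustration of splitting questions across respondents, here is a hedged Python sketch; the block size of three is arbitrary:

```python
# Partition a design's choice sets into survey versions so each respondent
# answers only a few questions, while the pooled responses still cover the
# whole design.
def make_survey_versions(design, sets_per_respondent=3):
    return [design[i:i + sets_per_respondent]
            for i in range(0, len(design), sets_per_respondent)]

versions = make_survey_versions(list(range(12)))  # 12 choice sets stand-in
print([len(v) for v in versions])  # [3, 3, 3, 3] -> four survey versions
```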


All these rules sound pretty obvious in retrospect, right? The irony is that these considerations are not usually followed, especially the last few. Often market researchers use the same design of experiments (DOE) software for choice experiments that they use for industrial experiments, thinking that DOE is an abstract and general concept that is the same in every situation. It is not the same. Choice experiments are harder to design well.


The one good approach to the very specific needs of each choice experiment is to use the tools of optimal experimental design, adapted to choice experiments and to the particulars of each individual situation.


The technique of optimal design, in general, arranges factor settings across runs so that the most is learned from a given number of runs. In linear models, the optimal arrangement is invariant to the actual parameter values, so the situation is straightforward.


It turns out that choice models, which are fit with a specialized kind of logistic regression, are not linear in the parameters, so the optimal design depends on the true values of the parameters, which are unknown. So a range, or prior distribution, of the parameters is used to represent the values you need to consider. Optimizing the design for this is fairly difficult, involving integrating over the prior density to create the optimal design [Chaloner, K. and Verdinelli, I. (1995). Bayesian experimental design: a review. Statistical Science 10: 273-304]. But JMP was able to take what it had learned for nonlinear DOE and apply it to choice designs [Gotwalt, C., Jones, B. and Steinberg, D. (2009). Fast Computation of Designs Robust to Parameter Uncertainty for Nonlinear Settings. Accepted at Technometrics].
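To make the dependence on the unknown parameters concrete, here is a toy NumPy sketch of a Monte Carlo Bayesian D-criterion for a multinomial logit choice design. This is my own illustration under standard textbook formulas, not JMP's algorithm; the coded design matrices and the normal prior are made up:

```python
# For a multinomial logit model, each choice set with coded matrix X and
# choice probabilities p contributes X' (diag(p) - p p') X to the Fisher
# information, which depends on beta; the Bayesian D-criterion averages
# the log-determinant of the information over draws from a prior on beta.
import numpy as np

def mnl_information(design, beta):
    k = len(beta)
    info = np.zeros((k, k))
    for X in design:                    # X: (alternatives, parameters)
        u = X @ beta
        p = np.exp(u - u.max())
        p /= p.sum()                    # multinomial logit probabilities
        info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
    return info

def bayesian_d_criterion(design, prior_draws):
    # Higher is better; a singular information matrix gives -inf.
    return float(np.mean([np.linalg.slogdet(mnl_information(design, b))[1]
                          for b in prior_draws]))

# Four two-alternative choice sets with effects-coded (+/-1) factors.
design = [np.array([[ 1.,  1., -1., -1.], [-1., -1.,  1.,  1.]]),
          np.array([[ 1., -1.,  1., -1.], [-1.,  1., -1.,  1.]]),
          np.array([[ 1., -1., -1.,  1.], [-1.,  1.,  1., -1.]]),
          np.array([[ 1.,  1.,  1., -1.], [-1., -1., -1.,  1.]])]
prior_draws = np.random.default_rng(0).normal(0.0, 1.0, size=(200, 4))
print(bayesian_d_criterion(design, prior_draws))
```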


So a good experimental design for a choice experiment is different from that for other experiments, and optimal design techniques can handle these requirements best.


In my next post, we'll see how the laptop experiment was actually designed and run.
