


Aug 19, 2014

DOE Choice Design Versus Forced Ranking: Preventing Judges From Whining While Wining

Don Lifke, Research and Development Engineer, Sandia National Laboratories

Claire Syroid, Pharmacist Clinician, Walgreens Specialty Pharmacy

We compare the DOE Choice Design feature in JMP to a forced ranking methodology for determining preference among items that typically have only nominal characteristics, such as taste. Rather than using data from weapons projects, which can be sensitive and overly technical in nature, we used readily attainable data on a subject most people understand: wine. The intent of the study was to compare the two judging methodologies, not to determine the best-tasting wine. Nonetheless, the outcome provided a nice tip sheet for future wine purchases.

The topic was inspired by a question about forced ranking from Doug Montgomery (author of Design and Analysis of Experiments) at last year’s Discovery Summit. Ranking (sorting in order of preference) a large set of items can be difficult. On the other hand, it is fairly simple to perform pairwise comparisons, repeatedly deciding which of two items is better.

A panel of 18 seasoned wine enthusiasts was instructed to force rank 12 wines (ranking them from 1 to 12). They were also presented with a Choice Design, which asked them to compare wines in pairs; each judge only had to decide which of two wines tasted better in each of the eight pairs presented to them. This particular experiment used Oregon Pinot Noir wines, which of course included Chehalem, the winery featured on the cover of Montgomery’s book.

The results show surprisingly different outcomes when using DOE Choice Design versus forced ranking. It was also observed that judges clearly preferred completing the DOE Choice Design over the forced ranking. (There was significant whining while trying to complete the forced ranking.) If time permits, we will also explore data from a previous study that used a basic 1-10 rating scale.
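To illustrate how pairwise choices like those above can be turned back into an overall preference order, here is a minimal sketch of a Bradley-Terry-style aggregation using a simple MM iteration. (JMP's Choice platform fits a related discrete-choice model; this is not its implementation, just the underlying idea. The wine names other than Chehalem and the choice data are invented for illustration.)

```python
from collections import defaultdict

def bradley_terry(comparisons, n_iter=100):
    """Estimate preference strengths from pairwise choices.

    comparisons: list of (winner, loser) pairs.
    Returns a dict mapping item -> strength, normalized to sum to 1.
    """
    items = {i for pair in comparisons for i in pair}
    wins = defaultdict(int)   # total wins per item
    n = defaultdict(int)      # times each unordered pair was compared
    for winner, loser in comparisons:
        wins[winner] += 1
        n[frozenset((winner, loser))] += 1

    p = {i: 1.0 for i in items}  # initial equal strengths
    for _ in range(n_iter):
        new_p = {}
        for i in items:
            # MM update: p_i = W_i / sum_j n_ij / (p_i + p_j)
            denom = sum(
                n[frozenset((i, j))] / (p[i] + p[j])
                for j in items
                if j != i and n[frozenset((i, j))] > 0
            )
            new_p[i] = wins[i] / denom if denom > 0 else p[i]
        total = sum(new_p.values())
        p = {i: v / total for i, v in new_p.items()}
    return p

# Hypothetical choices from a few judges over three wines:
choices = [
    ("Chehalem", "Wine B"), ("Chehalem", "Wine C"),
    ("Wine B", "Wine C"), ("Chehalem", "Wine B"),
    ("Wine C", "Wine B"),
]
strengths = bradley_terry(choices)
ranking = sorted(strengths, key=strengths.get, reverse=True)
print(ranking)  # wines ordered from most to least preferred
```

This is part of why the choice format feels easier for judges: each question is a single binary decision, and the model does the work of assembling a full preference order afterward.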