Q&A with market research expert Walter R. Paczkowski
Last month, we featured consumer and market research expert and Founder of Data Analytics Corp. Walter R. Paczkowski on our Analytically Speaking webcast series. If you missed the live webcast, you can still view it on demand. Host Anne Milley took many audience questions, but was unable to get to all of them. So Walter graciously agreed to answer some of them in this Q&A.
Questions: (a) What is your approach to building hypotheses to be tested ahead of an analytics project? (b) Do you find that analytical work for B2B segments is much harder than for B2C segments because there can be so many factors in B2B that cannot be put into a model?
Answer: The responses to these first two questions are similar, so I'll answer them together. This is where the upfront qualitative work becomes an important part of the overall research design. Remember, there are two phases I advocate for most projects: qualitative followed by quantitative. The qualitative phase helps set the parameters for the quantitative phase. We generally don’t know all the parameters needed for the quantitative phase – key factors or attributes, levels for the factors, correct wording, important concepts, to mention just a few. The qualitative research – focus groups or one-on-one in-depth interviews with subject matter experts (SMEs) or key opinion leaders (KOLs) – helps identify them. This makes the quantitative phase more focused and powerful.
What does this have to do with hypotheses and B2B factors? Hypotheses are just parameters, no different from a list of factors or attributes to include in the quantitative phase. Discussions with consumers or SMEs or KOLs can help formulate hypotheses that marketing personnel may never have imagined.
The same holds for B2B modeling – in fact, for any modeling, whether B2B, B2C or B2B2C. If the list of factors is large, then seek help from SMEs and KOLs. They’ll tell you what is important and what can be ignored. Again, this is just upfront qualitative research.
Question: Do you think organizations have a balanced approach to creating value from both the found data and the more information-rich data to be gained from well-designed surveys and experiments?
Answer: I’m not sure about “balanced,” but the use of both types of data is definitely there. Since I do work across a wide range of industries, I see many practices, the best and worst, which I talked about with Anne. Many of the large organizations, the sophisticated ones I mentioned in the interview, use these two sources of data to answer their key business questions and understand their markets. These are the ones who follow the best practice of using the right tools – the tool being the type of data in this case.
Over the past few years, I’ve presented workshops on choice modeling, a great example of an experimental approach, and on working with Big Data, as I mentioned in the interview. Not only have they been well attended, but I’ve noticed that many attendees were from the same company – different divisions, but nonetheless the same company. So the use of both types is there – I have the data!
Question: When pricing is a factor in a choice experiment, how well does the optimal price indicated by the experiment correspond to what the actual best price should be in the field?
Answer: This is a great question. And hard to answer. First, the last part of the question asked about “what the actual best price should be in the field.” That is the whole purpose of the study – to find that price. I think what the question is really asking is whether or not the study replicates existing market prices. That can best be determined using the Profiler in JMP, or a simulator, by setting the base case to the current actual conditions. But the study’s predictions won’t match the market exactly, since market prices are driven by many other factors that the study can’t handle. Nonetheless, the study should come close. So look for the Profiler to help on this issue.
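To make the validation step concrete: outside of JMP's Profiler, the same base-case check can be sketched with a generic multinomial-logit simulator. The part-worth utilities and prices below are invented for illustration only – they do not come from any real study; the idea is simply to plug current market prices into the estimated model and compare predicted shares against observed ones.

```python
import numpy as np

def choice_shares(utilities):
    """Multinomial-logit choice shares from total utilities."""
    expu = np.exp(utilities - np.max(utilities))  # subtract max for numerical stability
    return expu / expu.sum()

# Hypothetical estimates a choice experiment might produce:
price_coef = -0.8                               # utility per dollar of price (assumed)
brand_intercepts = np.array([1.2, 0.9, 0.0])    # Brands A, B, C (assumed)

# Base case: set prices to the CURRENT actual market prices, as the answer suggests.
current_prices = np.array([4.99, 4.49, 3.99])

utilities = brand_intercepts + price_coef * current_prices
shares = choice_shares(utilities)
print(shares.round(3))  # predicted shares, to compare against observed market shares
```

If the predicted shares land reasonably close to the brands' actual market shares, the model is replicating current conditions well; large gaps signal factors the experiment didn't capture.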