
Beyond A/B: Case Study of Multivariate Test Design and Advanced Analytics for Webpage Optimization (2021-US-45MP-821)

Level: Intermediate

 

Steven Crist, Analytics Consultant, Wells Fargo

 

It is well known that optimization of the layout and content of webpages can be achieved through thoughtful pre-test design of experiment (DOE), post-test analysis, and identification and productionization of a winning variant webpage.  The present use case demonstrates the use of the JMP custom DOE platform to create a fractional factorial multivariate DOE for a financial services checking account webpage that effectively managed business constraints while providing the necessary data that led to a 7% increase in application volume as compared to the legacy webpage.  Additionally, leveraging the JMP partition model platform, an additional key insight was discovered: visitors who clicked on the ‘compare accounts’ link were 40% more likely to submit an application.  The ‘compare accounts’ insight was not the main inquiry of the original test but provided guidance for future testing to further optimize the webpage and resulted in an additional 4% lift.  The presented use case demonstrates the effectiveness of the testing continuum, in which one test leads to actionable insights that inform the next optimization test, and so on.

 

 

Auto-generated transcript...

 


Speaker

Transcript

Bill Worley Hi.
Steve C I'm Steve Crist. Thanks for joining me today as we go beyond A/B testing and look at a use case of multivariate testing and advanced analytics for web page optimization.
  The outline for today. We'll start with a high level overview of the entire use case then dive into the details. We'll take a look at a JMP demo
  to see how we evaluate a specific design of experiment.
  We'll also look at the results we got from the test and how we were able to leverage the JMP partition model to enhance those insights.
  From a high level overview, this use case consists of two tests. In test one, we looked at some components of the banner design and the body of the page layout.
  And we had a successful multivariate test that resulted in a 7% lift in applications. We were able to take that one step further
  to uncover an insight about some content that had been pushed down to the bottom of the page and turned out to be very important, and we were able to leverage that in test two. We were able to
  increase our performance on top of the first test winner by an additional 4%. So the use case highlights how the JMP DOE platform and its custom DOE capability enabled test one, and then the JMP partition model helped us extract
  additional insight that we were able to leverage into test two.
  So let's get into the details.
  When we look at our current page, this is for the Wells Fargo checking homepage.
  And our marketing partners and business partners have done a great job of getting some voice of customer feedback, so the main motivation for this test was around this Body Style A or Page Layout A
  versus B, where our customers had said that they wanted to see more products surface higher up in the experience. And our marketing and business partners said,
  while we're...while we're making this change, we also think that this image that we've had for a while, we could do better.
  We also think that this banner design, where we bring the content to the left, where people's eyes are more naturally drawn, and make the banner physically smaller, can surface this content that our customers and visitors said they're more interested in. So our partners came to us with an A/B test.
  But you can quickly see that there's a lot more going on here. We only have two of the eight possible combinations
  covered, and the conversation we had with our partners was that this design looked great, but we have some risk.
  And that risk is that some of these components may work well and some of them may detract from performance and cancel each other out. And so, if we only run an A/B test,
  we won't know which components do and don't work well. That's where we proposed a multivariate test, to which our partners said, that sounds great, let's do that. But we had some business constraints that we needed to manage.
  So what were those business constraints?
  They were,
  as I mentioned, the main motivation was around this body layout and bringing the new content to the page.
  And so, when we look at the overall design space, our partners had no interest and no appetite, really, to test any of these other variants that would have the old body page layout. That was
  key factor number one.
  Secondly, there wasn't much appetite for this block, which represents a page that had the old image with the new layout, and again there wasn't much interest
  in doing that. And so, in this particular case, our partners were very prescriptive about the tests that they were comfortable running.
  And so the natural question is, does this test work as a holistic multivariate test design? And that's where the JMP custom DOE platform comes in. So let me switch over and we'll take a look at how we did that.
  So we're going to go to DOE, and we looked at a custom design.
  We have three factors. We have our image
  with A and B versions, we have another two-level categorical variable of banner with A and B versions,
  and lastly, we have our body page layout with an A and a B version.
  And
  when we click continue,
  we're looking good so far. And what we notice here is that we can cover this design space in as little as four cells, which works out well
  for us in this situation, but we know from experience, and people who are familiar with this would know, that the
  JMP optimization engine will never give you this design as a four cell fractional factorial. This is not the most optimal design
  for four cells, but it was the design that we were trying to work with to manage our business constraints. And so, in order to force this particular design, you can use the disallowed combinations
  to essentially specify every single cell that we don't want to run, so that we're left with what we do want to run. Within JMP, there isn't a way to specify an exact test design, but you can leverage the disallowed combinations script to force it.
  And this is something that we do very often,
  because in our...in our world doing web page testing, we get a lot of input from our partners. And this is a fairly typical use case for us, and so this is a technique that we use often.
  And so, because we're dealing with categorical variables, just to explain the script here, we have to code them, so a value of A gets a 1 and a value of B gets a 2. So when we look at this particular cell that we don't want to test...
  Let me bring that back up.
  When we look at this particular cell that we don't want to run, this is image A with banner B and body style A, so ABA, or 1-2-1.
  And this cell here is image B with banner A and body style A, or 2-1-1. And so in that manner, you can specify all four of these cells that we don't want to run. So then we click make design.
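For readers who want to see the same bookkeeping outside JMP, here is a minimal Python sketch of the idea: enumerate the full 2^3 design space with A coded as 1 and B as 2, as in the talk, and filter out the four disallowed cells. The factor order (image, banner, body) mirrors the talk; this is illustrative only and is not the JMP disallowed-combinations script itself, which is entered as a Boolean expression in the Custom Design dialog.

```python
# Sketch (outside JMP) of the disallowed-combinations bookkeeping:
# enumerate the full 2^3 design space, code A -> 1 and B -> 2, and drop
# the four cells we are not willing to run.
from itertools import product

full_factorial = list(product([1, 2], repeat=3))  # 1 = level A, 2 = level B

# Cells we do NOT want to run: ABA (1,2,1), ABB (1,2,2), BAA (2,1,1), BBA (2,2,1)
disallowed = {(1, 2, 1), (1, 2, 2), (2, 1, 1), (2, 2, 1)}

design = [cell for cell in full_factorial if cell not in disallowed]
for run, cell in enumerate(design, start=1):
    labels = "".join("A" if level == 1 else "B" for level in cell)
    print(f"Run {run}: {labels} {cell}")
# Runs kept: AAA (control), AAB (variant 2), BAB (variant 1), BBB (proposed variant)
```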
  Takes just a second to render.
  The fact that we don't immediately get an error saying it does not converge is a good sign. And so, in fact,
  JMP says yes, this design will work. And so here's our control page, represented in run number one here; that is an AAA, image A with banner A and body style A. Here in run three is our
  variant that our partners came to us with in the original A/B proposal that has the image B with banner B and body B.
  And then also we have variants one and two represented here as well, with BAB and AAB, and so we get back what we expected to get back. So I'll comment here for just a moment about the evaluation. One of the things that I typically look at is the color map on correlations
  and the design diagnostics.
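As a rough stand-in for that evaluation outside JMP, one can build the main-effects model matrix for the four retained runs and look at the pairwise correlations between its columns; in a full factorial these would all be zero, and the nonzero values here are exactly what the color map on correlations visualizes. This is a hedged sketch, with effect coding (A as -1, B as +1) assumed:

```python
# Hedged stand-in for the color map on correlations: effect-coded
# main-effects model matrix for the four retained runs, then pairwise
# column correlations. Nonzero off-diagonal values show the aliasing
# this constrained design accepts.
import numpy as np

X = np.array([
    [-1, -1, -1],  # AAA (control)
    [-1, -1, +1],  # AAB (variant 2)
    [+1, -1, +1],  # BAB (variant 1)
    [+1, +1, +1],  # BBB (proposed variant)
], dtype=float)    # columns: Image, Banner, Body

corr = np.corrcoef(X, rowvar=False)
print(np.round(corr, 2))  # off-diagonal entries are nonzero, unlike a full factorial
```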
  And for those who are familiar with these, you'll notice that
  this is, as I mentioned, not the most optimal design, but it does work and that's where the science and art of this
  comes together, that we have to make it work with our partners. And so to the extent that it does work, that was the most important factor for us.
  Also, knowing that we have a fallback, that we have the actual data, we could do some piecewise comparisons if we want to, but there's a lot of efficiency gained
  by running a multivariate, from which we can infer some of these results. I'll make a comment here too, not the main point of this conversation today,
  but a well constructed and properly analyzed multivariate does not take any longer than an A/B test, which is very counterintuitive at face value. So I wanted to make that statement up front. If you're interested, please ask me during the Q&A or reach out to me offline.
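One way to see the intuition behind that claim: in a balanced fractional factorial, every visitor contributes to estimating every main effect, so each factor is still a half-versus-half contrast of all traffic, which is exactly the per-arm sample size a plain A/B test gives a single factor. The sketch below uses a textbook balanced half fraction and made-up traffic numbers for clarity; the constrained design in this talk is not perfectly balanced, so treat this as intuition rather than a guarantee.

```python
# Why a balanced multivariate needs no more traffic than an A/B test:
# split visitors evenly across four cells; each factor's main effect still
# contrasts half of ALL visitors against the other half. Numbers are
# illustrative only.
visitors = 100_000
cells = ["AAA", "ABB", "BAB", "BBA"]  # a balanced 2^(3-1) half fraction
per_cell = visitors // len(cells)     # 25,000 visitors per cell

for i, name in enumerate(["Image", "Banner", "Body"]):
    n_A = sum(per_cell for c in cells if c[i] == "A")
    n_B = visitors - n_A
    print(f"{name}: {n_A} at level A vs {n_B} at level B")
    # -> 50,000 vs 50,000 for every factor, the same per-arm n as an A/B test
```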
  So, as we come back over to the presentation.
  So we just looked at this design and were able to evaluate it; we now know that this works as a holistic fractional factorial multivariate. So what did we learn?
  We learned, as we looked in the overview, that
  this variant two, we had a 7% lift. That was our highest performing one in this design space, and because we had a holistic multivariate test, we were also able to leverage some regression techniques to do a look-back analysis of
  what would have happened if we had run some of these other pages during this test. So not only did we have the actual values that we could
  validate and verify with regression analysis, with good agreement, but because of that, we can also calculate how these untested pages would have performed. And that's one of the
  powerful things about multivariate testing, especially if you're in a situation like we are, where you need to develop and create all these different pages. We can leverage this technique
  to save ourselves quite a bit of work up front, yet still be able to extract the learnings. It also helps isolate
  what's driving the performance. And you can see here that if we had done the original A/B test,
  we would have seen that
  our variant suppressed performance. And if we hadn't run a multivariate, the interpretation would have been that this new concept and design that our customers
  said they would like, they didn't really like after all. But in actuality, you can see that
  it's all of the banner B versions that suppressed performance. That new banner style didn't particularly resonate well with our visitors
  and our customers.
  And so we were able to extract them. Not only did we have a better winner going forward, but we were able to easily understand that this design
  that our customers said they wanted is, in fact, when it gets down to it, what performs well.
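For concreteness, here is a hedged sketch of that look-back technique: fit a main-effects model to the four tested cells and use it to predict the four cells that were never run. The conversion rates below are invented for illustration, chosen only so the pattern echoes the talk's story; nothing here is Wells Fargo data.

```python
# Hedged sketch of the look-back analysis: fit a main-effects model to the
# four tested cells, then predict the four cells that were never run.
# Rates are invented. With four runs and four parameters the fit is
# saturated (exact); real visitor-level data would also give error estimates.
import numpy as np
from itertools import product

def encode(cell):
    # Effect coding: A -> -1, B -> +1, plus an intercept column.
    return [1.0] + [-1.0 if ch == "A" else 1.0 for ch in cell]

tested = ["AAA", "AAB", "BAB", "BBB"]
rates = [0.030, 0.032, 0.031, 0.027]  # illustrative conversion rates only

X = np.array([encode(c) for c in tested])
beta, *_ = np.linalg.lstsq(X, np.array(rates), rcond=None)

for cell in ("".join(p) for p in product("AB", repeat=3)):
    status = "tested" if cell in tested else "inferred"
    pred = float(np.array(encode(cell)) @ beta)
    print(f"{cell}: predicted rate {pred:.4f} ({status})")
```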
  So we could have just as easily stopped there, and said, great, we had a successful test, thank you, and we're done. But are we done? We have a lot of data. Is there anything else that the data can tell us?
  And in our reporting, one of the things that we noticed was that we got a lot less engagement and a lot fewer clicks
  on this compare accounts and the find the right account content, which takes you to a guided experience. In the control experience,
  that content was much, much higher up, right under the banner, and so, not surprisingly at all from a click perspective,
  we saw a lot fewer clicks on it. That's pretty typical of any web page: the further down the page, the more people have to scroll, the less they click. That's pretty intuitive, but we asked ourselves the question, because it was such a stark contrast.
  Is that a problem? We think these experiences and this content are pretty good, and we were kind of surprised that engagement dropped that much. We expected it to drop a little bit,
  but it dropped a lot more than we were expecting. Is that a problem? And so to help us understand that aspect of it, we used the JMP partition model, and in particular, the decision tree algorithm.
  So we took variant two and we analyzed application submissions to try to figure out which content, when it was or wasn't clicked,
  showed separation with a higher application submission rate.
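As an analogue outside JMP, a shallow decision tree over click indicators gives the same kind of partition. The sketch below uses synthetic data whose propensities merely echo the ordering described next (apply clickers highest, compare clickers roughly 40% above other clickers, non-clickers lowest); the real analysis was run in JMP's partition platform on actual visitor data.

```python
# Analogous partition analysis outside JMP: a decision tree predicting
# application submission from click indicators. All data is synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 20_000
clicked_any = rng.random(n) < 0.30
clicked_apply = clicked_any & (rng.random(n) < 0.25)
clicked_compare = clicked_any & ~clicked_apply & (rng.random(n) < 0.20)

# Synthetic submit propensities (illustrative, not Wells Fargo data):
p = np.where(clicked_apply, 0.35,
    np.where(clicked_compare, 0.07,   # 0.07 / 0.05 = 1.4, i.e. ~40% higher
    np.where(clicked_any, 0.05, 0.002)))
submitted = rng.random(n) < p

X = np.column_stack([clicked_any, clicked_apply, clicked_compare]).astype(int)
tree = DecisionTreeClassifier(max_depth=3).fit(X, submitted)
print(export_text(tree, feature_names=["clicked_any", "clicked_apply", "clicked_compare"]))
```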
  And so at the top of the tree, the most important thing, again very intuitive, not at all surprising, is that people who clicked
  on some content on the page versus people who visited but didn't click on anything, the people who clicked had a 20x higher app submit rate.
  Very straightforward. They're motivated. They're engaging with your page. So they clicked, but what did they click on?
  The next node of the tree is the apply now and open now content, so people who clicked open now or apply
  directly from this page versus people who clicked on something else but didn't click apply, those apply clickers had a 7x higher app submit rate. Again, very straightforward, very intuitive.
  But what we were surprised about was if they didn't click apply now, there are a lot of other paths on this page that you can
  click through to the product details page and apply from there, or you can go into the compare experience, for example, and apply from there. You can go deeper in the experience and still apply. And so the second most important content on this page, other than open now, is the compare all accounts
  call to action (CTA). And we had just buried it down at the bottom of the page. In fact, the compare all accounts CTA is just as important as the content in the banner, which was very surprising to us.
  So we're able to have that conversation with our partners to then leverage this insight that when people click compare, they're 40% more likely to submit an application. We need to test to see if we can put that back up higher in the experience and what that does for us in terms of performance.
  And what we found, you know, so here in control, we had the compare and the product selector content, both at the bottom.
  And my personal proposal was to just put them both back up at the top, but luckily we'd educated our marketing and business partners enough that they said, well, let's run a multivariate to find out: do we need them both at the top? Is there a preference,
  in terms of our visitors, in terms of performance and site conversion? Should we just have one? Is some of this content distracting?
  We found out that it was. Our best performing one, as we mentioned at the top, was this variant that had the compare at the top
  and the selector down at the bottom. The selector, in particular, fewer people were
  interested in, and so, by having it at the top, it was a bit distracting. And so we were able to really clean up and optimize our experience by keeping just the compare back up at the top.
  So, going back to the to the high level here, you know, this is...we had two tests and we were able to use the JMP platform in the first test
  to help us navigate the multivariate testing discussion.
  And then use the JMP partition model to uncover that insight and feed it forward. And so, between those two tests, in a fairly short period of time, we were able to increase our site applications by 11% in total.
  And so, in conclusion, multivariate testing,
  in general, is a very effective method to isolate and understand what's going on with your test. You know, as most people know,
  once you start changing too many things, you lose some level of insight there.
  And I'll also say that, you know, in financial services testing, one of the key differences here is that
  people may be asking themselves, why didn't you just do a bunch of sequential A/B tests? And part of the answer lies in being in financial services,
  and I think other industries have this as well, that every change to a page needs to go through a level of legal, risk, and compliance review. And so
  for us, the multivariate method is very effective, because we can go through that process once, whereas sequential A/B testing ends up being very cumbersome, and we can
  move through testing much more quickly with this method. And as it relates to JMP, the DOE platform is really best in class.
  It's so visual and it's so impactful to help navigate those conversations with your partners, because,
  you know, as an analyst, this is something I do on a daily basis, but it's not something that our partners, marketing and business partners, really think about very often.
  And so the JMP platform is instrumental in helping us navigate that conversation and articulate why this is the best path forward.
  And again, the flexibility of the disallowed combinations script gives you the ability to evaluate specific designs and navigate and manage those business constraints.
  And then, taking all of that one step further, the partition model and the decision tree analysis help extract even more insight.
  One of the things we always strive for is to be on this continuum, where we run a test, we hopefully have some impactful results and learn something,
  but we also learn something that we didn't previously appreciate; really, we're looking to get that insight to know what to test next.
  So that concludes the presentation. I want to say thank you to all my colleagues, testing colleagues, at Wells Fargo for all their leadership and collaboration
  spanning the range of project management to web page development to our quality assurance team members. So thank you all, and a special thanks to Rudy for his
  friendship and being a longtime colleague for his guidance, expertise, and tenacity, and always helping the team strive to do better. Thank you for attending today.
Comments

Very interesting use cases of various Consumer study tools available in JMP

smcst53

Thank you to all those who attended and to Bill and Funding for hosting!  Please leave questions or comments here, or reach out via LinkedIn or email me at Steven.Crist@wellsfargo.com.

 

I’m loving it  
