This presentation demonstrates how JMP was instrumental in creating significant improvements within a call center that was facing challenges in understanding customer issues and optimizing service delivery. By using JMP's intuitive tools, including variability analysis, ANOVA, and Graph Builder, the team was able to create call categories and subcategories to bring clarity to the customer experience.

Through straightforward text exploration and descriptive analysis, we uncovered key patterns in customer inquiries without resorting to complex statistical modeling. The focus was on identifying and targeting the largest sources of variability to implement changes in agent training and knowledge resources.

This presentation gives attendees practical insights into applying JMP for process optimization in customer service, demonstrating how to effectively visualize and communicate data-driven findings to improve key performance indicators. We also share practical tips for using JMP's text analytics capabilities to glean insights from customer interactions. 

 

 

Hi, everyone. My name is Sara Handa. I am a Senior Manager of Process Improvement at Asurion, and I'm going to be talking to you today about how I used JMP to go from call center complexity to data-driven clarity.

We've all probably called into a call center before. Sometimes it's smooth. You get your issue resolved quickly, and you're on your way. Other times, not so much. You wait, repeat yourself, get transferred, and wonder why it's taking so long. At Asurion, our job is to make sure every customer has a good experience. But we also have to resolve issues efficiently.

That balance is tough because every customer issue is different. Some calls are simple, others are complex. We asked ourselves, what's causing that complexity? Why do some calls take 2 minutes and others an hour and 20? That question launched our journey into the data.

Today, I'll walk you through how we used JMP to go from messy, unpredictable call center data to clarity, and how you can apply the same thinking to your own operations. Let me pull up a very quick presentation that will just orient everybody here to what call centers are and what we are going to cover in the presentation today.

Here's our roadmap for our time together today. We'll start inside the call center, dig into the data, uncover what's driving variability, and then walk through how we turned those insights into real operational impact. Let's begin with the call center itself, because it's more than just a place where phones ring. It's the front line of customer experience.

At Asurion, our call centers handle millions of customer interactions every year. From activating new devices to troubleshooting complex tech issues. Each call is a moment of truth. It's where we either solve the problem quickly and build trust, or we miss the mark and risk losing a customer.

But here's the challenge. No two calls are the same. Some are straightforward. Others are layered with technical issues, emotional frustration, or unclear expectations. That variability makes call centers incredibly complex to manage. That's where data comes in. Even small improvements in how we handle calls can lead to big gains in efficiency, better workforce planning, and stronger customer outcomes.

We asked, how do we measure success in this environment? Our key metric is Call Resolution Time, or CRT, as I'll refer to it throughout this talk. It measures how long it takes to solve the customer's issue, but it's not just about speed. It's about solving the problem right the first time. When CRT varies wildly, it's a signal that something deeper is going on.

This is where JMP became essential. We used it to dig into the data, understand what drives CRT variability, and uncover opportunities to improve both customer experience and operational performance. Before we dive into the analysis, I want to just give you a very simple example of a call we might receive.

Imagine trying to activate your new phone after an upgrade. I'm sure many of us have a story about something going wrong and spending a full day trying to get it working. That scenario plays out every day in our call center. It's a good reminder that even simple calls can become complex. That's why understanding our CRT variable is so important.

If we look at this call example, our customer is calling in because they got a new phone. They're trying to activate it, but they're running into an issue where it's telling them the SIM is not detected. Our call center expert is checking the SIM card placement, asking the customer to remove it and reinsert it.

Here's the first thing that goes wrong on this call: the customer says they've done that, but they're still receiving the same error. No problem. Our call center expert then runs a quick compatibility check, confirms the model of the device with the customer, and then pushes the resolution for that error, a network refresh, which ends up resolving the issue. Our customer was happy the device was activated, and both expert and customer could go on with their day.

This is a very straightforward call. It would probably take a short amount of time, but again, our calls range from 2 minutes to an hour and a half. Things can go wrong. We can run into one issue on a call, or we can run into 10 issues on a call. Again, that introduces a lot of variability into our ecosystem.

In this talk today, like I said, we are going to focus on CRT variability, but we know there are other success metrics that affect call center operations. Namely, sales, customer experience, employee experience, and resolution rate. CRT variability is not the only factor we analyzed when we ran this analysis at our company. But to keep things efficient today, that's what we'll be zeroing in on.

Enough PowerPoint for now, I think. Let's jump into JMP. The first thing I want to share is that Asurion is not a statistical analysis company, and I am by no means a statistician. We don't use JMP to drive a statistical process. We really use it as a visual tool to understand what's going on in our call centers and to be more targeted with our improvements. The first step in any analysis I run is to simply look at the data.

First, for me, I run a distribution. Here we're looking at call resolution time and asking, what's the distribution? Let me show you quickly how I set up this data. Again, I just opened the Distribution platform, pulled CRT into the Y column, and ran the distribution. This is the output. It gives us a histogram, a box plot, and then summary statistics over to the right.

As you can see, the histogram has a very long right tail. That means some calls are taking much longer than most, which suggests the process isn't in control. We can also confirm that by looking at the median at 1,161 and the mean at 1,234. The median being about 75 seconds less than the mean again points to a right skew in the data set.

Another key metric is the standard deviation. It's over 600 seconds, almost half the mean, which signals a very wide spread in performance. You can see that in the histogram, you can see it in the box plot, and you can confirm it with the standard deviation. From this distribution, what do we know? We've confirmed CRT is not consistent and that there's large variation. From here, the key question becomes: what's driving this variation?
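If you wanted to run the same sanity check outside of JMP, a minimal Python sketch could reproduce those summary statistics. This is only an illustration, not part of our actual workflow; the file name calls.csv and the CRT column are hypothetical stand-ins for the call data.

```python
import pandas as pd

# Hypothetical export of the call data; the file and column names are placeholders.
calls = pd.read_csv("calls.csv")   # expects a numeric CRT column, in seconds
crt = calls["CRT"]

print(f"mean:   {crt.mean():.0f} s")
print(f"median: {crt.median():.0f} s")
print(f"std:    {crt.std():.0f} s")
print(f"skew:   {crt.skew():.2f}")  # a positive value means a long right tail
```

A mean sitting well above the median and a clearly positive skew coefficient tell the same story the histogram and box plot do.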

To dig into that, I use the Fit Model platform. Let me again show you how I set up this data. Within the Fit Model platform, CRT is our dependent variable. Then we selected three independent variables, call category, tenure, and site, to understand the statistical impact of each of them on CRT. We clicked Run here, and this is the output of that Fit Model.

The Effect Summary at the top shows the LogWorth values. Here in the middle of the Effect Summary, the LogWorth values are a transformed measure of statistical significance; LogWorth is the negative base-10 logarithm of the p-value. The higher the LogWorth value, the stronger the evidence against the null hypothesis.

In this case, the Effect Summary shows that call category has the strongest influence, with a LogWorth of 48, followed by tenure and then site. From this model we can now say that, of these three independent variables, the type of call, or call category, matters most to CRT variability. Knowing that, we now want to go deeper and ask, is there a specific category that's driving variability?
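For anyone who wants to reproduce this kind of effect screening outside of JMP, here is a rough sketch in Python using statsmodels. It is not our actual workflow, and the column names (CRT, CallCategory, Tenure, Site) are assumptions; the point is simply that the LogWorth JMP reports is the negative log10 of each effect's p-value.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

calls = pd.read_csv("calls.csv")  # hypothetical columns: CRT, CallCategory, Tenure, Site

# Least-squares model with three categorical effects, loosely analogous to Fit Model.
model = smf.ols("CRT ~ C(CallCategory) + C(Tenure) + C(Site)", data=calls).fit()

# Per-effect F tests (Type II here; JMP's effect tests also need appropriate contrasts).
anova = sm.stats.anova_lm(model, typ=2)
anova["LogWorth"] = -np.log10(anova["PR(>F)"])  # LogWorth = -log10(p-value)
print(anova.sort_values("LogWorth", ascending=False))
```

The effect with the largest LogWorth shows the strongest evidence of an association with CRT, which is exactly how we read JMP's Effect Summary.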

To answer that question, we use the ANOVA platform. This measures the differences in means between categories. Just to orient us on how we ran this, we pulled CRT into the response variable and call category as the X factor, and then clicked OK. Again, this is the output of that ANOVA model.

Looking at this graph at the top, we can see clear differences between call categories in terms of both mean and standard deviation, which tells us that not all call types behave the same. We can confirm that when we look at the means and standard deviations chart here at the bottom. We can see the mean for each call category and the significant differences shown here. We can also see the large variation in standard deviation, represented by the spread of the data points up in the graph.
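As a rough equivalent outside of JMP, a one-way ANOVA of CRT across call categories takes only a few lines in Python; again, the data file and column names here are illustrative assumptions, not the real data set.

```python
import pandas as pd
from scipy import stats

calls = pd.read_csv("calls.csv")  # hypothetical columns: CRT, CallCategory

# One-way ANOVA: do the mean CRTs differ across call categories?
groups = [g["CRT"].to_numpy() for _, g in calls.groupby("CallCategory")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.1f}, p = {p_value:.3g}")

# Per-category mean and spread, echoing JMP's means and standard deviations report.
print(calls.groupby("CallCategory")["CRT"].agg(["count", "mean", "std"]))
```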

Device activation stands out. It has the highest average CRT and the widest spread. The calls are not only longer, but they're more inconsistent when we compare them to the rest of our call categories. We've identified device activation as a category to potentially target. This is where most of the long and variable CRTs are happening. But to confirm this, we ran a Tukey HSD test.

To do that, you would select the red drop-down arrow, go to Compare Means, and check All Pairs, Tukey HSD. When I ran that, this portion of the analysis appeared: the means comparison, with a couple of different components. This test compares all possible pairs of group means to determine if they are significantly different from each other.

Looking at the HSD threshold matrix, the positive numbers are statistically significant. If we look at this matrix and the comparisons between categories, we see Device Activation is the only call category where all the numbers are positive across the board, meaning it is statistically significantly different from every other call category. This confirms it's not just slightly higher; it's consistently and significantly different.
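For readers working outside of JMP, statsmodels offers a comparable all-pairs Tukey HSD comparison. This is an illustrative sketch with the same hypothetical column names as before, not the analysis we actually ran.

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

calls = pd.read_csv("calls.csv")  # hypothetical columns: CRT, CallCategory

# Compare every pair of category means while controlling the family-wise error rate,
# similar in spirit to JMP's All Pairs, Tukey HSD report.
tukey = pairwise_tukeyhsd(endog=calls["CRT"], groups=calls["CallCategory"], alpha=0.05)
print(tukey.summary())
```

Pairs flagged as significant (reject = True) are the analogue of the positive entries in JMP's HSD threshold matrix.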

With this knowledge, now we ask: is this a recent issue, or has it been consistent over time? To answer that, we turned to a control chart. To set up this data, we pulled CRT into our Y variable, date into the subgroup, and call category into the phase. The result of that is our control chart. Each point on this chart represents CRT performance for calls in that category, shown across the top, on that date, shown across the bottom. The green line represents the mean for that category, and the red lines are the control limits. You can see the same data in the summary over to the side.

Notice how Device Activation's points are spread much wider than the other categories', in both the mean and the range. This tells us the problem isn't a one-time spike. The high variability is consistently inconsistent over time. If we want to improve CRT, this is where we should focus first.
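If you wanted to approximate this phased view outside of JMP, you could compute daily subgroup means and simple plus-or-minus three-sigma limits per category. The sketch below uses the spread of the daily means as its sigma estimate, which is a simplification of how JMP estimates control limits, and the file and column names are assumptions.

```python
import pandas as pd

calls = pd.read_csv("calls.csv")  # hypothetical columns: CRT, CallCategory, Date

for category, phase in calls.groupby("CallCategory"):
    daily = phase.groupby("Date")["CRT"].mean()   # one subgroup mean per day
    center = daily.mean()
    sigma = daily.std()                           # spread of the daily means (simplified)
    ucl, lcl = center + 3 * sigma, center - 3 * sigma
    outside = ((daily > ucl) | (daily < lcl)).sum()
    print(f"{category}: center={center:.0f}s, LCL={lcl:.0f}, UCL={ucl:.0f}, "
          f"days outside limits={outside}")
```

A category whose limits sit far apart and whose daily points routinely wander is exactly the "consistently inconsistent" pattern we saw for Device Activation.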

Even targeting just the Device Activation call category, that is still a large number of calls, and there is still a lot of variability within that category. We knew we needed more precision to know which improvement to target to drive value.

To do this, we ran a variability gauge chart. This helps us understand not just average performance within device activation, but the spread and consistency of CRT outcomes across site and tenure, our two remaining independent variables. Let me show you what this looks like to set up.

Within the variability gauge platform, CRT is our Y response variable, and we grouped our X values: tenure and site. Again, this is the output, and what we found was interesting. If we look at the variability summary here, it shows the mean and the standard deviation for all combinations of the independent variables, as well as for each one individually.
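Outside of JMP, that variability summary is essentially a grouped aggregation. The sketch below is illustrative only; the Device Activation filter and the Tenure and Site column names are assumptions about how the data might be laid out.

```python
import pandas as pd

calls = pd.read_csv("calls.csv")  # hypothetical columns: CRT, CallCategory, Tenure, Site

activation = calls[calls["CallCategory"] == "Device Activation"]

# Mean and spread of CRT for every Tenure x Site combination,
# echoing the variability summary in the gauge chart.
summary = (activation.groupby(["Tenure", "Site"])["CRT"]
           .agg(["count", "mean", "std"])
           .sort_values("std", ascending=False))
print(summary)
```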

What we found was that new hires have a higher CRT than tenured experts. That's to be expected. They are new. They don't have as much knowledge. They haven't practiced this process as much. We were expecting their mean to be higher.

What was interesting to us, if we look again, is that their standard deviation is lower than that of our tenured population. This suggests that new hires take longer on average, but their performance is more consistent. At this point, the key question shifted for us; it was no longer something we could answer purely through data. The variability gauge gave us valuable insight, but it raised deeper operational questions.

To name a few: What's happening on the ground? How are tenured experts trained? What informal knowledge do they rely on? What does the actual activation process look like in practice? To answer those, we needed to move beyond JMP and into the field. We held a Kaizen event.

This event brought together frontline experts, process owners, and cross-functional leaders to map out the actual activation experience. What we found was eye-opening. As you can see in this picture here at the top of the page, we were mapping out the current state activation process with sticky notes and noting any pain points, any deviations from that process. As you can see, there is a lot there on the page.

What we found was that experts were executing different activation processes on nearly every call. Tool usage was inconsistent. Many relied heavily on tribal knowledge rather than standardized workflows. Even more striking, we found experts' understanding of the activation process didn't match customer expectations, and that misalignment was a key source of friction.

Through the Kaizen, we uncovered a clear opportunity. If we were able to provide a unified vision for activation, we could impact multiple KPIs: customer experience, resolution time, sales, and CRT variability. This was a huge turning point, where we moved from analysis to action.

Coming out of the Kaizen, we had clarity on the opportunity. We needed to build a guided flow for experts to allow them to walk through a consistent, optimal path for device activation. We built that guided flow based on the success criteria uncovered through our JMP analysis and through our frontline agents' insights.

The flow acts like a decision tree. It's streamlined, it's purposeful, it leverages existing customer data, prompts the right questions early to set expectations, and includes error handling when standard steps don't work. The goal was to reduce variability, align the experience and help every expert deliver a successful activation every time.

At this point I want to remind us where we started with the device activation distribution. Let me go back and open our original distribution, where we started our conversation together. Remember, we had a long right skew: our median was around 1,161, which was lower than our mean, and our standard deviation was around 600 seconds.

Let's now use a local data filter to filter call category specifically to device activation and see how our original performance looked. Here you can see this graph changes pretty significantly. There's no clear shape, the median is still lower than our mean, sitting at 1,570 versus 1,620, again indicating that the data is right skewed, and our standard deviation for device activation jumped to around 1,000 seconds, indicating a highly inconsistent process.

After we implemented the guided flow, let me show you what our distribution shifted to. As you can see, the mean dropped very close to our median, indicating a much more standardized process post-change. We also saw our standard deviation change from close to 1,000 seconds to 532, a huge shift in performance, creating much less CRT variability and much more standardization within our process.
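If you want a formal check that the spread itself shrank, and not just the averages, one option outside of JMP is a variance-comparison test such as Levene's. This is a sketch under the assumption of a hypothetical Period column that labels device activation calls as pre- or post-change.

```python
import pandas as pd
from scipy import stats

# Hypothetical export of device activation calls with a Period label ("pre" / "post").
calls = pd.read_csv("activation_calls.csv")

pre = calls.loc[calls["Period"] == "pre", "CRT"]
post = calls.loc[calls["Period"] == "post", "CRT"]

print(f"pre:  mean={pre.mean():.0f}s, std={pre.std():.0f}s")
print(f"post: mean={post.mean():.0f}s, std={post.std():.0f}s")

# Levene's test (median-centered) is a robust check that the CRT variances differ.
stat, p = stats.levene(pre, post, center="median")
print(f"Levene W = {stat:.1f}, p = {p:.3g}")
```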

With these results, what did it lead to? Lower CRT meant we could handle more calls without increasing staffing. It's a direct efficiency gain. Just as important, the guided flow improved the customer experience. There was less frustration, more satisfaction, and stronger loyalty.

Finally, by encouraging tool usage over memory-based troubleshooting, we saw real improvement in consistency. The CRT distribution reflects that: a tighter spread, fewer outliers, and a more predictable experience for both customers and experts.

To wrap up, what started as a question about call complexity led us through a journey of data exploration, operational insight, and real-world change. By combining JMP analysis with frontline collaboration, we didn't just reduce CRT; we improved consistency, customer experience, and team alignment.

This approach is something you can take back to your own organization. Start with the data, but don't stop there. Let it guide you to the conversations, the processes, and the people who can drive real impact. Thank you all for your time today.

Presented at Discovery Summit 2025

Presenter: Sara Handa, Asurion

Skill level: Beginner

Published on 07-09-2025 08:59 AM by Community Manager | Updated on 10-28-2025 11:41 AM

Start: Thu, Oct 23, 2025 12:30 PM EDT
End: Thu, Oct 23, 2025 01:15 PM EDT
Location: Trinity B