Image Classification Example using JMP Pro Deep Learning Add-in (2025-US-PO-2460)

At consumer product companies, it is important to develop stability forecasting models that predict product shelf life or long-term stability failure risk at the early stages of product development. Time-lapse photography (TLP) has been established as a principal tool for measuring the physical stability of liquid consumer products.

An AI/deep learning model had been previously developed using Python to predict physical stability failure modes from TLP images. With the availability of the new Torch Deep Learning Add-In for JMP Pro, we can quickly develop an AI/deep learning model to detect physical stability failures from images without any programming. In this presentation, we share the application of the add-in to an image classification problem: detecting physical stability failures of formulated consumer products.


My name is Fangyi Luo. I'm a statistician and also a data scientist at Procter and Gamble. Today, I will be presenting a poster with Russ Wolfinger from JMP. I've been with P&G for about 28 years, and, currently, I'm focusing on accelerating product and packaging innovation using statistics and data science methods. Russ, do you want to briefly introduce yourself?

Yes. Thank you, Fangyi. Excited to work with you on this project. As you know, AI and machine learning are a really hot topic, and we love practical applications like this, which I think are really making a difference now. The ability to use the new Torch Add-in for image classification, I think, is very powerful and practical. This will be a really nice story we've got today, sharing how you've been able to make progress with it. I'm personally really excited to see it go. This project's been several years in the making, and it's really starting to gain some traction now.

Thank you, Russ. Today, we're going to share an image classification problem solved using the new JMP Torch Deep Learning Add-in. First, I'm going to talk about the background of the problem. At Procter and Gamble, we make lots of consumer products, and some of them are liquid formulated products, like shampoo, conditioner, liquid detergent, and hand dish soap.

Before we launch a product to the market, we have to do a lot of testing on the prototype products. We use a time-lapse photography technique to capture images of the prototype products placed at different temperatures over time. On the right here, you can see some of the stability failure modes for liquid formulated prototype products: phase separation; creaming, where there's a thin layer on the top; flocculation; cracking; and crystallization.

Because we tested thousands of prototype formulated products, we used digital cameras to track all of the prototypes over time and captured millions of images. Manual classification of these images into the different failure modes was impossible. A few years ago, we formed a team at P&G and developed a deep learning algorithm, implemented in Python, to classify these images into stable and the different physical stability failure modes.

With the new availability of the JMP Pro Torch Deep Learning Add-in, we wanted to try applying this add-in to our image classification problem.

We tried image classification on our problem in two ways: binary classification, classifying our images into stable and unstable classes, and multi-class classification, classifying our images into stable versus the different physical stability failure modes. We tried the Torch Deep Learning Add-in on both types of classification problems, with about 3,500 images in the training set and over 500 images in the test set. Let me show you some of the results.
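For readers following along, the metric reported throughout is validation accuracy. As a plain-Python illustration (not the add-in's internal code), accuracy for both the binary and the multi-class task is simply the fraction of images whose predicted class matches the true class; the class labels below are examples, not the exact labels used in the study:

```python
def accuracy(predicted, actual):
    """Fraction of images whose predicted class matches the true class."""
    assert len(predicted) == len(actual) and actual, "need matched, non-empty label lists"
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

# Binary case: stable vs. unstable
binary_acc = accuracy(["stable", "unstable", "stable"],
                      ["stable", "stable", "stable"])

# Multi-class case: stable vs. individual failure modes
multi_acc = accuracy(["stable", "crack", "phase separation"],
                     ["stable", "crack", "crystallization"])

print(binary_acc, multi_acc)  # both 2 of 3 correct
```

The same definition applies regardless of the number of classes, which is why the add-in can report one accuracy number for both problem types.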

When we tried this Torch Deep Learning Add-in on the binary classification, initially I just used the default settings of the parameters and chose the image model called ResNet-34. Very quickly, I was able to get a very good model with 95% validation accuracy, and this was very quick and easy. In the middle, you can see an example of the training history graph; we want to see the loss decrease as the number of epochs, that is, training iterations, increases. Because our goal was to develop a model achieving 95% validation accuracy, this initial model was very good, but we wanted to see if we could continue to improve the validation accuracy.

What I did was manually change some of the hyperparameters. I tried a more complex image model, ResNet-50, and also tried decreasing the learning rate and increasing the number of epochs. After a couple of tries, I was able to quickly achieve a binary classification model with 99% validation accuracy. I was very excited to see these results. Then, for multi-class classification using the JMP Torch Deep Learning Add-in, things took a little more time, and I was not able to get a good model initially.

I heard from Peter Hirsch at JMP that there is a Torch Companion Add-in, which can perform hyperparameter tuning for the Torch Deep Learning Add-in. I downloaded this add-in as well; as you can see, it is available at the same place, marketplace.jmp.com, where you can download both the Torch Deep Learning Add-in and the Torch Companion Add-in. I tried this add-in, and it was able to set up a designed experiment over the hyperparameters.

I chose 20 runs, and it generated 20 combinations of hyperparameters with different numbers of epochs, learning rates, batch sizes, numbers of layers, layer sizes, and image sizes. With this Torch Companion Add-in, the best model I could get for the multi-class classification had about 78% validation accuracy. Our goal was to achieve a model with 95% or higher validation accuracy, so this model was not good enough. Then I tried manually tuning the model with different hyperparameters: I tried a different image model, ResNet-50, and also tried increasing the number of epochs, decreasing the learning rate, and changing the batch size or the image size.
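To make the tuning step concrete, here is a minimal stdlib-Python sketch of generating 20 hyperparameter combinations over the factors named above. This is only a random-sampling stand-in, not the Companion Add-in's actual design-of-experiments algorithm, and the candidate levels are illustrative values, not the add-in's defaults:

```python
import random

# Candidate levels for each hyperparameter (illustrative values only).
space = {
    "epochs":        [10, 20, 40, 80],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size":    [8, 16, 32],
    "num_layers":    [1, 2, 3],
    "layer_size":    [64, 128, 256],
    "image_size":    [224, 320, 448],
}

def sample_runs(space, n_runs, seed=0):
    """Draw n_runs random hyperparameter combinations from the space.
    A rough stand-in for the Companion Add-in's designed experiment."""
    rng = random.Random(seed)
    return [{name: rng.choice(levels) for name, levels in space.items()}
            for _ in range(n_runs)]

runs = sample_runs(space, n_runs=20)
print(len(runs))  # 20 settings, each to be trained and compared on validation accuracy
```

A real designed experiment spreads the runs through the space more evenly than random sampling does, which is the advantage of letting the Companion Add-in construct the design.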

For multi-class classification, it takes much longer to run. Here, I have a screenshot of the model still running after one day. Sometimes it would tell me that my NVIDIA GPU was out of memory, but I kept trying, and after a number of tries, I was able to achieve a very good deep learning model to classify the images into multiple classes, stable and the different failure mode categories. The validation accuracy was 95%, and I was very happy with the results. I also got some suggestions from Russ: to reduce the memory usage, we can reduce the batch size or the number of epochs. This is some of the learning we had. Let's see.
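The reduce-the-batch-size advice for out-of-memory errors can be sketched as a simple retry loop. The `train` function below is entirely hypothetical, a stand-in that simulates a GPU memory limit so the backoff logic is runnable; in practice you would rerun the add-in with the smaller batch size:

```python
def train(batch_size, _gpu_limit=16):
    """Hypothetical stand-in for one training run; raises MemoryError
    when the batch does not fit in the (simulated) GPU memory limit."""
    if batch_size > _gpu_limit:
        raise MemoryError("GPU out of memory")
    return f"trained with batch_size={batch_size}"

def train_with_backoff(batch_size):
    """Halve the batch size until the run fits, mirroring the advice to
    reduce batch size when the GPU runs out of memory."""
    while batch_size >= 1:
        try:
            return train(batch_size)
        except MemoryError:
            batch_size //= 2
    raise RuntimeError("even batch_size=1 does not fit on this GPU")

print(train_with_backoff(64))  # 64 and 32 fail, falls back to 16
```

Smaller batches fit in memory at the cost of noisier gradient updates, which is why halving, rather than jumping straight to a batch size of 1, is the usual compromise.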

Let me go back. As you can see, it's very easy and quick to develop the binary classification model using the Torch Deep Learning Add-in, and we can use the Torch Companion Add-in to do hyperparameter tuning. We can also develop a good model for the multi-class classification problem by manually tuning the hyperparameters. Russ, do you have anything you want to add?

Yeah, what a nice progression, Fangyi, and I know we have quite a few customers who are going down this path. This is not the typical quick analysis that we're used to in JMP, just because it's so much data and the calculations are really intense, and it's great that you're able to use your NVIDIA GPU. Do you happen to remember what the memory was on that GPU card, Fangyi?

I think it's over 200, 256 gigabytes.

Usually, the GPUs are, like, 12 or 16 gigabytes of GPU memory. I'm talking about the memory on the GPU, not the CPU. We'll have to look, but I do know that NVIDIA has so many different cards; it's hard to tell. We get a lot of folks asking nowadays about which card to get, and the good thing is that NVIDIA, because they're doing so well, keeps coming up with better and better cards at a cheaper cost. I've heard they've got a newer one coming that's going to be even more powerful at a lower cost, so we would expect these run times to come down. It's all a function of the workload, especially if you're doing bigger images.

Like, I think you ended up going up to 224, which is a stock size. That's still a pretty good size, and when you start to really utilize the cards, things get expensive. Anyway, that's why we have them. It's really nice that you're able to leverage that high-end computing power with some patience and tuning. Thanks to Peter Hirsch and Scott Allen for that extra add-in, which I know a lot of folks use. It's just a really beautiful add-in on top of Torch itself; it lets you get to better models a little quicker and automates some of these things.

This is the thing: with those long, long run times, you want to let them go overnight sometimes and then just come back the next day and see. To me, you really made the right steps in using the automation along with a little bit of manual tuning and trying some things, which I think is advisable. Not only is it often the way to get to the best model, but it helps you learn a little more about the problem, to see which factors are really driving improvements versus not so much. What I've seen is you often just can't tell. Sometimes one of those image models just seems to capture the nature of the images and can do a lot better than the other ones.

Sometimes they're all about the same, and then it's more a function of the image size or the training regime. You basically need to run some experiments just to try things, and that's the way we've set it up. I'm glad you showed the training loss curve, because it's important to keep an eye on it and make sure it descends and then flattens out. That's a nice-looking one there. Be careful, though: the two cases where you wouldn't want to use the model would be, first, one where the loss is going down and then back up again, especially the solid curve.

That would indicate overfitting, which can happen sometimes. The other case would be if the loss is still going down steadily; that's usually a case where you'd probably want to run more epochs until you get further convergence. These are all little tricks that you learn as you get in there. To me, it's really fun, because this is a no-code interface that lets you quickly get into the modeling without getting bogged down in writing Python code and having to change the code every time you want to try something a little different. Man, this is just a really great application, Fangyi. Congratulations. I'm really happy to see it.
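These rules of thumb for reading a loss curve can be written down as a small decision function. This is a plain-Python sketch of the heuristics just described (the tolerance value is an assumption), operating on a list of per-epoch validation losses:

```python
def diagnose(val_loss, tol=1e-3):
    """Classify a validation-loss curve per the rules of thumb above:
    rising again after its minimum suggests overfitting; still falling
    at the end suggests more epochs are needed; otherwise it has
    descended and flattened out, which is what we want to see."""
    assert len(val_loss) >= 2, "need at least two epochs of loss values"
    best = min(val_loss)
    if val_loss[-1] > best + tol:
        return "overfitting: stop at the minimum-loss epoch"
    if val_loss[-1] < val_loss[-2] - tol:
        return "still descending: run more epochs"
    return "converged: descended and flattened out"

print(diagnose([1.0, 0.6, 0.4, 0.39, 0.39]))  # converged
print(diagnose([1.0, 0.6, 0.4, 0.45, 0.55]))  # overfitting
print(diagnose([1.0, 0.8, 0.6, 0.45, 0.33]))  # still descending
```

In practice the same logic underlies early stopping: keep the checkpoint from the minimum-loss epoch rather than the final one whenever the curve turns back up.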

Thank you, Russ. To finish up, here we have some references: a link to the documentation of the JMP Torch Deep Learning Add-in and a link to the Torch Companion Add-in. I also presented a similar problem, where we developed the model using Python, at a statistical conference, the ASA/IMS Spring Research Conference, a couple of years ago.

I also want to thank my P&G team for working on this problem together in the past. And, as Russ mentioned, we want to thank Peter Hirsch for sharing his Torch Companion Add-in. That's about it. If any of you have any questions, you know who to contact: Russ or myself. It was really fun to learn this JMP Torch Deep Learning Add-in from Russ, and I think we can have many more applications at P&G. Thank you.

Great job, Fangyi. Thank you.

Skill level

Advanced

Published on ‎07-09-2025 08:58 AM by Community Manager Community Manager | Updated on ‎10-28-2025 11:41 AM

