The process of drying a product in a fluid bed using non-conditioned air is complex and unpredictable, due in part to the variability of the incoming products. As a result, non-optimal settings were used too often, resulting in rework and a plethora of accompanying ills, including added costs, extra work, and increased pressure to meet production schedules. Attempts at modelling the drying time as a function of air and temperature conditions, with the aim of optimising the drying parameters, were only partially successful.

However, a breakthrough came when Functional Data Explorer (FDE) in JMP Pro was used to include each product's drying profile from the first 15 minutes of the process in the model. Operators could then use the results to predict the minimum drying time and adjust machine settings accordingly. Using FDE to generate insight into how different parameters affect results in-process has been extremely valuable. By demonstrating how to deliver this information to operators so they can adjust machine settings mid-process to maximize outcomes, this talk should be useful for anyone who measures process information.

Hello and welcome to this presentation. I am Jacob from Lundbeck, and with me is my colleague [inaudible]. We will give a presentation on the in-process optimization of a drying procedure using Functional Data Explorer.

First we will go through the background of the project, and then [inaudible] will take you into the more statistical depth of what we have done. We hope you find it interesting.

First, a little about Lundbeck. Lundbeck is a company that has been around since 1915, and we are dedicated to brain health, so any kind of medicine that restores brain health.

We have approximately 5,400 employees around the world, present in 50-plus countries, and a history that goes way back. Lundbeck has a full supply chain ranging all the way from API production out to the wholesalers. That also includes CMO partners and a lot of suppliers of all the materials going in. Our products are on 100-plus markets around the world, so it is a fairly complex supply chain.

Today we will focus on a project in our bulk production, where we produce approximately 2.5 billion tablets a year. The case description: we have a granulation process, and I will get into what that is in a minute, where we granulate using what's called a fluid bed.

A lot of air comes in from the outside, and the air is only filtered. We know from our small local weather station that the weather and the air conditions change a lot over 24 hours.

Our operators have had issues figuring out: for how long should I dry my product today in order to be within our internal specifications? There are some small JMP figures up here in the corner, but at this point in time, we observed that 26% of the batches were below our internal limits.

That's a little on the high side. We started out by seeing if we could mitigate this problem and help our very nice operators to run the process more smoothly.

What is the granulation process, or fluid bed drying? We have some wet powder in a bed, and a lot of air coming in. We heat the air to a specific temperature and run a certain amount of it through, drying the water away, and then it goes out again through some filters so we don't lose all our powder from the fluid bed. You can also do granulation in the same equipment; that is not the case here, but it could also be a thing.

In newer versions of fluid beds, you will have an air conditioning unit that can adjust the moisture of the incoming air to a specific set point. But in the current setup we have, that is not possible, and because of physical constraints it is not even possible to buy one and add it.

Because the setup is a little bit on the old side, we have had to install sensors ourselves in order to get enough data out of the machines. Otherwise, they will just print all the results on paper, and that is not optimal for what we're going to do.

Furthermore, we have gathered all the data needed to do this. We have the fluid bed data, we have the weather data over here to the left, and we also have the in-process control data, which is what we target. All of those data streams come into a data lake.

On top of that, we have made a small application where we can display to the operators whatever we want them to see. This is just the flow of all that data.

We started off with a simple model where we just predicted the drying time based on the weather outside; that didn't turn out to be a good model. That is why [inaudible] will now take over and explain how she used Functional Data Explorer to get us closer to something that helps us.

Yes. Thanks, Jacob. Let's take a schematic look at the whole system. As Jacob mentioned, we have this weather station that every day gives us information about the weather, the air going into this fluid bed.

Based on that, we could give the operators an estimate: how long do they need to set the machine to dry the granulate and hit the target? But there are a lot of issues with the seasonal change in the weather conditions, and also with some utilities we have, such as cooling units, that we shut down or turn on depending on summertime or wintertime.

So we had a lot of variation in the air coming to this fluid bed. It was quite challenging to predict the drying time just based on the data or information we get from this weather station down here.

What we did initially, as the first investigation step, was install some IoT sensors just before the fluid bed, on the air that goes into it; also some sensors at the fluid bed itself, for the product temperature, and some sensors at the exhaust air.

With that, we could directly measure the properties of the air that goes into the tank and comes out of the fluid bed. But that means we have a lot of data, time series data: in total, eight sensors sending information every second to our data lake.

With that, you should first of all be able to understand what is going on during the drying and what differs from one batch to another, and also collect more information. The ultimate goal was to use this data for a better prediction of drying in this setup.

The idea was simple. We had a lot of historical data collected in our data lake, and we wanted to pull out all of it, including all the IoT data and the weather data.

With that, we wanted to train our model and then use that model in our platform, which we call prod intel. The first model was: we collect all the information just before starting the drying, push it to the model, and the operator starts based on the estimation we have given them.

Before jumping further, let's have a look at the IoT data we have. As I mentioned, this is historical data that we collected over the last year, with IoT information from all these sensors arriving every second.

Let's look at one batch as an example. During the drying we blow air into the fluid bed, and in a synchronized way this airflow goes up and down and stays on average at the target setpoint.

We also control the drying air temperature. During the process, you can see how the product temperature increases over the process time; this is plotted against the elapsed time from the start of the drying.

You can also see how the exhaust air temperature increases during the drying. That's perfect. But let's compare this batch to another one; I'll just go down to another batch.

This is another batch, shown in red. Clearly, even though we try to control this system, the drying procedure can vary from batch to batch.

It can vary in the drying air temperature, and it can vary in the drying profile, which can be seen in both the product temperature and the exhaust air temperature. This clearly shows that your historical data, plus the data you collect just before starting the drying, can only more or less estimate the target or the drying time you need.

If there is variation in the process, you don't see it until you start the process. Only then can you say: okay, my drying is following another trend, so I need another profile.

That is why we came up with the second idea: using data collected during the drying to estimate what the targets should be for reaching our specification for the final relative humidity of the product.

The idea was simple. We start the process, and during the drying we collect data from all eight sensors. We wait for roughly 15 minutes of data during the drying, then analyze those data, or rather push them to the model that we built. After 15 minutes, we can give a prediction to the operator: the target is, for example, this temperature, or you should stop after ten minutes.

This prediction is now directly based on the performance of the drying of that specific batch, and not on some pre-assumed condition.

How to do that… I'll go back to this one. The good thing about JMP is that it has a very powerful tool for analyzing time series data: the Functional Data Explorer.

Unfortunately, as far as I know, this functionality is at the moment only available in the Pro version. But it is a very powerful tool for analyzing time series data.

As an example, let's look at the batch-to-batch difference in this drying profile. I open Functional Data Explorer. I want to look at, for example, my product temperature; then I have my elapsed time, and of course I have my batch as the ID function.

What this application does is look at all these batches historically over time and try to find a general trend in the profile of all of them. You can see how nicely you can differentiate between batches and compare all the profiles separately.

What I want is to extract the trend, the profile, of each of these signals and then use that as input for my model to predict what the target should be.

Since, as I mentioned, in this specific case we wanted to collect the 15 minutes of data and predict the target based on that trend or profile, I should also limit the time window in which I'm looking at the data here.

In this functionality, you have a very nice option of cropping your data with this filter. In this case, I'm looking at 15 minutes. I also want to narrow it a little from the beginning, to make sure my data is cleaner before the analysis. So I'll just look at the data between 180 seconds and 900 seconds.

I have it here. Then I want to look at the general trend across all these batches, so I go to Models and select Direct Functional PCA. What JMP does is try to find a general trend within all these batches, shown as a mean function, and then extract five shape functions (eigenfunctions) that characterize the difference between each batch and the average level of all of them.
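In symbols (my own notation, since the talk only describes this verbally), the functional PCA represents each batch's curve as the mean function plus a weighted sum of the shape functions:

\[
y_i(t) \;\approx\; \mu(t) \;+\; \sum_{k=1}^{5} s_{ik}\,\phi_k(t),
\]

where \(y_i(t)\) is, say, the product temperature of batch \(i\) at elapsed time \(t\), \(\mu(t)\) is the mean function over the historical batches, \(\phi_k(t)\) is the \(k\)-th shape function, and \(s_{ik}\) is the FPC score of batch \(i\) on component \(k\). The scores are the one-number-per-batch summaries used as model inputs below.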

This gives you very nice information about how much each batch differs from the average look. In the profiler down here, you can easily play with all the extracted FPC components to see the effect of each of them.

For example, this is my elapsed time. I want to see: if I have a higher FPC1 component, what is the effect on my average product temperature, on its trend or profile? And if I go further toward negative numbers, what is the effect on my product temperature?

Put simply: if I go higher with FPC1, it means that during this period the average level of the product temperature was lower, and if it was more negative, the average level was higher.

You can easily see the effect of each of these factors separately on the overall product temperature trend or profile over time. Now I want to extract these components to use as input for my model, and then use that model to predict the target.

You have the option to save the results as a summary table here, and it gives you a very nice output with all the FPC components that I have here, associated of course with the different batch numbers.

As you can see, for each batch I now get one number for each of these FPC components. This will be my input for building my model, because it is exactly the information I extracted from the profile of each of these signals. Based on that, I know how my new batch varies from the average look, and I can move forward.

Let's go back to see how I would model that. I just collect the FPC components of all my signals together, and from this point on it is quite easy. You just build, for example, a generalized regression model, or use any other approach to build your model.
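As a rough sketch of this modelling step (not the actual Lundbeck model; the file and column names are assumptions, and scikit-learn's LassoCV stands in for JMP's Generalized Regression, which typically fits penalized models):

```python
import pandas as pd
from sklearn.linear_model import LassoCV

# Hypothetical summary table: one row per batch with the FPC scores saved
# from FDE for each sensor signal, plus the measured final relative humidity.
df = pd.read_csv("fpc_summaries.csv")

X = df.filter(regex="FPC")          # e.g. ProductTemp_FPC1 ... ExhaustTemp_FPC5
y = df["final_relative_humidity"]   # assumed response column

# Cross-validated lasso shrinks unimportant FPC components toward zero,
# roughly mirroring variable selection in Generalized Regression.
model = LassoCV(cv=5).fit(X, y)
print(dict(zip(X.columns, model.coef_)))
```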

As an example here, I had the easy option of using my final signal; I wanted to predict my relative humidity. As X, I had the combination of the different FPC components as inputs.

Then I could come up with the final estimation, my final model, to put into my system. It's quite easy. All of this prediction is now directly based on the profiles of the signals we collect from the IoT sensors, and we set the target based on that.

From this point, all that matters is that we need to extract the model. After we have built it, we need to extract it, and that brings us to the first question: how can I extract the FPC components for a new batch?

Because the biggest issue is that I cannot use JMP afterwards. In my setup, I want to have everything in place in my platform, which is running on AWS, so I need to be able to extract both the model and the FPC components.

Extracting the model is quite easy; that I can do. But the exact extraction of the FPC components is quite difficult. Unfortunately, right now there is no straightforward way of extracting these components from JMP when a new batch arrives.

With this new batch, I want to be able to estimate its components in order to use this model. There is a rather indirect way of extracting these components, which is to go back to the fundamentals of how these FPC components are calculated.

If you look at the… let me find it again; maybe I closed it. Yes, it's here. If you go back to the table you extracted with the FPC components, you have a column that gives you the prediction, that is, how JMP tries to predict the signal using these FPC components.

That one is quite straightforward. JMP builds a mean function over all the data, which in our case was all the historical data we put in. Then it creates the shape functions and adjusts them per batch to compensate for the difference between each batch and the average look.

What we can do is this: we have historical data, and based on it we can extract the mean value over time, and we can also extract the shape functions over time.

Once we have these two ingredients, we can extract them and put them in our platform. Then every time a new batch comes, we fit it to this model and calculate the FPC components.
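A minimal sketch of what "putting them in our platform" could look like, assuming the mean and shape functions have been evaluated on the cropped time grid (everything here, including the file layout, is an assumption for illustration):

```python
import numpy as np
import pandas as pd

# Fixed one-second grid over the cropped window (180-900 s).
t = np.arange(180, 901, dtype=float)

# In practice these values come from evaluating the mean-function and
# shape-function formulas that JMP saves with the FPCA summary table;
# the zeros here are placeholders for the sketch.
mu = np.zeros_like(t)        # mean function mu(t)
Phi = np.zeros((t.size, 5))  # columns = shape functions phi_1..phi_5

artifacts = pd.DataFrame(Phi, columns=[f"shape_{k}" for k in range(1, 6)])
artifacts.insert(0, "elapsed_s", t)
artifacts.insert(1, "mean_fn", mu)
artifacts.to_csv("product_temp_fpca_artifacts.csv", index=False)  # ship to AWS
```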

How can we do that? I'll come back to this case here. Every time you make the table, you extract the summary of your FPC components, but you actually also get the shape functions; they are just hidden here. If you look at the shape functions, for example, you see an equation behind them that calculates the shape function over time.

What I want to do is directly extract this one, and also my mean value, as profiles over time. Let me do some cleaning here to make it easier. This is my elapsed time, which I just unhid.

By default (and this one is extra, maybe I can just delete it), the table you extract calculates all the shape functions at the center of the time range you put in.

What I want is to extract these shape functions over the 180 to 900 seconds to which I trimmed my data. I can easily update this column: I just update my elapsed time values up to 900 seconds, and I get the numbers for my shape functions over time.

You can see you get exactly the same input as, for example, in the first window you had here. Now I have these shape functions out. How can I use them? I just compile them all together.

I have a prepared data table here. I extracted these shape functions, for example for product temperature, based on the historical data, together with my elapsed time. Now a new batch has come in, and I added it to my table. Of course, in the new platform it would be AWS that does all these calculations.

Now I have my new batch: the signal I got for product temperature between 180 and 900 seconds. Coming back to the equation I explained before, I now want to build up this model and then extract the FPC components.

What I simply do is, first of all, find the difference, at each time point, between the signal of the new batch and the average values. You also get this mean function formula as an output of the table you extract from FDE. The residual is simply the difference between the signal and the average values calculated from the historical data. With that, you just need to build the rest of the model, this part of the model.

Let's do that. What I built is this: these are my residuals, and for each of my functions I want to extract the FPC components. Of course, I don't want any intercept, and I deactivate the Center Polynomials option to make sure I get exactly the same extraction as the original one.

When I run this and look at the estimates, you see that for each shape function you get one parameter, which represents your FPC component. This is the way to extract each of these components based on the shape functions you extracted from the historical data. After that, you put them in as inputs for your model.
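In Python terms, the same fit might look like the sketch below, reusing the hypothetical artifacts file from earlier: the FPC scores of a new batch are the no-intercept least-squares coefficients of its residual curve on the shape functions.

```python
import numpy as np
import pandas as pd

art = pd.read_csv("product_temp_fpca_artifacts.csv")  # grid, mean, shapes

def fpc_scores(time_s, signal, art):
    """Project one new batch's curve onto the historical shape functions."""
    # Interpolate mean and shape functions onto the batch's own timestamps,
    # since its readings need not land exactly on the stored grid.
    mu = np.interp(time_s, art["elapsed_s"], art["mean_fn"])
    Phi = np.column_stack([
        np.interp(time_s, art["elapsed_s"], art[f"shape_{k}"])
        for k in range(1, 6)
    ])
    residual = signal - mu  # difference from the average look
    # No intercept: solve residual ~ Phi @ scores in the least-squares sense.
    scores, *_ = np.linalg.lstsq(Phi, residual, rcond=None)
    return scores
```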

Every time a new dataset, a new batch, comes in, you fit this, extract the FPC components, and put them into the model as inputs. With that, you should be able to get the estimate you need for each batch.
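End to end, the in-process loop could then look like this, reusing the sketches above (names are still assumptions; for brevity only the product-temperature scores are computed, so this presumes a model trained on those five features, whereas in production you would repeat the scoring for all eight signals and concatenate):

```python
# After 15 minutes of drying, crop to the analysis window, score the batch,
# and feed the scores to the fitted regression to set the target.
new = pd.read_csv("new_batch_sensors.csv")  # assumed export from the data lake
win = new[(new["elapsed_s"] >= 180) & (new["elapsed_s"] <= 900)]

scores = fpc_scores(win["elapsed_s"].to_numpy(),
                    win["product_temp"].to_numpy(), art)
predicted_rh = model.predict(scores.reshape(1, -1))[0]
```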

Unfortunately, as I mentioned, this is not a straightforward way to extract these FPC components, but it is one way to do it. This is how we approached using FPC components to predict the performance of the system in our setup.

The final slide is a general conclusion. I think it is quite clear, at least in our case, that it is very difficult to predict the state of a system based on some default inputs, considering all the variation the system is exposed to and everything that happens in it.

Of course, it is much better to bring in the possibility of online control of the system where possible, but that is not easy either. The second point is that Functional Data Explorer is a great tool for analyzing time series data. However, right now it is mainly available in the Pro version, and it is not easy to extract some of its elements out of JMP.

Most industrial systems may not have the possibility of connecting directly to JMP, and that really limits this functionality that exists in JMP.

But it has given us great insight into our process from an exploratory point of view: where the problem actually was, and what was happening in each batch. Before this, we were just looking at the average or something like that. So now we actually see…

More detail.

Yes. And the time series data. From a process point of view, it has been very interesting to see the analysis part of it.

Exactly. Yes. With that, thank you for listening, and we hope to see you all there.

