
Model-Driven Robust Design Monte-Carlo Approach to Cooking the Freshest & Tastiest Boiled Dumplings (2020-US-30MP-597)

Level: Intermediate

 

PATRICK GIULIANO, Senior Quality Engineer, Abbott
Charles Chen, Continuous Improvement Expert, Statistics-Aided Engineering (SAE), Applied Materials
Mason Chen, High School Student, Stanford Online High School

 

Cooked foods such as dumplings are typically prepared without precise process control of cooking parameters. The objective of this work is to customize the cooking process for dumplings across various dumpling product types. During cooking, the water temperature and the time duration of cooking are the most important factors determining the degree to which dumplings are cooked (doneness). Dumpling weight, dumpling type, and batch size are also variables that impact the cooking process. We used JMP's structured definitive screening design (DSD) platform, with its special properties, to build a predictive model of cooking duration. The internationally recognized ISO 22000 Food Safety Management and Hazard Analysis Critical Control Point (HACCP) schemas were adopted. JMP Neural Fit techniques using modern data-mining algorithms were compared to response surface methodology (RSM). Results demonstrated the prevalence of larger main effects from factors such as boiling temperature, product type, and dumpling size/batch, as well as interaction effects that were constrained by the mixture used in the dumpling composition. JMP Robust Design Optimization, Monte Carlo simulation, and HACCP control limits were employed in this design/analysis approach to understand and characterize the sensitivity of the cooking factors on the resulting cooking duration. The holistic approach showed the synergistic benefit of combining models with different projective properties, where recursive-partition-based AI models estimate interaction effects using a classification schema, and classical (stepwise) regression modeling provides the capability to interpret interactions of second order and higher, including potential curvature in quadratic terms. This paper demonstrates a novel automated dumpling cooking process and analysis framework which may improve process throughput, lower energy costs, and reduce labor costs (using an AI schema). This novel methodology has the potential to reshape thinking on business cost estimation and profit modeling in the food-service industry.

 

 

Auto-generated transcript...

 



Patrick Giuliano All right. Well, welcome everyone.
  Thank you all for taking the time to watch this presentation, Preparing the Freshest Steamed Dumplings.
  My name is Patrick Giuliano, and my co-authors are Mason Chen and Charles Chen from Applied Materials, as well as Yvanny Chang.
  Today I'm going to tell you about how my team and I harnessed the power of JMP to really understand dumpling cooking.
  The general problem statement here is that most foods like dumplings are made without precise control of cooking parameters.
  The taste of a dumpling, as well as the other outputs that measure how good a dumpling is, is adversely affected by improper cooking time. This is intuitive to everyone who's enjoyed food, so we needn't say too much about that.
  But sooner or later, AI and robotics will be an important part of the food industry, and our recent experience with Covid-19 has really highlighted that.
  So I'm going to talk about how we can understand the dumpling process better using a multifaceted modeling approach, which uses many of JMP's modeling capabilities, including robust Monte Carlo design optimization.
  So why dumplings?
  Well dumplings are very easy to cook.
  And by cooking them, of course, we kill any harmful microorganisms that may be living on them.
  And cooking can involve very limited human interaction.
  So of course with that, the design and the process space related to cooking is very intuitive and extendable,
  and we can consider the physics associated with this problem and use JMP to help us investigate the physics better.
  As I said, AI is coming sooner or later, accelerated by Covid-19. So why would robotic cooking of dumplings be coming?
  Other questions might be: what are the benefits, and what are the challenges, of cooking dumplings in an automated way in a robotic setting?
  This could be a challenge because robots don't have a nose to smell. That's a big reason why, in addition to an advanced and multifaceted modeling approach, it's important to consider some other structured criteria.
  Later in this presentation, I'm going to talk a little bit about the HACCP criteria and how we integrated them in order to solve our problem in a more structured way.
  Okay, so before I dive into the interesting JMP analysis, I'd like to briefly introduce heat transfer physics and food science, and how different heat transfer mechanisms affect the cooking of dumplings.
  As you can see in this slide, there's a Q at the top of the diagram in the upper right. That Q refers to the heat flux density, which is the amount of energy that flows through a unit area per unit time, in the direction of decreasing temperature.
  From the point of view of physics, the proteins in raw and boiled meat differ in their amounts of energy. An activation energy barrier has to be overcome in order to turn the raw meat protein structure into the denatured, compactified structure shown in this picture at the left.
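  For reference, the heat flux density described here is conventionally given by Fourier's law of conduction (a standard physics relation, stated here for context rather than taken from the talk):

  q = -k ∇T

  where q is the heat flux density (W/m^2), k is the thermal conductivity of the material, and ∇T is the temperature gradient; the minus sign reflects heat flowing from hot to cold.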
  So the first task of the cook, when boiling meat in terms of physics, is to increase the temperature throughout the volume of the piece, at least enough to reach the temperature of denaturation.
  Later, I'll talk about the most interesting finding of this phase of the experiment, where we discovered a temperature cutoff.
  Intuitively, you would think that below a certain temperature dumplings wouldn't be cooked properly and would be too soggy, and that above a certain temperature they might be overcooked, burned, or crusty.
  One final note about the physics: at the threshold for boiling, the surface temperature of the water fluctuates, and bubbles rise to the surface of the boiler, break apart, and collapse, which can make it difficult to capture accurate temperature readings.
  So that leads us to the tools we used to conduct the experiment.
  Of course, we used a boiling cooker, and that's very important.
  We needed something to measure temperature, for which we used an infrared thermometer; we used a timer; and we used a mass balance to weigh the dumplings and all the constituents going into them.
  We might consider a Gage R&R study in future work, where we would quantify the repeatability and reproducibility of our measurement tools.
  In this experiment we didn't, but that is very important, because it helps maximize the precision of our model estimates by minimizing the noise components associated with our measurement process.
  Those noise components could be a function not only of, say, the accuracy tolerance of the gauge, but also of how the person interacts with the measurement itself.
  In particular, I'm going to talk a little bit about the challenge of measuring boiling and cooking temperature at high temperature.
  Okay, so briefly: we set this up as a designed experiment, and so we had to decide on the tools first.
  We had to decide how we would make the dumplings, so we needed a manufacturing process and appropriate role players in that process. Then we had to design a structured experiment; to do that we used a definitive screening design (DSD)
  and looked at some characteristics of the design to ensure that it was performing optimally for our experiment.
  Next we executed the experiment,
  and then we measured and recorded the response.
  And of course, finally, the fun part:
  we got to interpret the data in JMP.
  The graphs at the right are scatterplot matrices generated in JMP, using the Graph menu.
  These give us an indication of the uniformity of the prediction space; I'll talk more about that in the coming slides.
  Okay, so here's our data collection plan, and at the right is the response that we measured, which is the dumpling rising time, or cooking time.
  We collected 18 runs in a DSD, which we generated in JMP using the DSD platform under the DOE menu.
  We collected information on the mass of the meat, the mass of the vegetables, the type of the meat, the composition of the vegetables (either cabbage or mushroom),
  and of course the total weight, the size of the batch that we cooked (the number of dumplings per batch), and the water temperature.
  This slide highlights some of the amazing power of a DSD. I won't go into this too much, but DSDs are widely lauded for their flexible and powerful modeling characteristics,
  and they offer great potential for conducting screening and optimization in a single experiment.
  The chart at the right is a correlation matrix generated in JMP, in the Design Diagnostics section of the DOE menu, and it's particularly powerful for showing the extent of aliasing, or confounding, among all the factor effects in your model.
  In this graphic, the darkest blue indicates no correlation; as the correlation increases we move through shades of gray, and finally, at very positive correlation, shades of red. What we're really seeing is that
  main effects are completely uncorrelated with each other and with two-factor interactions, which is what we like to see. Main effects are also uncorrelated with quadratic effects, shown in the upper-right quadrant. The quadratic effects are only partially correlated with each other, and the higher-order interaction terms are partially correlated with the two-factor interaction effects. These characteristics make this design superior to the typical Resolution III and Resolution IV fractional factorial designs that we used to be taught before DSDs.
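  To make the color map idea concrete, here is a minimal Python sketch, not the JMP implementation, that expands a coded design matrix into main-effect, interaction, and quadratic terms and computes their pairwise correlations. The factor names and the random design are illustrative placeholders; an actual DSD, as used here, would guarantee orthogonal main effects.

```python
import numpy as np
import pandas as pd
from itertools import combinations

# Illustrative 3-factor design in coded units (-1, 0, +1); a real DSD
# would come from JMP's DOE > Definitive Screening Design platform.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.choice([-1, 0, 1], size=(18, 3)),
                 columns=["Temp", "Meat", "Batch"])

# Expand to main effects, two-factor interactions, and quadratics.
terms = X.copy()
for a, b in combinations(X.columns, 2):
    terms[f"{a}*{b}"] = X[a] * X[b]
for c in X.columns:
    terms[f"{c}^2"] = X[c] ** 2

# Absolute pairwise correlations among model terms, the analogue of
# JMP's Color Map on Correlations (0 = orthogonal, 1 = fully aliased).
corr = terms.corr().abs().round(2)
print(corr)
```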
  Okay, so just quickly discussing the design diagnostics: here's a similar correlation plot, except the factors have been given their actual names after running this DSD. This is a gray-and-white version of the correlation matrix to help you see the extent of orthogonality, or the lack of it, among the factors.
  What you can see in our experiment is that we did observe a little confounding between batch size and meat (unsurprisingly), and between meat and the interaction of meat with the vegetables in the dumpling.
  Note that we imposed one design constraint here, with which we did observe some confounding: the very intuitive constraint that the total mass of the dumpling is the sum of the masses of its components.
  So why are we doing this? Why are we assessing this quote-unquote uniformity in the scatterplot matrix, and what is it telling us?
  In order to maximize prediction capability throughout the prediction space for rising time, we want to find the combinations of factors that minimize the white areas, because the white areas are where the prediction accuracy is thought to be weaker.
  This is why we take the design and put it into a scatterplot matrix. It's analogous to the homogeneity-of-error assumption in ANOVA, or the equal-variance assumption in linear regression,
  where we want the prediction space to be equally probable across the range of the predictors.
  In this experiment, in order to reduce the number of factors we were looking at, we first used our understanding of the engineering and the physics of the problem.
  We identified six independent variables, the variables least confounded with each other, and proceeded with the analysis on the basis of these primary variables.
  Okay.
  So the first thing we did is we took our generated design and used stepwise regression to simplify the model and identify only the active factors.
  Here you can use forward selection, backward elimination, or mixed direction, together with a stopping criterion, to determine the model that explains the most variation in your response.
  We modeled meat type as discrete numeric, so that the factor coding corresponds to the meat type, which here was shrimp or pork.
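  For illustration, a forward-selection pass analogous in spirit to JMP's Stepwise platform can be sketched with scikit-learn; the design matrix, response, and coefficients below are made-up placeholders.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

# Placeholder design matrix (18 runs x 6 factors) and response, standing
# in for the DSD runs and the measured rising time.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(18, 6))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.1, 18)

# Forward selection: greedily add the factor that most improves the
# cross-validated fit, stopping at the requested model size.
selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=2,
    direction="forward", cv=3)
selector.fit(X, y)
print("Selected factor columns:", np.flatnonzero(selector.get_support()))
```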
  So what kind of a stopping rule can you use in the framework of this type of regression model?
  Unfortunately, when I ran this model again and again, I wasn't able to reproduce it exactly. Model reproduction can be somewhat inconsistent, since this type of fitting schema involves a computational procedure that iterates to a solution.
  Therefore, in stepwise regression, the overfit risk is typically higher.
  Oftentimes, if there's any curvature in the model or there are two-factor interactions, the explanatory variance is shared across both factors, and you can't tease apart the variability associated with one or the other.
  What we can see clearly, based on the adjusted R-squared, is that we're getting a very good fit, and probably a fit that's too good,
  meaning we may not be able to predict future data from a fit to this particular model.
  Okay.
  So here's where it gets pretty interesting.
  One of the first things we did after running the stepwise is that we assigned independent uniform inputs to each of the factors in the model.
  This is a sort of Monte Carlo implementation in JMP, a different kind of Monte Carlo implementation.
  What's important to understand in this framework is that the difference between the main effect and the total effect can indicate the extent of interaction associated with a particular factor in the model. This shows that water temperature and meat,
  in addition to being the most explanatory in terms of total effect, likely interact with other factors in this model.
  What you see is that we identified water temperature, meat, and meat type as our top predictors, using the Pareto plot for transformed estimates.
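  The main-effect versus total-effect comparison is essentially variance-based global sensitivity analysis. A hedged sketch with the SALib library, using a hypothetical surrogate for rising time (an analogy, not the JMP profiler implementation), shows how a gap between the first-order index S1 and the total index ST flags interaction.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical surrogate for rising time: temperature and meat mass
# interact, so their total effects exceed their first-order effects.
problem = {
    "num_vars": 3,
    "names": ["water_temp", "meat_mass", "batch_size"],
    "bounds": [[80, 95], [5, 15], [5, 20]],
}

def rising_time(X):
    t, m, b = X[:, 0], X[:, 1], X[:, 2]
    return 600 - 4.0 * t + 8.0 * m + 1.0 * b - 0.05 * t * m  # made up

X = saltelli.sample(problem, 1024)   # N * (2D + 2) sample rows
Si = sobol.analyze(problem, rising_time(X))
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: S1={s1:.2f}  ST={st:.2f}  (ST - S1 hints interaction)")
```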
  The other thing I'd like to highlight before I move on is the sensitivity indicator that we can invoke under the profiler after we assign independent uniform inputs.
  We can colorize the profiler to indicate the strength of the relationship between each input factor and the response. We can also use
  the sensitivity indicator, represented by these purple triangles, to show the sensitivity, or the strength of the relationship, much as a linear regression coefficient would:
  the taller the triangle and the steeper the relationship, the stronger the effect in either the positive or negative direction; the wider and flatter the triangle, the weaker the role that factor plays.
  Okay.
  So we went about reducing our model using some engineering sense and the stepwise platform.
  What we get is a snapshot of our model fit, originating from the DSD, with an RSM structure that includes curvature. You can see this interaction plot, which shows the extent of interaction among all the factors in a pairwise way.
  We've indicated where some of the interactions are present and what those interactions look like.
  So this is a model that we can really get a handle on.
  One other thing to mention is that the design constraint we imposed is similar to what you might consider a mixture design, where all the components add together and the constraint has to sum to 100%.
  Okay, so here's a high-level picture of the profiler. We can adjust or modulate each of the input factors and
  observe the impact on the response, and we did this in a very manual way,
  just to gain some intuition into how the model was performing.
  To optimize our cooking time, what we confirmed was that the time has to be faster and, of course, the variance associated with the cooking time should be lower.
  Throughput and power savings should also be maximized; those are two additional responses that we derived based on cooking time.
  Okay, so here's where we get more fully into the optimization of the cooking process. As I mentioned before, we created two additional response variables that are connected to the physics: maximum throughput, which depends on how many dumplings are cooked, as well as their weight and the cooking time;
  and power savings, where the product of the power consumed and the cooking time gives an energy component.
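  The exact formulas for these derived responses were not spelled out, so the following is a minimal sketch under assumed definitions: throughput taken as total mass cooked per unit time, and the energy component as power times cooking duration.

```python
def throughput(n_dumplings: int, weight_g: float, time_s: float) -> float:
    """Assumed form: total mass cooked per unit time (g/s)."""
    return n_dumplings * weight_g / time_s

def energy_used(power_w: float, time_s: float) -> float:
    """Energy component: power consumed times cooking duration (joules)."""
    return power_w * time_s

# Example: 10 dumplings of 20 g cooked in 300 s on a 1500 W cooker.
print(throughput(10, 20.0, 300.0), energy_used(1500.0, 300.0))
```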
  In order to engage in this optimization process, we need to introduce error associated with each of the input factors; that's represented by these distributions at the bottom here.
  We also need to consider the practical HACCP control window and, of course, measurement device capability, which is something we would like to look at in future studies.
  Here's a picture of the HACCP control plan that we used. It follows something very similar to a failure modes and effects analysis in the quality profession: a structured approach to experimentation or manufacturing process development, where key variables are identified,
  along with key owners, the criteria being measured against, and how those criteria are validated. HACCP is common in the food science industry, and it stands for Hazard Analysis Critical Control Point.
  In addition to all of these preparation activities, I was mainly involved in this experiment as a data collector, and data integrity is very important,
  so transcribing data appropriately definitely matters.
  All the HACCP control points need to be incorporated into the Monte Carlo simulation range, and ultimately the HACCP tolerance range can be used to derive the process performance requirement.
  Okay, so we consider a range of inputs where Monte Carlo can confirm that the expected range is practical for the cooking time.
  We want to consider a small change in each of the input factors at each HACCP component level, determined by the control point range. Based on the control point range, we can determine the allowable delta-x in each of the inputs from the allowable delta-y in response time.
  We can continue to increase the delta-x incrementally and iteratively,
  hoping that each increase is small enough that the change in y still meets the specification. In industry that specification is usually a design tolerance; in this case, it's our HACCP control parameter range, or control parameter limit.
  If that iterative procedure fails, we make the increment in x smaller. We call this procedure tolerance allocation.
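  A minimal sketch of this tolerance-allocation loop, under stated assumptions (a hypothetical linear surrogate for cooking time and a made-up HACCP-style window, not the authors' exact recipe):

```python
import numpy as np

rng = np.random.default_rng(2)

def predicted_time(temp, meat):
    # Hypothetical surrogate for cooking time (seconds); illustrative only.
    return 600 - 4.0 * temp + 8.0 * meat

def mc_spread(dx_temp, dx_meat, n=10_000):
    # Monte Carlo: simulate inputs uniformly within +/- dx of nominal.
    temp = rng.uniform(90 - dx_temp, 90 + dx_temp, n)
    meat = rng.uniform(10 - dx_meat, 10 + dx_meat, n)
    y = predicted_time(temp, meat)
    return y.max() - y.min()

SPEC_WINDOW = 60.0          # allowable delta-y (assumed HACCP-style limit)
dx = np.array([0.5, 0.5])   # starting tolerances for temp (C), meat (g)

# Tolerance allocation: grow input tolerances while the response window
# stays within spec; back off once the spec would be violated.
while mc_spread(*dx) < SPEC_WINDOW:
    dx *= 1.1
dx /= 1.1                   # last passing allocation
print("Allocated tolerances (temp C, meat g):", dx.round(2))
```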
  We did this somewhat manually, using our own special recipe, although it can be done in a more automated way in JMP.
  In this case, you can see we have all of our responses, so we used multiple response optimization, which involves invoking the desirability functions and maximizing desirability under the prediction profiler,
  as well as Gaussian process modeling,
  also available under the prediction profiler.
  Okay. Next, in the vein of using tools to complement each other and further understand our process and our experiment, we used the neural modeling capability under the Analyze menu, in the predictive modeling tools,
  to facilitate our prediction.
  This model uses a TanH activation function, which can be more powerful for detecting curvature and nonlinear effects,
  but it's something of a black box, and it doesn't really tie back to the physics.
  While it's also robust to non-normal responses and somewhat robust to aliasing and confounding,
  it has its limitations, particularly with a small sample size such as ours. You can actually see that the R-squared values for the training and validation sets differ,
  so this model isn't particularly consistent for purposes of prediction.
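  As an analogue outside JMP (not the Neural platform itself), a small tanh network can be sketched with scikit-learn; with placeholder data of a similarly small size, the gap between training and validation R-squared illustrates the instability noted here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# Placeholder data standing in for the 18 DSD runs.
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(18, 4))
y = 2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.2, 18)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.33,
                                          random_state=0)

# One small hidden layer of tanh nodes, mirroring a single-layer TanH fit.
net = MLPRegressor(hidden_layer_sizes=(3,), activation="tanh",
                   max_iter=5000, random_state=0).fit(X_tr, y_tr)

print("train R^2:", r2_score(y_tr, net.predict(X_tr)))
print("valid R^2:", r2_score(y_va, net.predict(X_va)))
```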
  Finally, we used the Partition platform in JMP to run recursive partitioning on our rising-time response.
  This model is relatively easy to interpret in general, but especially for our experiment, because we can see that for the rising time there is a temperature cutoff at about 85 degrees C,
  as well as some temperature differentiation with respect to maximum throughput; in particular, this 85-degree cutoff is very interesting.
  The R-squared for this model is about 0.7, at least with respect to the rising time response, which is pretty good for this type of model considering the small sample size.
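  For illustration, the recursive-partitioning idea can be mimicked with a shallow regression tree in Python; the data are synthetic, with a change in behavior deliberately placed near 85 C so the threshold surfaces as the first split (this is not the JMP Partition output).

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Synthetic data with a deliberate change in behavior near 85 C,
# mimicking the cutoff the partition model found for rising time.
rng = np.random.default_rng(4)
temp = rng.uniform(75, 95, 200)
rise = np.where(temp < 85, 900 - 2 * temp, 700 - 5 * temp)
rise = rise + rng.normal(0, 5, 200)

tree = DecisionTreeRegressor(max_depth=2).fit(temp.reshape(-1, 1), rise)
print(export_text(tree, feature_names=["water_temp_C"]))  # first split ~85
```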
  What's most interesting about this cutoff is that below 85 C, the water really wasn't boiling: there wasn't much bubbling, there was no turbulence, and the reading was very stable. However, as we increased the temperature, the water started to circulate; turbulence caused non-uniform temperature, cavitation bubble collapse, and steam rising. It's basically an unstable temperature environment,
  and in this type of environment convection dominates rather than conduction.
  Steam also blocks the light path of the infrared thermometer, which further increases the uncertainty associated with the temperature measurement.
  The steam itself presents a burn risk which, in addition to safety, may affect the distance at which the operator places the thermometer, and that distance is very important for measurement accuracy.
  In fact, this is why we capped our design at 95 C: it was essentially impossible to measure water temperature accurately above that.
  Okay.
  So where have we arrived? In summary, in this experiment we used the DSD (DOE) only to collect the data.
  Then we used stepwise regression to narrow down the important effects, though we didn't go too deep into it; we also used common sense and engineering expertise to minimize the number of factors in our experiment.
  We also used independent uniform inputs,
  which is very powerful
  for giving us an idea of the magnitude of effects, for example by colorizing the profiler, by looking at the rank of the effects, and by looking at the difference between the main effect and the total effect, which indicates interaction present in the model.
  We also added sensitivity indicators under the profiler to help us quantify global sensitivity for the purposes of the Monte Carlo optimization schema that we employed.
  Temperature was the dominant main effect in the model, and the physics explains why temperature is the number one factor, as I've shared in our findings.
  In addition, between 80 and 90 degrees C, we observed in the profiler a rapid transition and an increase in the sensitivity of the relationship between rising time and temperature, which is consistent with our experimental observations.
  Secondly, with respect to factors interacting with each other, and because two different physical modes, convection and conduction, are really interacting,
  the stepwise model on the DSD is a good starting point, because it gives us a continuous model with no advanced neural or black-box transformation, so we can at least get a good handle on global sensitivity to begin with.
  Our neural models and our partition models couldn't show us this, particularly given the small sample size in our experiment.
  Finally, we used robust Monte Carlo simulation in our own framework, and we also did some multiple response optimization on rising time, throughput, and power consumption versus our important factors. Through this experiment, we began to really qualify and further our understanding of
  the most important factors in this experiment using a multidisciplinary modeling approach.
  Finally, I'll share some references here for your interest. Thank you very much for your time.
Comments

Now I'm craving dumplings.  Do you have a favorite dumpling recipe?

@tonya_mauldin Check this one out! It's an engaging article, with wonderful pictures: https://medium.com/@Grimod/five-perfect-dumplings-be52003cf188