Do you need to simulate the results of your model to help determine how to measure and reduce defect rates? Do you aim to compare the impact of changing variable values on one or more response goals, such as cost and yield? Could your team find value in understanding the impact of factor limits on the expected defect rate of future runs and the portion of responses that pass specifications (the in-spec portion)?
In this session, using a simple 4-factor, 2-response manufacturing example, you see how to:
- Interactively determine how your prediction model changes as you change settings of individual factors.
- Find and examine tradeoffs for optimal (desirable) factor settings.
- Define and optimize desirability for one or more responses.
- Perform complex response optimization to handle competing criteria by specifying unequal weights.
- Simulate the distribution of model outputs as a function of the random variation in the model inputs and noise (see the sketch after this list).
- Save simulated values for further evaluation, including values that indicate if a simulated response is within the specification limits.
- Explore the impact of factor limits on the expected rate of future runs passing response specifications (in-spec portion).
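JMP's Prediction Profiler and its Simulator handle these steps interactively, but the underlying calculation for the simulation pieces is plain Monte Carlo. The sketch below (Python, outside JMP) is a minimal illustration only: the factor distributions, prediction formulas, noise levels, and specification limits are hypothetical stand-ins, not the webinar's 4-factor, 2-response example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of simulated future runs

# Hypothetical factor distributions: each factor varies randomly
# around its nominal setting (transmitted variation).
x1 = rng.normal(100.0, 2.0, n)   # e.g., temperature
x2 = rng.normal(5.0, 0.1, n)     # e.g., pressure
x3 = rng.normal(30.0, 1.5, n)    # e.g., time
x4 = rng.normal(0.50, 0.02, n)   # e.g., concentration

# Hypothetical fitted prediction formulas plus added response noise (RMSE).
yield_sim = 60 + 0.2*x1 + 3.0*x2 - 0.1*x3 + 20*x4 + rng.normal(0, 1.0, n)
cost_sim  = 12 - 0.01*x1 + 0.5*x2 + 0.05*x3 + 5*x4 + rng.normal(0, 0.3, n)

# Hypothetical specification limits for the two responses.
in_spec = (yield_sim >= 90) & (cost_sim <= 18)

print(f"In-spec portion: {in_spec.mean():.3f}")
print(f"Defect rate (ppm): {(1 - in_spec.mean()) * 1e6:.0f}")
```

Tightening or loosening the factor distributions in a sketch like this is the same question the Simulator answers: how do factor limits change the expected in-spec portion of future runs?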
Questions answered by @Laura_Higgins, @DonnyKopp and @PatrickGiuliano at the live webinar:
Q: Can you filter the simulated values using a formula, so that you could enter constraints for the X's?
A: Yes, if you have a formula column in your JMP table, or you can add your own formula column. See the video on formulas.
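For a rough, tool-agnostic picture of what such a filter does, here is a minimal sketch in Python; the column names, simulated values, and the constraint on the X's are all hypothetical. In JMP itself the equivalent is a formula column (or a Data Filter) applied to the table of simulated values.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical table of simulated factor settings and a simulated response.
sim = pd.DataFrame({
    "x1": rng.normal(100.0, 2.0, 10_000),
    "x2": rng.normal(5.0, 0.1, 10_000),
})
sim["y"] = 60 + 0.2*sim["x1"] + 3.0*sim["x2"] + rng.normal(0, 1.0, 10_000)

# Constraint on the X's, analogous to a formula column that flags feasible rows.
feasible = sim[(sim["x1"] + 10*sim["x2"] <= 155) & (sim["x2"] >= 4.9)]
print(f"{len(feasible)} of {len(sim)} simulated runs satisfy the constraint")
```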
Q: Are there recommendations for sampling continuous process data to build your model from? For example, how often is too often, and what are the best ways of capturing process lag in your model?
A: It definitely depends. Also, JMP has some great tools to help you sample your process intelligently. For example, you can use Tables > Subset to pull out meaningful "intervals" (subsets) of rows, so you don't muddy up your sampling process. Imagine sampling completely at random from a process that has many lots and persistent lot-to-lot variation (mean shifts that are considered expected). If you sample totally at random from that process, your sample will not behave rationally (e.g., it won't look reasonably consistent, and it may not fit any particular distributional model, especially a 'physically based' one like Normal or Weibull).
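A quick simulation makes the lot-to-lot point concrete; the numbers below are hypothetical. Each lot gets its own mean (the expected lot-to-lot shifts) plus within-lot variation, and a completely random pooled sample then looks much wider than any single lot and does not follow one simple distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
n_lots, per_lot = 20, 50

# Hypothetical process: each lot has its own mean (expected lot-to-lot
# shifts, sd = 3) plus within-lot variation around that mean (sd = 1).
lot_means = rng.normal(100.0, 3.0, n_lots)
lots = [rng.normal(m, 1.0, per_lot) for m in lot_means]
pooled = np.concatenate(lots)

print(f"Average within-lot sd: {np.mean([lot.std(ddof=1) for lot in lots]):.2f}")
print(f"Sd of a completely random pooled sample: {pooled.std(ddof=1):.2f}")
# The pooled sd is several times the within-lot sd, and the pooled data are
# a mixture of shifted distributions rather than a single Normal.
```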
Q: Is sensitivity analysis (how much input variation affects output variation) called something else in JMP? What is the best way to see which predictors don't need as much control?
A: Start by searching for "Stochastic Optimization" inside the JMP support documentation.
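For a rough sense of the idea behind that kind of sensitivity analysis, the sketch below (Python, with a hypothetical fitted model and hypothetical input tolerances) compares the simulated output variance with each input's variation "frozen" at its nominal value. Inputs whose freezing barely changes the output variance are the ones that do not need as much control.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

def model(x1, x2, x3):
    # Hypothetical fitted prediction formula.
    return 60 + 0.2*x1 + 3.0*x2 - 0.1*x3

# Hypothetical nominal settings and input variation (tolerances).
nominal = {"x1": 100.0, "x2": 5.0, "x3": 30.0}
sd      = {"x1": 2.0,   "x2": 0.1, "x3": 1.5}

def simulate(frozen=None):
    """Simulate the output, optionally holding one input fixed at nominal."""
    draws = {
        name: (np.full(n, nominal[name]) if name == frozen
               else rng.normal(nominal[name], sd[name], n))
        for name in nominal
    }
    return model(**draws)

base_var = simulate().var()
for name in nominal:
    reduction = 1 - simulate(frozen=name).var() / base_var
    print(f"Freezing {name} removes about {reduction:.0%} of the output variance")
```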