Recordings DOE Club Q3 2024 - Avoid Pitfalls/ Mistakes using DSD/ Taguchi experiments
Question 2: How to avoid Pitfalls
Comments
1. Do not extrapolate any conclusions/interpretations outside of your design space
2. And validate the model(s) with validation points!
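To make point 2 concrete, here is a minimal sketch (in Python, since the thread shows no code) of checking a fitted model against held-out validation runs: fit on the design runs, predict at the validation points, and look at the error. The factor settings, responses, and model terms below are all made up for illustration.

```python
# Minimal sketch: check a fitted model against held-out validation runs.
# All data here is hypothetical.
import numpy as np

# Coded design runs (X) and measured responses (y) used to fit the model
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0]], dtype=float)
y = np.array([10.2, 14.1, 11.8, 18.9, 13.6])

# Main effects + interaction model, fit by least squares
def model_matrix(X):
    return np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])

beta, *_ = np.linalg.lstsq(model_matrix(X), y, rcond=None)

# Validation points: runs NOT used in the fit, measured separately
X_val = np.array([[0.5, -0.5], [-0.5, 0.5]])
y_val = np.array([12.9, 12.4])

pred = model_matrix(X_val) @ beta
rmse = np.sqrt(np.mean((y_val - pred) ** 2))
print(f"validation RMSE: {rmse:.2f}")  # large RMSE => don't trust the model yet
```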
Lesson 1: be brave when setting ranges.
Lesson 2: there might be a lot of "good" models for your response(s). Choose a relevant criterion (or several) linked to your objective(s) to evaluate and select model(s).
Lesson 3: if you have ranges or combinations to avoid, use the Disallowed Combinations filter.
I have a 4 minute video on how to do just that: https://youtu.be/0ILaduCaszE
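Outside JMP, the disallowed-combinations idea from Lesson 3 can be sketched in a few lines: build a candidate grid and drop the factor combinations you cannot run. The factors and the constraint below are invented for illustration.

```python
# Minimal sketch of "disallowed combinations": filter a candidate grid
# by a constraint predicate. Factors and constraint are hypothetical.
import itertools

temperature = [60, 80, 100]   # deg C
pressure = [1.0, 2.0, 3.0]    # bar
catalyst = ["A", "B"]

def allowed(t, p, c):
    # Example constraint: catalyst B degrades above 80 deg C at high pressure
    return not (c == "B" and t > 80 and p >= 2.0)

candidates = [
    (t, p, c)
    for t, p, c in itertools.product(temperature, pressure, catalyst)
    if allowed(t, p, c)
]
print(len(candidates), "of", 3 * 3 * 2, "candidate runs remain")
```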
I'd say another pitfall (particularly for beginners) is overthinking the DOE. Anything is better than one-factor-at-a-time. You can build on your DOE knowledge as you do more of them - you don't have to know everything to get started.
Question 3: Mistakes using Definitive Screening Designs (DSD)
Comments:
Do not try to learn everything on the first try. People tend to try to bring everything into the first design, and most big designs do not work without a lot of prior knowledge.
Also, think iteratively. It is very rare to get all your responses and conclusions from a single design.
Not adding enough potential factors
Not Randomizing Runs: If the order of experimental runs is not randomized, lurking variables (such as time or temperature drift) may introduce bias into the results.
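A minimal sketch of run-order randomization, assuming a simple 2^3 full factorial; the seed is only there to make the printed plan reproducible.

```python
# Minimal sketch: randomize the run order of a design so that drift
# (time, temperature, tool wear) is not confounded with the factors.
import numpy as np

rng = np.random.default_rng(seed=42)  # seed only so the plan is reproducible

# A 2^3 full factorial in standard (Yates) order
design = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])

run_order = rng.permutation(len(design))
for i, idx in enumerate(run_order, start=1):
    print(f"run {i}: factor settings {design[idx]}")
```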
Here is an article from Bradley Jones about "Proper and Improper use of DSD": https://community.jmp.com/t5/JMP-Blog/Proper-and-improper-use-of-Definitive-Screening-Designs-DSDs/b...
Question 4: Moving from DoE to Taguchi experiments - robust design
Comments:
Depending on where/how you want to find a robust process design, there may be better alternatives today with optimal designs.
And if you have several 2-level categorical factors, try using OML designs instead: Mixed-Level Screening Designs (jmp.com)
IMHO, Taguchi is a less modern form of DoE, where you have to buy expensive books on standardized DoE designs.
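For readers curious what optimal designs do under the hood, here is a minimal sketch of the point-exchange idea (a greedy variant, not JMP's actual Custom Design algorithm): swap design rows for candidate rows whenever the swap increases det(X'X). The candidate grid, run budget, and model terms are arbitrary choices for illustration.

```python
# Minimal sketch of greedy point exchange for a D-optimal design.
# Not a production algorithm: just the core idea.
import itertools
import numpy as np

def model_matrix(D):
    # Main effects + two-factor interactions for 3 coded factors
    a, b, c = D[:, 0], D[:, 1], D[:, 2]
    return np.column_stack([np.ones(len(D)), a, b, c, a * b, a * c, b * c])

def d_criterion(D):
    M = model_matrix(D)
    return np.linalg.det(M.T @ M)

# Candidate set: full 3-level grid in coded units
candidates = np.array(list(itertools.product([-1, 0, 1], repeat=3)), dtype=float)

rng = np.random.default_rng(0)
n_runs = 12
design = candidates[rng.choice(len(candidates), n_runs, replace=False)]

# Keep swapping rows for candidates while det(X'X) improves
improved = True
while improved:
    improved = False
    for i in range(n_runs):
        for cand in candidates:
            trial = design.copy()
            trial[i] = cand
            if d_criterion(trial) > d_criterion(design) + 1e-9:
                design = trial
                improved = True

print("final det(X'X):", d_criterion(design))
print(design)
```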
Re: Recordings DOE Club Q3 2024 - Avoid Pitfalls/ Mistakes using DSD/ Taguchi experiments
Thanks for the upload and adding the comments @maria_astals !
Some complementary comments and information from my side:
Question 2: How to avoid Pitfalls
Besides the excellent points mentioned in the discussion and comments, the article "Eight Keys to Successful DOE" from Quality Digest summarizes and explains well the most important aspects to consider when building designs:
- Set good objectives
- Measure responses quantitatively
- Replicate to dampen uncontrollable variation (noise)
- Randomize the run order
- Block out known sources of variation
- Know which effects (if any) will be aliased
- Do a sequential series of experiments
- Always confirm critical findings
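On the aliasing key above: one way to know which effects will be aliased, without consulting tables, is to correlate the columns of the model matrix. A minimal sketch, using a 2^(4-1) fractional factorial with generator D = ABC as a made-up example:

```python
# Minimal sketch: detect aliasing by correlating model-matrix columns.
# Example design: 2^(4-1) fractional factorial with generator D = ABC.
import itertools
import numpy as np

base = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
D = (base[:, 0] * base[:, 1] * base[:, 2]).reshape(-1, 1)  # D = ABC
design = np.hstack([base, D])  # 8 runs, 4 factors

labels, cols = [], []
for i in range(4):                                  # main effects
    labels.append("ABCD"[i])
    cols.append(design[:, i])
for i, j in itertools.combinations(range(4), 2):    # two-factor interactions
    labels.append("ABCD"[i] + "ABCD"[j])
    cols.append(design[:, i] * design[:, j])

M = np.column_stack(cols)
corr = (M.T @ M) / len(design)  # entries of +/-1 flag fully aliased pairs
for a in range(len(labels)):
    for b in range(a + 1, len(labels)):
        if abs(corr[a, b]) > 0.99:
            print(f"{labels[a]} is aliased with {labels[b]}")
```

For this design the sketch reports the classic alias pairs AB = CD, AC = BD, and AD = BC.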
Question 3: Mistakes using Definitive Screening Designs (DSD)
A specific analysis platform for DSDs, the Fit Definitive Screening platform, has been developed to leverage the specific foldover structure of the DSD, making the analysis easier and enabling the detection of two-factor interactions and quadratic effects following the principle of effect heredity. This analysis can be extended to any design with a foldover structure, not only DSDs; see the article from Bradley Jones.
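To see why the foldover structure matters, here is a minimal numerical sketch (hypothetical runs, not an actual DSD construction): stacking any set of runs with its mirror image makes every main-effect column orthogonal to every quadratic and two-factor-interaction column, which is what lets the analysis proceed in two stages.

```python
# Minimal sketch: a foldover (X stacked with -X) makes main effects
# orthogonal to all quadratic and two-factor-interaction columns.
import itertools
import numpy as np

grid = np.array(list(itertools.product([-1, 0, 1], repeat=3)), dtype=float)
rng = np.random.default_rng(1)
half = grid[rng.choice(len(grid), 6, replace=False)]  # any 6 candidate runs
X = np.vstack([half, -half])  # foldover: each run paired with its mirror

main = X  # main-effect columns (odd functions of the runs)
second = np.column_stack(
    [X[:, i] * X[:, j]
     for i, j in itertools.combinations_with_replacement(range(3), 2)]
)  # quadratic + interaction columns (even functions)

# Every odd-times-even inner product cancels pairwise across mirrored runs
print(np.abs(main.T @ second).max())  # prints 0.0
```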
However, even though this analysis method is very powerful, some terms may go undetected even when they are active, due to low power for higher-order effects and possibly small effect sizes. Example: Fit Definitive Screening vs. Stepwise (min. AICc) for model selection.
Other platforms may be helpful, like the Fit Two Level Screening platform or Generalized Regression models (with different estimation methods like Best Subset, Pruned/Two-Stage/Forward Selection, ...), as they can help you build different models, using different methods to detect active effects and estimate parameters.
As the saying goes, "All models are wrong, some are useful", so the key is really to compare and evaluate different models to get a good understanding of the system. Different methods for selecting a model can yield different solutions. Choosing a relevant evaluation metric (depending on the objective), comparing the different models against it, and selecting and/or combining different modeling methods with domain expertise can give you a broader and more precise view of what matters most. From there, you can plan your next experiments to refine your model, and/or prepare some validation points to assess model performance.
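As an illustration of comparing candidate models with a common criterion, here is a minimal sketch using AICc computed from the residual sum of squares; the simulated data, candidate terms, and noise level are all made up.

```python
# Minimal sketch: compare candidate models for one response via AICc.
# Data, terms, and noise are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
n = 16
A, B, C = (rng.choice([-1.0, 1.0], n) for _ in range(3))
y = 5 + 3 * A - 2 * B + 1.5 * A * B + rng.normal(0, 0.5, n)  # "true" model

def aicc(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1  # parameters + error variance
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)  # small-sample correction

ones = np.ones(n)
models = {
    "A + B":       np.column_stack([ones, A, B]),
    "A + B + A*B": np.column_stack([ones, A, B, A * B]),
    "A + B + C":   np.column_stack([ones, A, B, C]),
}
for name, X in models.items():
    print(f"{name:12s} AICc = {aicc(X, y):7.2f}")  # lower is better
```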
(I have submitted an abstract for JMP Europe Discovery 2025 using a Mixed-Level Screening Design and the different modeling platforms mentioned for the analysis of the responses; I hope to be selected to present this comparison of platforms and models.)
Question 4: Moving from DoE to Taguchi experiments - robust design
There may be several definitions of robustness and different ways to create a robustness DoE, depending on which factors you want to test the robustness of your experimental space/optimum point against: external noise factors, variation in process/experiment factors, or a combination of both. Here is a series of articles from Stat-Ease explaining which type of design fits each robustness situation (a small simulation sketch follows the list):
- Robustness against external noise factors : https://statease.com/blog/achieving-robust-processes-via-three-experiment-design-options-part-1/
- Robustness against variation in our set points for process factors : https://statease.com/blog/achieving-robust-processes-via-three-experiment-design-options-part-2/
- A combination of the first two types : https://statease.com/blog/achieving-robust-processes-via-three-experiment-design-options-part-3/
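To illustrate the second situation (robustness against variation in set points), here is a minimal Monte Carlo sketch: jitter the factor settings around two candidate operating points and compare the variation transmitted through a hypothetical fitted model. The coefficients, operating points, and noise standard deviations are invented.

```python
# Minimal sketch: transmitted variation at two candidate operating points.
# Jitter the set points and push them through a hypothetical fitted model.
import numpy as np

def predict(t, p):
    # Hypothetical fitted response surface in coded units
    return 50 + 8 * t - 3 * p - 6 * t**2 + 2 * t * p

rng = np.random.default_rng(3)
sd_t, sd_p = 0.1, 0.1  # assumed set-point variation (coded units)

for name, (t0, p0) in {"point 1": (0.9, -0.5), "point 2": (0.4, -0.5)}.items():
    t = rng.normal(t0, sd_t, 10_000)
    p = rng.normal(p0, sd_p, 10_000)
    y = predict(t, p)
    print(f"{name}: mean = {y.mean():.2f}, sd = {y.std():.3f}")
```

The point sitting on the flatter part of the surface transmits less of the set-point noise into the response, which is exactly what a robust operating point means in this situation.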
Some resources that may be helpful as well:
Designing Robust Products (2020-US-30MP-539)
How to Design for Robust Production Processes | JMP
Hope this info helps JMP users. Happy experimenting!
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)
Re: Recordings DOE Club Q3 2024 - Avoid Pitfalls/ Mistakes using DSD/ Taguchi experiments
I thought I had joined the club, but I was not informed of the "meeting"?