Upa_Upitas
Level I

How to compare the results between two DoE/Response Surfaces?

Hey, 

I have been characterizing different reactors using DoE to establish the correlation/equation between one identical response, y, and two independent parameters, x1 and x2. The DoE and data analysis for each reactor were done not in JMP but in Python.

 

Now I would like to take the correlation for one reactor and statistically compare it with the others in JMP, assessing differences and similarities. I thought of overlapping the surface plots, for instance, but would like to explore JMP's capabilities. The Python code generated a table of x1, x2, and y for each reactor. I couldn't find any tutorial for developing such a comparison given that the DoE wasn't initially designed in JMP; I have the results directly.

How could I input the data into JMP and perform the comparison?  

Thank you

P_Bartell
Level VIII

Re: How to compare the results between two DoE/Response Surfaces?

Let's start with how the response data was actually collected, because in large measure this will help determine the suggested analysis pathways. Was the data collected in the context of a designed experiment? You cited using Python rather than JMP to design the experiment. Not sure what that means. How was 'reactor' treated as a factor type in the design, if treated at all? An outright classification factor unto itself? Or a blocking factor? What other nuisance variables might be in play between the reactors? For example, were the same raw materials used in each reactor? We used to run experiments on a 'reactor' half a world away from the other, with the idea of trying to compare the 'reactor' effect...and there was no way we had identical raw materials, let alone operators, measurement systems, and on and on.

Was the data 'happenstance data'? In other words, did you just collect manufacturing data from multiple process runs and are now trying to torture information/insight out of it? Commonplace in manufacturing data...hey, the data is almost 'free'! We might as well try and see what we can see!

All these issues will lead to selection of appropriate analysis pathways. But regardless of the answers to the above questions...my advice wrt analysis is to start with simple visualizations of the data that help answer the practical questions at hand. There is always the temptation to jump right to modeling of some sort, ignoring/skipping over JMP's simpler visualization platforms like Graph Builder, Distribution, and Fit Y by X.
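
For instance, one minimal JSL line of attack, assuming you've stacked both reactors into one table with (hypothetical) columns x1, x2, y, and a Reactor label:

// Plot the response against one factor, reactors overlaid
dt = Current Data Table();
dt << Graph Builder(
	Variables( X( :x1 ), Y( :y ), Overlay( :Reactor ) ),
	Elements( Points( X, Y, Legend( 1 ) ) )
);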

Last 'ask'...can you share your data set...even if it's anonymized? 

Upa_Upitas
Level I

Re: How to compare the results between two DoE/Response Surfaces?

Hey Bartell, 

Thank you for answering. I agree on keeping the comparison simple. To start, I would like to learn how to overlay the response surfaces of Reactor 1 (R1) and Reactor 2 (R2) to visualize and assess the response space over x1 and x2. Next, perhaps, a Student's t-test to assess the difference in y between R1 and R2.

Regarding your questions: R1 from Site A and R2 from Site B (same volume and temperature) were analyzed over the same design space using a DoE methodology. x1 and x2 are in similar ranges. Obviously, since the experimentation was done in two different factories, many things are not completely identical. There are center runs. Each characterization has 6 to 9 runs.

Since the DoE design and the response-value calculations were done outside JMP, I would like to use your software for the final step of comparing the results.

My purpose is to assess the differences in the response between Sites A and B.
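
For the t-test part, is something like this JSL sketch the right direction? (The column names x1, x2, y, and Reactor are just what I would use, assuming one combined table.)

// Two-sample comparison of y between R1 and R2
dt = Current Data Table();
dt << Oneway(
	Y( :y ),
	X( :Reactor ),
	t Test( 1 ),             // unpooled two-sample t-test when X has two levels
	Means and Std Dev( 1 )
);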

Thank you for any input in this matter :) 

P_Bartell
Level VIII

Re: How to compare the results between two DoE/Response Surfaces?

Lots of options and workflows within JMP for this type of problem and data. You've told me nothing so far that leads away from doing what I recommend in the second-to-last paragraph of my initial reply. This will allow you to gain insight with respect to your practical questions BEFORE any modeling work to create response surfaces. At a minimum I suggest exploring these 5 issues:

1. Where's the middle of the responses for each 'x'?

2. What's the shape of the responses for each 'x'?

3. How spread out are the responses for each 'x'?

4. Plot the responses in order of experimental execution.

5. Are there any response data that look suspicious, odd, unexpected, or unusual that might make subsequent modeling more problematic? If so, how will you handle them in subsequent analysis and reporting?

A simple run chart (for issue #4) and the Fit Y by X platform (for the rest) are your friends here. Graph Builder could work too.
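
Something like this JSL would cover the list, assuming hypothetical columns named Run Order, x1, x2, y, and Reactor in one combined table:

// Issues 1-3: middle, shape, and spread of y for each reactor
dt = Current Data Table();
dt << Distribution( Column( :y ), By( :Reactor ) );

// Issue 4: y in order of experimental execution
dt << Graph Builder(
	Variables( X( :Name( "Run Order" ) ), Y( :y ), Overlay( :Reactor ) ),
	Elements( Points( X, Y, Legend( 1 ) ), Line( X, Y, Legend( 2 ) ) )
);

// Issue 5 (and a first look at factor effects): y vs. each x, by reactor
dt << Bivariate( Y( :y ), X( :x1 ), By( :Reactor ) );
dt << Bivariate( Y( :y ), X( :x2 ), By( :Reactor ) );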

Once you get to modeling, and presuming you end up with 'useful' models, the JMP Prediction and Contour Profilers will be invaluable for additional response surface visualization and comparison.
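
As a sketch, a response surface fit along those lines might look like this in JSL (combined table and column names assumed as before; the Profilers are then available from the report's red triangle menu):

// Full quadratic (response surface) model in x1 and x2, with Reactor in the model
dt = Current Data Table();
dt << Fit Model(
	Y( :y ),
	Effects(
		:x1 & RS, :x2 & RS,                 // & RS marks the response surface effects
		:x1 * :x1, :x1 * :x2, :x2 * :x2,    // quadratic and crossed terms
		:Reactor
	),
	Personality( "Standard Least Squares" ),
	Run
);

With Reactor in the model, the Profiler includes it as a factor you can toggle to see how the fitted surface shifts between R1 and R2.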

statman
Super User

Re: How to compare the results between two DoE/Response Surfaces?

I'm a bit confused...why don't you use "your" software to do the comparison? ("Since the DoE design and the response-value calculations were done outside JMP, I would like to use your software for the final step of comparing the results.") If you attach the data table, it is much easier to provide advice.


Also, this can't be correct: "R1 from Site A and R2 from Site B (same volume and temperature) were analyzed over the same design space". No way they could be the same! Different raw materials, ambient conditions, set-ups, sensors, measurement systems, etc. I have also run a number of experiments across multiple reactors. Reactor is not a factor, but likely confounded with block. Analyze at the block averages. You can add block to the model (and block-by-factor interactions). I have often found factor effects (x1 and x2 in your case) dependent on reactor (a block-by-factor interaction).
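
In JSL terms, a sketch of that model (column names assumed, as elsewhere in this thread):

// Factors, block (Reactor), and block-by-factor interactions
dt = Current Data Table();
dt << Fit Model(
	Y( :y ),
	Effects(
		:x1, :x2, :x1 * :x2,
		:Reactor,                          // block
		:Reactor * :x1, :Reactor * :x2     // block-by-factor interactions
	),
	Personality( "Standard Least Squares" ),
	Run
);

A significant Reactor*x term is exactly the "factor effects dependent on reactor" situation described above.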

"All models are wrong, some are useful" G.E.P. Box
