Donald46
Level II

Best JMP function/methodology to test whether a test method is capable of distinguishing performance between two Products

Hi JMP Community,

 

This is a tricky one but I wasn't sure who else to ask!

 

I've got a passion project to do some work to develop a new test method. I do a test and measure a result between 1-100 (continuous response). Different products will score differently. A product that does really well might get 20, and a product that's not so great might get 70 or 80. This leads to two main questions:

 

1. What analysis can I do in JMP to show that the test is repeatable and reproducible?

2. How can I find the resolution of the test (at a given confidence level)? E.g., what can I do to be confident that the test can distinguish a result of 50 from a result of 60?

 

I've been going down the rabbit hole of MSA & EMP analysis/probable error, but I'm not sure if this is the right application.
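To put question 2 in concrete terms: if a gauge study gave an estimate of the test's repeatability standard deviation, the smallest reliably distinguishable difference follows directly. A minimal Python sketch, assuming a purely hypothetical repeatability SD of 3 score units:

```python
import math

# Assumed (hypothetical) repeatability standard deviation of the test,
# in score units; a real value would come from a gauge/EMP study.
sigma_e = 3.0

# SD of the difference between two independent single measurements:
sd_diff = math.sqrt(2) * sigma_e

# Smallest difference detectable with ~95% confidence (two-sided z = 1.96):
lsd = 1.96 * sd_diff
print(f"least significant difference ~ {lsd:.1f} score units")

# With this assumed sigma_e, a 10-point gap (50 vs 60) just clears the bar:
print(10 > lsd)
```

With a larger measurement SD the 10-point gap would not clear the bar, which is exactly what the gauge study has to establish.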

 

Thank you so much for any ideas.

 

 

1 ACCEPTED SOLUTION

statman
Super User

Re: Best JMP function/methodology to test whether a test method is capable of distinguishing performance between two Products

Mark provided a link for you, and I have some additional thoughts/questions for you to ponder:

1. "If you cannot measure, you cannot improve" (Lord Kelvin), so your desire to develop a measurement system is important.

2. You suggest the measure is continuous but bounded by 1 and 100. This is typically not considered continuous, but perhaps discrete and bounded at that. You also only give examples that are multiples of 10. What is the measurement unit (the smallest increment of change reported by the measurement system)? If it is 10, then you have only 10 categories.

3. What do the phrases "does really well" and "better" mean? These are not operationally defined.

4. Repeatability and reproducibility are the two components of measurement precision, assessing the variability of the system (under certain conditions). There are other aspects: stability, accuracy, bias, and also your last point, discrimination, which assesses the effective resolution of the measurement system. These can all be assessed with proper sampling of the measurement system (EMP, MSE, MSA, whatever you call it) in the appropriate inference space (compared to the sources of variation you wish to distinguish).

5. What do you mean by "I've been going down the rabbit hole of MSA & EMP analysis/probable error"? What rabbit hole? EMP provides good guidance for evaluating measurement systems. I think one of the bigger challenges is to recognize that decisions about the adequacy of your measurement system depend on what sources of variation you want to measure. How your samples represent those sources in your study can have a huge effect on your conclusions.
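As a sketch of what EMP-style numbers look like: with duplicate measurements on a handful of samples, repeatability and Wheeler's probable error can be estimated from the average range. The data below are made up purely for illustration:

```python
# Made-up duplicate measurements (two runs per sample), for illustration only.
duplicates = {
    "sample A": [20.3, 21.1],
    "sample B": [48.9, 50.2],
    "sample C": [71.5, 70.4],
    "sample D": [35.0, 36.2],
}

# Range-based repeatability estimate (EMP style): sigma_e = Rbar / d2,
# where d2 = 1.128 for subgroups of size 2.
ranges = [max(runs) - min(runs) for runs in duplicates.values()]
rbar = sum(ranges) / len(ranges)
sigma_e = rbar / 1.128

# Wheeler's probable error: half of all repeated measurements fall within
# +/- PE of the "true" value; it characterizes the test's effective resolution.
pe = 0.675 * sigma_e
print(f"sigma_e ~ {sigma_e:.2f}, probable error ~ {pe:.2f}")
```

Wheeler's guideline is that the recorded measurement increment should fall between roughly 0.2 and 2 probable errors; outside that band you are either recording noise digits or throwing information away.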

 

"All models are wrong, some are useful" G.E.P. Box

View solution in original post

5 REPLIES 5

Donald46
Level II

Re: Best JMP function/methodology to test whether a test method is capable of distinguishing performance between two Products

Thank you both for keeping me honest and pointing me in the right direction. Let me try to expand on each point:

 

1. Amen

2. Sorry, I did not elaborate on this. The test measures the weight loss of a sample, and the weight loss cannot exceed 100 mg (because there is only 100 mg to lose). The smallest measurement unit is 0.1 mg. Is it therefore more accurate to describe the system as having 1,000 categories?

3. A better definition might be: a sample representative of premium performance in the market might score 20-30 (less weight loss), and a sample representative of budget performance might commonly score 70-80. This is still not a very good definition; maybe I should let go of this distinction.

4. Thank you, I will look closer at an EMP approach. It looks like I was trying to shortcut the evaluation of discrimination, but a more holistic approach is probably required.

5. This was an emotional response, haha. I was struggling with translating the factory-focused language of MSA/EMP into a laboratory-experiment context. This is the right place to look.

 

"I think one of the bigger challenges is to recognize that decisions about the adequacy of your measurement system depend on what sources of variation you want to measure. How your samples represent those sources in your study can have a huge effect on your conclusions." This resonates strongly with what I need to achieve. It looks like I need to identify samples for the EMP study that represent the kinds of variation I want to measure. Thank you for any other guidance you can provide.

Re: Best JMP function/methodology to test whether a test method is capable of distinguishing performance between two Products

I want to add just a couple of comments.

  • MSA, by any name or method, is very well defined at this point. There is no need to 're-invent the wheel,' though there will always be improvements. It is vitally important.
  • I would not think in categories. You have a continuous measure, not an attribute measure. This distinction is important. All quantitative measures have limitations, but they are still continuous.
  • The point of MSA is to separate the variation in the product and the variation in its measurements.
  • There are different approaches to sampling. They all work, but they don't all work in all situations. So learn which one works for your situation.
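The third point above, separating product variation from measurement variation, can be summarized in one number: the intraclass correlation, the fraction of the observed variance attributable to the product rather than to the measurement. A sketch with assumed, purely illustrative variance estimates:

```python
# Illustrative variance estimates; real values come from a gauge/EMP study.
var_total = 400.0        # variance of recorded scores across products
var_measurement = 9.0    # measurement-error (repeatability) variance
var_product = var_total - var_measurement

# Intraclass correlation: share of observed variance due to the product itself.
icc = var_product / var_total
print(f"ICC = {icc:.3f}")

# In Wheeler's EMP classification, an ICC above 0.8 is a "First Class" monitor.
print(icc > 0.8)
```

With these assumed numbers the measurement system barely dents the observed variation; a noisier gauge would pull the ICC down and degrade the monitor class.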
statman
Super User

Re: Best JMP function/methodology to test whether a test method is capable of distinguishing performance between two Products

OK, thanks for elaborating. You do indeed appear to have a continuous measurement system.

"All models are wrong, some are useful" G.E.P. Box