
How to account for sampling variability when comparing outputs of two process methods


 

I have a powder batch (Batch A) with a known particle size distribution (PSD). From this batch, I draw two separate samples to feed into two different methods (Method A and Method B) of a process. I want to compare the outputs of both methods to see if they produce the same output PSD.

However, since the input powder batch A has a particle size distribution (range of particle sizes), each sample I take will only represent a subset of the overall distribution, which might affect the output results of both methods. Given this, how should I account for the differences in particle size between the samples when comparing the outputs of Methods A and B?

Should I adjust the results by considering the overall PSD of the batch (i.e., the mean and standard deviation of the particle size distribution) and somehow factor those into the output PSDs of the two methods?

Victor_G
Super User

Re: How to account for sampling variability when comparing outputs of two process methods

Hi @StratifiedGnu91,

 

Happy New Year 2025!

 

Your question involves a lot of reflection on the domain-expertise side, as well as some considerations on the statistical side.

Here are some points to consider:

  • Are the methods and instrumental measurements assessed and monitored? Are the measurements consistent and checked regularly against a standard?
    I would first make sure that the measurements and methods are reliable, and that any variation coming from these measurements is random rather than attributable to specific noise factors. An MSA (Measurement Systems Analysis) study can help assess the repeatability and reproducibility of the equipment and measurement method, create a monitoring control chart for detecting shifts or drifts in the measurement process, and quantify the measurement variation, so that you can use this number when evaluating and comparing measurements coming from the two methods.

  • What is the supplier's process for creating batches of powder? Can you obtain representative samples from different production times, involving different machines/production equipment, covering the whole specification range?
    This is something to discuss with the supplier, to make sure the quantity and quality of the samples are representative of past and future production. It's best to confirm with the supplier that they can provide samples coming from different sources and equipment, with results covering the specification range, so that both the lower and upper specification limits are investigated, and to verify that this specification range is consistent with your product specifications and measurement capabilities.

  • What is your objective, and what precision do you need when evaluating the overall measurement variability?
    From your description, it's hard to tell whether you want to show that the two methods are similar (for example, through equivalence tests) or different (through standard statistical testing), and on which basis: average value, variance, or both. Also, what overall measurement difference would you consider as equivalent/different for the methods and samples studied? And what precision do you want for this analysis? That can inform the number of samples to collect and measure. You can also use the Sample Size Explorers, like Margin of Error for Two Independent Sample Means or Margin of Error for Two Independent Sample Variances, depending on whether you're more interested in a difference of means or of variances between the two methods, to assess the number of samples required to detect a difference.
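To make the MSA idea above more concrete, here is a minimal Python sketch of splitting measurement variation into repeatability and method-to-method reproducibility. The data and the simple ANOVA-style decomposition are purely illustrative assumptions, not your actual gauge study; JMP's MSA platforms do this properly.

```python
import statistics as st

# Hypothetical MSA data: each sample measured 3 times with each method.
# Replace with your own gauge-study measurements (values are illustrative).
measurements = {
    ("sample1", "A"): [10.2, 10.4, 10.3],
    ("sample1", "B"): [10.6, 10.5, 10.7],
    ("sample2", "A"): [12.1, 12.0, 12.2],
    ("sample2", "B"): [12.4, 12.6, 12.5],
}

# Repeatability: pooled within-cell variance (same sample, same method).
within_vars = [st.variance(reps) for reps in measurements.values()]
repeatability_var = sum(within_vars) / len(within_vars)

# Reproducibility (method-to-method): variance of the per-method cell means
# for each sample, averaged over samples, minus the contribution of
# repeatability to those means (3 = replicates per cell).
samples = {key[0] for key in measurements}
between = []
for s in samples:
    means = [st.mean(v) for (smp, m), v in measurements.items() if smp == s]
    between.append(st.variance(means))
reproducibility_var = max(sum(between) / len(between) - repeatability_var / 3, 0.0)

print(f"repeatability SD:   {repeatability_var ** 0.5:.3f}")
print(f"reproducibility SD: {reproducibility_var ** 0.5:.3f}")
```

If the method-to-method component dominates the repeatability component, the two methods genuinely behave differently beyond pure measurement noise.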
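The margin-of-error idea can also be sketched outside the Sample Size Explorers. Below is a small Python sketch using a normal approximation; the measurement SD and target margin are placeholder assumptions you would replace with your own numbers.

```python
import math
from statistics import NormalDist

def n_per_group(sigma: float, margin: float, alpha: float = 0.05) -> int:
    """Samples per method so that the (1 - alpha) confidence interval on the
    difference of two independent means has half-width <= margin.
    Normal approximation: half-width = z * sigma * sqrt(2 / n)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return math.ceil(2 * (z * sigma / margin) ** 2)

# Illustrative numbers: measurement SD of 1.5 um, target +/-0.5 um on the
# mean difference between Method A and Method B.
print(n_per_group(sigma=1.5, margin=0.5))
```

Note how quickly the required sample count grows as the target margin shrinks: halving the margin quadruples the number of samples per method.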

For the statistical part, there are several options for analyzing your results:

  • From a global point of view (overall measurement variability), the Matched Pairs platform may help compare methods A and B on the same samples (if this is possible, i.e., for non-destructive measurements). This platform lets you perform equivalence tests on the same samples. There are also other parametric and non-parametric statistical tests (on means and variances) in the Fit Y by X platform (Oneway Analysis).
  • For your question regarding the outputs of methods A and B with respect to particle size, the Simulator could also help you visualize and simulate the variability of your complete measurement process. If you have a model linking particle size to the measurement response, you can specify a distribution for the particle size factor and add measurement variation to your response of interest.
    Here is an example on the Piepel dataset from JMP, where I added random uniform variation on the mixture factors and random noise on the measured Y response to simulate the overall variability in this optimal area:
    [Screenshot: Simulator output on the Piepel dataset with factor variation and response noise]

    Provided you have a model linking the measured Y response for each method to particle size, you can specify a particle size distribution and add random noise to the measured response to create a table in which you can further visualize and analyze the difference between the methods.
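The matched-pairs equivalence idea can be sketched as a TOST (two one-sided tests) procedure. This is a hypothetical Python sketch using a normal approximation (a t-based version would be more accurate at small n); the paired data and the equivalence margin delta are illustrative assumptions, and choosing delta is a domain decision, not a statistical one.

```python
from statistics import NormalDist, mean, stdev

def tost_paired(a, b, delta, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of paired measurements.
    Declares A and B equivalent if the mean difference is shown to lie
    within (-delta, +delta). Normal approximation for the test statistics."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    d_bar = mean(diffs)
    se = stdev(diffs) / n ** 0.5
    z_lo = (d_bar + delta) / se   # H0: mean difference <= -delta
    z_hi = (d_bar - delta) / se   # H0: mean difference >= +delta
    p_lo = 1 - NormalDist().cdf(z_lo)
    p_hi = NormalDist().cdf(z_hi)
    p = max(p_lo, p_hi)
    return p, p < alpha

# Illustrative paired PSD means (um) for the same samples run through each
# method; delta = 0.5 um is a made-up "practically irrelevant" difference.
a = [10.1, 12.3, 9.8, 11.5, 10.9, 12.0, 10.4, 11.1]
b = [10.0, 12.5, 9.9, 11.4, 11.0, 11.8, 10.5, 11.2]
p_value, equivalent = tost_paired(a, b, delta=0.5)
print(f"TOST p-value: {p_value:.4g}, equivalent: {equivalent}")
```

The key design choice in TOST is that failing to reject a difference is not the same as demonstrating equivalence; you must show the difference lies inside the margin you chose beforehand.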
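The Simulator-style reasoning can also be reproduced as a quick Monte Carlo sketch. All numbers below (the batch PSD, particles per sample, noise SD, and the simple "sample mean + bias + noise" measurement model) are assumptions for illustration only; the point is to see how large an apparent A-vs-B difference sampling plus measurement noise alone can generate.

```python
import random
import statistics as st

random.seed(1)  # reproducible draws

# Assumed batch PSD: mean 12 um, SD 3 um (illustrative, not your batch).
BATCH_MEAN, BATCH_SD = 12.0, 3.0

def draw_sample_psd(n_particles):
    """Draw one powder sample (a subset of the batch PSD)."""
    return [random.gauss(BATCH_MEAN, BATCH_SD) for _ in range(n_particles)]

def measure(method_bias, noise_sd, sample):
    """Hypothetical measurement model: true sample mean + method bias + noise."""
    return st.mean(sample) + method_bias + random.gauss(0.0, noise_sd)

# Simulate many paired experiments: draw a fresh sample from the batch for
# each method, measure, and record the apparent A - B difference. With both
# biases set to 0, the spread of diffs is pure sampling + measurement noise.
diffs = []
for _ in range(2000):
    out_a = measure(method_bias=0.0, noise_sd=0.3, sample=draw_sample_psd(200))
    out_b = measure(method_bias=0.0, noise_sd=0.3, sample=draw_sample_psd(200))
    diffs.append(out_a - out_b)

print(f"mean A-B difference: {st.mean(diffs):+.3f}")
print(f"SD of A-B difference: {st.stdev(diffs):.3f}")
```

If the difference you actually observe between Methods A and B falls well inside this simulated spread, sampling variability alone can explain it; only a difference well outside it points to a real method effect.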

 

I hope this discussion starter helps you,

 

Victor GUILLER

"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)