I am quite familiar with designing and executing crossed Gauge R&R studies within JMP. Recently, I was asked to perform a nested study because the process under observation is destructive. For background, we are trying to establish the effectiveness of an automated inspection system that measures the size of nanoliter-volume spots applied to a piece of plastic. The fluid volume is so low that each device must be inspected within a set timeframe, or else there is evaporative loss. The key issue, and where I seem to be having some difficulty, is that I want to generate parts at the process extremes to show that, regardless of spot size, the system provides repeatable and reproducible results. For example, I have data collected as follows:
Generated 135 devices.
The 135 devices were split into 3 groups of 45. Each group of 45 was measured by the automated inspection system under one of 3 lighting conditions (low, nominal, high).
Each group of 45 was further divided into 5 groups of 9 devices, with each group of 9 produced at a different point across the process specification range, i.e. out-of-tolerance small, small, nominal, large, out-of-tolerance large.
So, when I populate the Gauge R&R dialog in JMP, I do so in a way similar to a crossed study, i.e. Response = Spot Size, Grouping = Lighting Condition, Sample ID = specification-limit setting. If I do this and execute a Nested model analysis, JMP reports a terrible precision-to-tolerance ratio, because it apparently assumes that all parts sharing a Sample ID were produced at the same specification-limit setting.
I realize that much of what I just described could still be assessed with the crossed model, but that doesn't seem like it should be necessary. I imagine a typical use case for this platform would be several operators measuring a series of parts drawn from several batches that cover the process range. Fundamentally, can you only perform a nested Gauge study if all parts are homogeneous, regardless of batch? Sorry if this is more of a broad statistical question than a JMP-specific one.
When doing a destructive nested MSA design, you do need to make sure that all parts being treated as the same part (i.e. sharing a Part/Sample ID) are homogeneous. If I understand your design correctly, you have 9 parts for each Part/Sample ID, and those 9 parts need to be homogeneous. The analysis treats them as if they were the same part, even though that is not possible with destructive testing.
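A quick simulation makes the consequence concrete (Python sketch with made-up numbers; the tolerance, size settings, and noise levels below are all hypothetical, and this is a simplified pooled-SD calculation, not JMP's exact variance-component algorithm). Any genuine device-to-device spread within a Sample ID gets absorbed into the repeatability estimate, which inflates the precision-to-tolerance ratio:

```python
import numpy as np

rng = np.random.default_rng(1)
tolerance = 10.0                      # hypothetical spec width
size_means = [-6, -3, 0, 3, 6]        # the 5 deliberate size settings
light_fx = {"low": -0.2, "nom": 0.0, "high": 0.2}  # small lighting offsets

def pooled_repeatability(device_sd):
    """Pooled within-cell SD, i.e. what a nested model calls repeatability."""
    cell_sds = []
    for lf in light_fx.values():
        for m in size_means:
            # 9 destructively tested devices treated as one "part"
            devices = m + rng.normal(0, device_sd, 9)    # device-to-device spread
            meas = devices + lf + rng.normal(0, 0.5, 9)  # gauge noise, sd = 0.5
            cell_sds.append(meas.std(ddof=1))
    return np.sqrt(np.mean(np.square(cell_sds)))

for dsd in (0.0, 2.0):
    s = pooled_repeatability(dsd)
    print(f"device-to-device sd={dsd}: repeatability ~ {s:.2f}, P/T ~ {6*s/tolerance:.2f}")
```

With homogeneous devices (sd = 0) the repeatability estimate recovers the true gauge noise; with real device-to-device variation inside a Sample ID, the "repeatability" and hence P/T blow up even though the gauge itself has not changed.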
You also need to make sure that the variation is consistent across the 3 lighting conditions. That can be assessed by looking at the standard deviation chart and using the Homogeneity of Variance test.
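Outside JMP, the same check can be sketched with Levene's test, a common homogeneity-of-variance test (SciPy and simulated data here, purely for illustration; this is not necessarily the test JMP runs internally):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical spot-size measurements, 45 devices per lighting condition
low  = rng.normal(100, 0.5, 45)
nom  = rng.normal(100, 0.5, 45)
high = rng.normal(100, 1.5, 45)   # deliberately noisier condition

stat, p = stats.levene(low, nom, high)
print(f"Levene W = {stat:.2f}, p = {p:.4f}")
# A small p-value indicates the variances are not consistent across conditions
```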
Also, since you are measuring over a range of sizes, you might additionally consider a linearity study to see whether there are measurement biases that depend on the size of the part.
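The core of a linearity study can be sketched as a regression of bias (measured minus reference) on reference size; a slope significantly different from zero flags a size-dependent bias. Python sketch with hypothetical reference values and a simulated gauge that reads about 2% low at the large end:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# 5 reference sizes (hypothetical nL values), 9 devices each
reference = np.repeat([2.0, 4.0, 6.0, 8.0, 10.0], 9)
# Simulated gauge with a built-in linearity problem
measured = reference * 0.98 + 0.05 + rng.normal(0, 0.05, reference.size)
bias = measured - reference

fit = stats.linregress(reference, bias)
print(f"bias = {fit.intercept:.3f} + {fit.slope:.3f} * size  (p = {fit.pvalue:.3g})")
# Slope significantly below zero: the gauge under-reads larger spots
```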