Thank you Shampton82 for addressing my question. Your explanation matches what I am finding in some other forums on GR&R.
The purpose of my inquiry is to find a clear, mathematically backed explanation for why only one of the two methodologies within the ANOVA approach to GR&R should be used to decide whether a gauge is capable. JMP's practice of displaying both has concerned me for a very long time, because I have not seen a good argument for decisively choosing one over the other. Having two answers adds confusion and leaves it to the user to decide whether the gauge is capable; one could simply pick the better of the two numbers.
Expressing the results in the same units as the measurement may appeal to common sense, but it does not really make mathematical sense.
Don Wheeler (reference below) makes a good case that these are just ratios that cannot really be compared. Adding all the percentages of variation gives more than 100%, as you mentioned, which does not seem physically possible if the total variation is 100%. The reason is that standard deviations are not additive: the total standard deviation is the square root of the sum of the component variances, not the sum of the component standard deviations, so ratios of standard deviations do not sum to 100%. To properly compare ratios, and to indicate which one is a fraction of the total, each should be a percentage of 100, not a percentage of some other number.
Using variance components preserves the mathematical integrity, since variance components sum to 100% of the total variance. Wheeler also states that using standard deviations does not properly account for measurement error, while variance components do.
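To illustrate the point numerically, here is a minimal sketch with made-up variance components (the numbers are purely illustrative, not from any real study). It contrasts the variance-based %Contribution, which sums to exactly 100%, with the standard-deviation-based %Study Variation, which sums to more than 100% because standard deviations are not additive:

```python
import math

# Hypothetical variance components from an ANOVA GR&R study
# (illustrative numbers only, not from a real dataset)
components = {
    "Repeatability": 0.04,
    "Reproducibility": 0.02,
    "Part-to-Part": 0.94,
}

# Variances ARE additive, so the total variance is a simple sum
var_total = sum(components.values())

# %Contribution: ratio of variances -- sums to exactly 100%
pct_contribution = {k: 100 * v / var_total for k, v in components.items()}

# %Study Variation: ratio of standard deviations -- sums to MORE
# than 100%, because sqrt(a + b) != sqrt(a) + sqrt(b)
sd_total = math.sqrt(var_total)
pct_study_var = {k: 100 * math.sqrt(v) / sd_total for k, v in components.items()}

print(f"Sum of %Contribution:    {sum(pct_contribution.values()):.1f}")  # 100.0
print(f"Sum of %Study Variation: {sum(pct_study_var.values()):.1f}")     # ~131.1
```

With these numbers the standard-deviation ratios sum to about 131%, which is exactly the kind of over-100% total that makes the ratios impossible to interpret as fractions of a whole.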
This leads me to conclude that comparing variance components is the more appropriate approach when completing a GR&R using the ANOVA method.
I would be interested in your and anyone else's ideas, as there seems to be a diversity of opinions but no clear, rigorous definition.
-Jens Riege
Donald Wheeler, "Problems with Gauge R&R Studies: How to Make Sense of Your R&R Value"
https://www.spcpress.com/pdf/DJW223.pdf