
Inter and Intra Observer Reliability/Agreement with JMP



I am part of a study looking at 4 clinicians' ability to make a continuous-variable measurement with 5 different techniques. We have a gold standard: a direct measurement of the parameter of interest.

I have generated a delta (the actual measurement minus the clinician's measurement) for each data point.

I am trying to figure out how to first look at each clinician's measurement error with each of the 5 techniques. I was thinking of just using Distribution and getting a p-value for the mean delta (compared to a mean of 0, which a perfect measurement would give) to assess each clinician's 'skill' at each technique.

Is that appropriate?
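Outside JMP, the bias check described above can be sketched as a one-sample t-test on the deltas. This is a minimal Python/SciPy sketch with made-up numbers; note that a significant mean delta indicates systematic bias, while the spread (SD) of the deltas speaks to precision:

```python
import numpy as np
from scipy import stats

# Hypothetical deltas (actual minus clinician's measurement) for one
# clinician using one technique -- replace with your own column.
deltas = np.array([0.8, 1.1, 0.5, 1.3, 0.9, 0.7, 1.0, 1.2])

# One-sample t-test: is the mean delta different from 0?
# A significant result suggests systematic bias, not (im)precision.
t_stat, p_value = stats.ttest_1samp(deltas, popmean=0.0)
print(f"mean delta = {deltas.mean():.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")

# The standard deviation of the deltas summarizes the clinician's
# precision (repeatability) with that technique.
print(f"SD of deltas = {deltas.std(ddof=1):.2f}")
```

In JMP itself, the equivalent is Analyze > Distribution on the delta column, then Test Mean with a hypothesized value of 0.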

Then I want to get a sense of intraobserver variability/agreement (are these the same thing?) for each clinician using each technique.

What test would I run to get that agreement (ICC, CCC?), and would I be able to do that in JMP?
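For reference, the ICC most often used for absolute agreement with multiple raters (or repeated measurements by one rater) is ICC(2,1) from Shrout and Fleiss, computed from a two-way ANOVA decomposition. Here is a minimal NumPy sketch of that formula; the input layout (subjects in rows, raters or repeats in columns) is an assumption you would match to your own data:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, single measure, absolute agreement.

    ratings: array of shape (n_subjects, k_raters). For intraobserver
    reliability, columns are repeated measurements by one clinician;
    for interobserver reliability, columns are the different clinicians.
    """
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)   # per-subject means
    col_means = Y.mean(axis=0)   # per-rater means

    # Two-way ANOVA sums of squares
    ssr = k * np.sum((row_means - grand) ** 2)   # between subjects
    ssc = n * np.sum((col_means - grand) ** 2)   # between raters
    sse = np.sum((Y - row_means[:, None] - col_means[None, :] + grand) ** 2)

    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: a constant rater offset lowers ICC(2,1),
# because it measures absolute agreement, not just consistency.
icc = icc_2_1([[1, 2], [2, 3], [3, 4], [4, 5]])
print(f"ICC(2,1) = {icc:.3f}")
```

In JMP, the Variability / Attribute Gauge platform mentioned in the reply below reports related variance components, which is the usual route for this kind of analysis there.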


Then I was thinking we could evaluate each technique across the 4 clinicians to get the interobserver reliability for each technique.

Is that doable with JMP? Is that also ICC or CCC?


Finally, I was thinking about pooling all the clinicians together for each technique and then measuring the agreement between the 5 different techniques.

I would also like to compare each of those techniques against our gold standard (the direct measurement) to get a sense of the 'best' technique.

Is that doable in JMP?
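A common way to compare each technique against a gold standard is a Bland-Altman analysis: the mean difference (bias) plus 95% limits of agreement. A minimal Python sketch with hypothetical numbers:

```python
import numpy as np

def bland_altman_stats(gold, measured):
    """Return (bias, lower limit, upper limit) for a Bland-Altman analysis.

    bias = mean of (measured - gold); the 95% limits of agreement are
    bias +/- 1.96 * SD of the differences.
    """
    gold = np.asarray(gold, dtype=float)
    measured = np.asarray(measured, dtype=float)
    diffs = measured - gold
    bias = diffs.mean()
    sd = diffs.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical gold-standard vs. one technique's pooled measurements
bias, lo, hi = bland_altman_stats([10, 20, 30, 40], [11, 19, 32, 40])
print(f"bias = {bias:.2f}, 95% limits of agreement: ({lo:.2f}, {hi:.2f})")
```

The technique with the smallest bias and the narrowest limits of agreement against the direct measurement would be a reasonable candidate for 'best'. In JMP, the same differences can be computed in a formula column and plotted with Graph Builder.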


Sorry, this is a lot; any help is appreciated.

My stats knowledge is pretty slim.


Re: Inter and Intra Observer Reliability/Agreement with JMP

To help get started, take a look at 'Help > Books > Quality and Process Methods', then review Chapter 9 ('Variability Gauge Charts') and/or Chapter 10 ('Attribute Gauge Charts').
