<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Inter and Intra Observer Reliability/Agreement with JMP in Discussions</title>
    <link>https://community.jmp.com/t5/Discussions/Inter-and-Intra-Observer-Reliability-Agreement-with-JMP/m-p/86247#M38454</link>
    <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am part of a study that is looking at 4 clinicians abilility to make a continuous variable measurement with 5 different techniques. We have a gold standard that is a direct measurement of the parameter of interest.&lt;/P&gt;&lt;P&gt;I have generated a delta (the actual measurement minus the clinicians measurement) for each data point.&lt;/P&gt;&lt;P&gt;I am trying to figure out how to first look at each clinicians intraobserver variability using each of the 5 techniques. I was thinking if I just using distribution and then getting a p value for the mean delta&amp;nbsp;(compared to mean of 0 if measurement was perfect) to assess each clinician's 'skill' at each technique.&lt;/P&gt;&lt;P&gt;is that apropriate?&lt;/P&gt;&lt;P&gt;Then I want to get a sense of intraobserver variability/agreement (are these the same thing?) for each clinician using each technique.&lt;/P&gt;&lt;P&gt;What test would I run to get that agreement (ICC, CCC?) and would i be able to do that in JMP?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;then I was thinking we could evaluate each technique across the 4 clincians to get the interobserver relaibility for each technique.&lt;/P&gt;&lt;P&gt;is that doable with JMP? Is that also ICC or CCC?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Finally, I was thinking about pooling all the clinicians together per each tehcnique and then measuring the agreeement between the 5 different techniques.&lt;/P&gt;&lt;P&gt;I would also like to compare each of tho technuiques against our gold standard ( the direct measurement) to get a sense of the 'best' technique.&lt;/P&gt;&lt;P&gt;is that doable in JMP?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Sorry this is a lot, any help is appreciated.&lt;/P&gt;&lt;P&gt;My stats knowlege is pretty slim&lt;/P&gt;</description>
    <pubDate>Wed, 19 Dec 2018 22:49:37 GMT</pubDate>
    <dc:creator>DavidI</dc:creator>
    <dc:date>2018-12-19T22:49:37Z</dc:date>
    <item>
      <title>Inter and Intra Observer Reliability/Agreement with JMP</title>
      <link>https://community.jmp.com/t5/Discussions/Inter-and-Intra-Observer-Reliability-Agreement-with-JMP/m-p/86247#M38454</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am part of a study that is looking at 4 clinicians abilility to make a continuous variable measurement with 5 different techniques. We have a gold standard that is a direct measurement of the parameter of interest.&lt;/P&gt;&lt;P&gt;I have generated a delta (the actual measurement minus the clinicians measurement) for each data point.&lt;/P&gt;&lt;P&gt;I am trying to figure out how to first look at each clinicians intraobserver variability using each of the 5 techniques. I was thinking if I just using distribution and then getting a p value for the mean delta&amp;nbsp;(compared to mean of 0 if measurement was perfect) to assess each clinician's 'skill' at each technique.&lt;/P&gt;&lt;P&gt;is that apropriate?&lt;/P&gt;&lt;P&gt;Then I want to get a sense of intraobserver variability/agreement (are these the same thing?) for each clinician using each technique.&lt;/P&gt;&lt;P&gt;What test would I run to get that agreement (ICC, CCC?) and would i be able to do that in JMP?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;then I was thinking we could evaluate each technique across the 4 clincians to get the interobserver relaibility for each technique.&lt;/P&gt;&lt;P&gt;is that doable with JMP? Is that also ICC or CCC?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Finally, I was thinking about pooling all the clinicians together per each tehcnique and then measuring the agreeement between the 5 different techniques.&lt;/P&gt;&lt;P&gt;I would also like to compare each of tho technuiques against our gold standard ( the direct measurement) to get a sense of the 'best' technique.&lt;/P&gt;&lt;P&gt;is that doable in JMP?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Sorry this is a lot, any help is appreciated.&lt;/P&gt;&lt;P&gt;My stats knowlege is pretty slim&lt;/P&gt;</description>
      <pubDate>Wed, 19 Dec 2018 22:49:37 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Inter-and-Intra-Observer-Reliability-Agreement-with-JMP/m-p/86247#M38454</guid>
      <dc:creator>DavidI</dc:creator>
      <dc:date>2018-12-19T22:49:37Z</dc:date>
    </item>
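The ICC asked about in the post above can also be sketched outside JMP, directly from one-way ANOVA mean squares. A minimal illustration in Python (NumPy only; the data values are invented for demonstration), computing ICC(1,1) for a single clinician-technique cell where rows are subjects and columns are that clinician's repeated measurements:

```python
import numpy as np

def icc_1_1(ratings):
    """One-way random-effects ICC(1,1): agreement of single ratings.

    ratings: 2-D array, rows = subjects, columns = repeated measurements
    (e.g. one clinician's repeated readings with one technique).
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)
    # Between-subject and within-subject mean squares (one-way ANOVA)
    ms_between = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical example: 5 subjects, 3 repeated measurements each
data = np.array([[9.1, 9.3, 9.2],
                 [7.0, 6.8, 7.1],
                 [8.5, 8.6, 8.4],
                 [6.2, 6.1, 6.3],
                 [7.9, 8.0, 7.8]])
print(round(icc_1_1(data), 3))  # close to 1: repeats agree well
```

ICC(1,1) treats the repeats as a one-way random effect; other ICC forms (two-way, consistency vs. agreement) change the mean squares used, so which form matches the study design is worth checking before reporting a number.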
    <item>
      <title>Re: Inter and Intra Observer Reliability/Agreement with JMP</title>
      <link>https://community.jmp.com/t5/Discussions/Inter-and-Intra-Observer-Reliability-Agreement-with-JMP/m-p/86289#M38474</link>
      <description>&lt;P&gt;To help get started take a look at 'Help &amp;gt; Books &amp;gt; Quality and Process Methods', then review Chapter 9 ('Variability Gauge Charts') and/or Chapter 10 ('Attribute Gauge Charts').&lt;/P&gt;</description>
      <pubDate>Thu, 20 Dec 2018 17:07:11 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Inter-and-Intra-Observer-Reliability-Agreement-with-JMP/m-p/86289#M38474</guid>
      <dc:creator>ian_jmp</dc:creator>
      <dc:date>2018-12-20T17:07:11Z</dc:date>
    </item>
  </channel>
</rss>

