<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Failing Normality Test in Discussions</title>
    <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191242#M41024</link>
    <description>&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;After doing some research on the topic I decided to calculate the Ranks of each dataset and perform the Fit Y by X using the Ranks (continuous) as the Y values and the labels (nominal) of each dataset as the X. Then I proceeded by performing a test of equal variances--since equal variances are necessary for all non-parametric tests. After establishing that my variances are equal (the original data is not only non-normal but also fails the test for equal variances), I performed a Kruskal-Wallis Test and finished off with both the Steel and Dunn post hoc tests. At this point I think the statistical approach makes sense and the conclusions of the tests are in line with real-life observations.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can anyone advise if the above methodology is the correct approach from the statistical point of view? And as a final question, in terms of post hoc tests, what is considered to be standard practice by statisticians: Steel or Dunn? Both yield the same ranking and reach the same conclusion. I believe both correct for inflated Type I error when comparing pairs.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you so much for all the input!&lt;/P&gt;</description>
    <pubDate>Thu, 04 Apr 2019 03:22:22 GMT</pubDate>
    <dc:creator>garibay90</dc:creator>
    <dc:date>2019-04-04T03:22:22Z</dc:date>
    <item>
      <title>Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/190841#M40992</link>
      <description>&lt;P&gt;Dear all,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am trying to perform a normality test on a series of datasets similar to the one shown in the figure attached, and later compare them using ANOVA. However, when I perform the normality test, I keep getting what I consider to be a ‘false negative.’ As you can see from the screenshot of the journal, the dataset as a whole resembles a normal distribution, even though there are only about 7-8 bins being populated. My best guess is that because there are ‘so many’ empty bins in between the populated ones, JMP is interpreting the dataset as multimodal. I would like it to perform the fit/test on the overall set as shown graphically.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;How JMP interprets the distribution of the data also affects the conclusions drawn from ANOVA. I would be very grateful if anyone could provide some insight as to how to perform the statistical analysis on these data (i.e. test each dataset for normality and then compare them using ANOVA).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Picture1.png" style="width: 999px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/16723iC1B4188600B0B495/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture1.png" alt="Picture1.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 02 Apr 2019 20:49:23 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/190841#M40992</guid>
      <dc:creator>garibay90</dc:creator>
      <dc:date>2019-04-02T20:49:23Z</dc:date>
    </item>
    <item>
      <title>Re: Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/190852#M40993</link>
      <description>&lt;P&gt;You have a large sample so these tests are extremely sensitive to a departure from (perfect) normality.&amp;nbsp;The observations, though, at either end depart from linearity in the normal quantile plot, indicating that the distribution is right-skewed. Why do you think that there is no departure?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Is this example one of the samples to be analyzed in the ANOVA? How do the other samples look?&lt;/P&gt;</description>
      <pubDate>Tue, 02 Apr 2019 20:59:31 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/190852#M40993</guid>
      <dc:creator>Mark_Bailey</dc:creator>
      <dc:date>2019-04-02T20:59:31Z</dc:date>
    </item>
    <item>
      <title>Re: Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/190854#M40995</link>
      <description>&lt;P&gt;Thank you for the quick response. While I agree the distribution is skewed, I do not expect it to completely fail the normality test given that the set looks somewhat normally distributed--but yes, I do not have a quantitative way to justify that at the moment. As I recall, I also tried fewer points in each bin and still got a failed normality test. The other datasets also look the same.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Basically, I'm trying to compare these sets and determine which ones are different when compared to an ideal-case scenario. I'm open to other alternatives. ANOVA is the approach I'm familiar with.&lt;/P&gt;</description>
      <pubDate>Tue, 02 Apr 2019 21:19:45 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/190854#M40995</guid>
      <dc:creator>garibay90</dc:creator>
      <dc:date>2019-04-02T21:19:45Z</dc:date>
    </item>
    <item>
      <title>Re: Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/190870#M40999</link>
      <description>&lt;P&gt;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/14426"&gt;@garibay90&lt;/a&gt;&amp;nbsp; your data may also be resolution-limited, as evidenced by the apparent "gaps" between the bars in your bar chart, and even more clearly by the "chattered" pattern of the points in your normal probability plot.&amp;nbsp; This data doesn't look particularly well behaved for ANOVA, unless this data set is comprised of multiple groups (samples)!&amp;nbsp; In that case you need to run the normality assessment separately per group first.&amp;nbsp; And&amp;nbsp;&lt;EM&gt;then &lt;/EM&gt;you can run the ANOVA.&amp;nbsp; But even when you run the ANOVA, as mentioned, your analysis will be 'statistically biased' by your extremely large sample size.&amp;nbsp; Hypothesis tests in general (including ANOVA) have much higher power to reject the null hypothesis as sample size increases.&amp;nbsp; For ANOVA, you are testing the null hypothesis that the treatment offset for each group mean (from the grand mean) is the same for all treatment groups.&amp;nbsp; If you reject the null, then all you can assert is that&amp;nbsp;&lt;EM&gt;at least one difference&lt;/EM&gt; in the treatment offset between groups is observed.&amp;nbsp; Note: another central assumption here is that the variance is the same across groups.&amp;nbsp; You can verify this assumption by using Fit Y by X &amp;gt; Unequal Variances, and looking at the p-values associated with the hypothesis tests which JMP automatically produces.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Principally though, ANOVA is a test comparing the means between groups (taking into account the variance within groups, which is assumed to be equal from group to group).&lt;/P&gt;</description>
      <pubDate>Wed, 03 Apr 2019 05:59:52 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/190870#M40999</guid>
      <dc:creator>PatrickGiuliano</dc:creator>
      <dc:date>2019-04-03T05:59:52Z</dc:date>
    </item>
    <item>
      <title>Re: Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/190906#M41005</link>
      <description>&lt;P&gt;Your data is partitioned into bins as shown by the normal probability plot. Did you create this binning arrangement or did the data come to you in this binned fashion? If you created the binning have you tried using the raw data, before binning?&lt;/P&gt;</description>
      <pubDate>Wed, 03 Apr 2019 14:09:34 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/190906#M41005</guid>
      <dc:creator>P_Bartell</dc:creator>
      <dc:date>2019-04-03T14:09:34Z</dc:date>
    </item>
    <item>
      <title>Re: Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/190927#M41008</link>
      <description>&lt;P&gt;Thank you! Yes, I agree with what you are saying. This is only one dataset. I have multiple ones like the one shown in the figure.&lt;/P&gt;</description>
      <pubDate>Wed, 03 Apr 2019 14:22:39 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/190927#M41008</guid>
      <dc:creator>garibay90</dc:creator>
      <dc:date>2019-04-03T14:22:39Z</dc:date>
    </item>
    <item>
      <title>Re: Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/190929#M41009</link>
      <description>&lt;P&gt;The data are not binned. It is actually a collection of numbers, and JMP is doing the binning. Attached is another figure of the journal that shows the distribution of the populated bins clearly.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Picture2.png" style="width: 999px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/16730iA651AFE2289D886C/image-size/large?v=v2&amp;amp;px=999" role="button" title="Picture2.png" alt="Picture2.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 03 Apr 2019 14:28:44 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/190929#M41009</guid>
      <dc:creator>garibay90</dc:creator>
      <dc:date>2019-04-03T14:28:44Z</dc:date>
    </item>
    <item>
      <title>Re: Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191008#M41013</link>
      <description>&lt;P&gt;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/14426"&gt;@garibay90&lt;/a&gt;&amp;nbsp; I agree with&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/14122"&gt;@P_Bartell&lt;/a&gt;&amp;nbsp;, but when you say "&lt;SPAN&gt;It is actually a collection of numbers and JMP is doing the binning" you are right, but at the same time not completely right.&amp;nbsp; Yes, the histogram does the automatic binning for you, and you can adjust that binning dynamically by using the "grabber" tool.&amp;nbsp; BUT the data is the data, and the data shows, at some 'reasonable' bin size--and quite strongly corroborated by the N-Q plot--that the data is binned; that is, in my parlance, "partitioned" into separate "groups."&amp;nbsp; My word choice is probably not correct statistical language, but hopefully we are on the same page.&amp;nbsp; I believe what&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/14122"&gt;@P_Bartell&lt;/a&gt;&amp;nbsp;is saying is: you need to address the question of "why is your data structured like this?"&amp;nbsp; Answering this question is of principal importance before running ANOVA.&amp;nbsp; The nature and context of your data need to be understood first; then you can apply various inferential statistical techniques to test various hypotheses (e.g. on the mean, the variance, etc.).&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 03 Apr 2019 16:40:06 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191008#M41013</guid>
      <dc:creator>PatrickGiuliano</dc:creator>
      <dc:date>2019-04-03T16:40:06Z</dc:date>
    </item>
    <item>
      <title>Re: Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191010#M41014</link>
      <description>&lt;P&gt;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/10483"&gt;@PatrickGiuliano&lt;/a&gt;&amp;nbsp;Thanks! Now I understand what you guys are getting at. In that case, yes, the raw data generated are binned. Basically, my output is two columns: the bin and the counts. Not sure there is a way around this one. However, I went ahead and wrote a script to generate as many values of each bin as listed in the counts column. This is the set of numbers I am later inputting into JMP to perform the normality test. I was hoping JMP would generate a different set of bins so my distribution would appear/be interpreted as more continuous. I can attach an example dataset if needed. Thanks for all the input!&lt;/P&gt;</description>
      <pubDate>Wed, 03 Apr 2019 16:51:59 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191010#M41014</guid>
      <dc:creator>garibay90</dc:creator>
      <dc:date>2019-04-03T16:51:59Z</dc:date>
    </item>
    <item>
      <title>Re: Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191012#M41015</link>
      <description>Without looking at your data, I'd suggest running your analysis on the raw&lt;BR /&gt;data first. Your sample size getting larger isn't helping you with the&lt;BR /&gt;normality test. And unless your sample size is extremely small (but not too&lt;BR /&gt;small), and your assumptions about the nature of your simulation are valid for&lt;BR /&gt;your experiment, trying to simulate values won't help you draw valid&lt;BR /&gt;inferences from your hypothesis tests.&lt;BR /&gt;&lt;BR /&gt;I'd be happy to look at your data! Just make sure you remove anything that&lt;BR /&gt;might be considered proprietary. :)&lt;BR /&gt;</description>
      <pubDate>Wed, 03 Apr 2019 17:06:02 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191012#M41015</guid>
      <dc:creator>PatrickGiuliano</dc:creator>
      <dc:date>2019-04-03T17:06:02Z</dc:date>
    </item>
    <item>
      <title>Re: Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191013#M41016</link>
      <description>&lt;P&gt;Thank you. Here is the example raw data and the data generated using the script.&lt;/P&gt;</description>
      <pubDate>Wed, 03 Apr 2019 17:18:07 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191013#M41016</guid>
      <dc:creator>garibay90</dc:creator>
      <dc:date>2019-04-03T17:18:07Z</dc:date>
    </item>
    <item>
      <title>Re: Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191229#M41019</link>
      <description>&lt;P&gt;I would try graphing the data as well. Put the column shown in the Distribution on Y, and whatever you are planning for the X in ANOVA on X, in Graph Builder. If you notice the data clustering in both dimensions, then you do not have continuous data, and testing for normality does not make sense, nor does using ANOVA in Fit Model. It would be more likely that you would use Fit Model with a different distribution (using Generalized Linear Models) to perform your ANOVA.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Honestly, it looks like the data is categorical and not continuous. Kind of like a rating response.&lt;/P&gt;</description>
      <pubDate>Wed, 03 Apr 2019 20:49:48 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191229#M41019</guid>
      <dc:creator>Chris_Kirchberg</dc:creator>
      <dc:date>2019-04-03T20:49:48Z</dc:date>
    </item>
    <item>
      <title>Re: Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191230#M41020</link>
      <description>&lt;P&gt;Thanks!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For the ANOVA portion I stacked the data and used the label for each dataset as the X, with Y being each collection of numbers like the one I attached above. I was looking at the data a little more, and Dunnett's Method looks promising, as the Difference Matrix can tell me how different the datasets are. However, the R-squared listed under 'Summary of Fit' is still very low (0.002).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I also tried using Fit Model with a Generalized Linear Model personality, Poisson Distribution and Log link function, but the Prob&amp;gt;ChiSq is still less than 0.05. It works the same if I use a Normal distribution.&lt;/P&gt;</description>
      <pubDate>Wed, 03 Apr 2019 21:44:24 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191230#M41020</guid>
      <dc:creator>garibay90</dc:creator>
      <dc:date>2019-04-03T21:44:24Z</dc:date>
    </item>
    <item>
      <title>Re: Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191242#M41024</link>
      <description>&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;After doing some research on the topic I decided to calculate the Ranks of each dataset and perform the Fit Y by X using the Ranks (continuous) as the Y values and the labels (nominal) of each dataset as the X. Then I proceeded by performing a test of equal variances--since equal variances are necessary for all non-parametric tests. After establishing that my variances are equal (the original data is not only non-normal but also fails the test for equal variances), I performed a Kruskal-Wallis Test and finished off with both the Steel and Dunn post hoc tests. At this point I think the statistical approach makes sense and the conclusions of the tests are in line with real-life observations.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can anyone advise if the above methodology is the correct approach from the statistical point of view? And as a final question, in terms of post hoc tests, what is considered to be standard practice by statisticians: Steel or Dunn? Both yield the same ranking and reach the same conclusion. I believe both correct for inflated Type I error when comparing pairs.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you so much for all the input!&lt;/P&gt;</description>
      <pubDate>Thu, 04 Apr 2019 03:22:22 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191242#M41024</guid>
      <dc:creator>garibay90</dc:creator>
      <dc:date>2019-04-04T03:22:22Z</dc:date>
    </item>
    <item>
      <title>Re: Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191249#M41025</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/14426"&gt;@garibay90&lt;/a&gt;&amp;nbsp;, I'm not sure about the tests you are running; I haven't read much, if anything, about them.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I tried fitting your "X" and "Y" columns of data&amp;nbsp;using the Life Distribution platform, and the best JMP could come up with there, strictly on the basis of Likelihood, AICc and BIC, is Lognormal and Zero Inflated Weibull, respectively.&amp;nbsp; Neither fit is good in my mind: use your eyes to look at the extent to which the points follow a coherent linear path on the transformed N-Q plots.&amp;nbsp; If the points don't fall pretty close to the straight line and/or exhibit curvature, then the transformation is not a very good fit.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Based on my understanding, if you want to run a 2-group comparison using a standard normal parametric statistical hypothesis test (such as ANOVA, which equals the t-test in the 2-sample case), then you would need to apply the same transformation to normality for both groups.
And the data has to be reasonably normal after you've applied that transformation to both.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;JMP's SHASH distribution may be a good fit for your "X" column, actually the best I could find (you can use Analyze &amp;gt; Distribution &amp;gt; Fit All and JMP will generate a fit to this one for you).&amp;nbsp; But it won't give you a good fit for your "Y" column.&amp;nbsp; Actually, in my experience, you will be very hard pressed to find&amp;nbsp;&lt;EM&gt;any transformation&amp;nbsp;&lt;/EM&gt;which will validly fit this data.&amp;nbsp; You have too many "0.0000" values in your dataset.&amp;nbsp; I've dealt with this issue before, and in the end you really have no choice but to run a non-parametric analysis.&amp;nbsp; With data like "Y" I wouldn't trust the inferences I make from the non-parametric tests unless I were reasonably confident that the estimates I am trying to infer about are reasonably representative of the population from which the sample came.&amp;nbsp; For example, if I am running a Wilcoxon test, which looks at the score difference between groups, can I trust that the confidence interval of the score mean difference is really representative of the population mean difference?&amp;nbsp; If I choose a test comparing the medians, do I have reason to think that the estimate of the median from my sample in "Y", and the corresponding confidence interval around the estimate, is representative of where the true estimate could be in the population?&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am not aware of a non-parametric test correction for extremely unequal variances.&amp;nbsp; But I could be wrong.&amp;nbsp; For the normal parametric tests, there is a standard correction for unequal variances, but the correction is based on an estimate and is not an exact solution.
Look up the “Behrens-Fisher problem” on Google.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you go with the SHASH distribution to transform your data (which I don't&amp;nbsp;think works&amp;nbsp;particularly well at all for your “Y” data), then you get evidence of equal variance between groups.&amp;nbsp; You also get evidence that there is a statistically significant difference between the group means, where the “Y” column mean is 0.62 units lower than the “X” column mean.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Untitled3.png" style="width: 999px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/16747i9DDB5421B2225FCF/image-size/large?v=v2&amp;amp;px=999" role="button" title="Untitled3.png" alt="Untitled3.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Just taking a step back here and looking at these groups, despite the fact that you have “unruly” data, it's pretty clear to me that these groups are behaving numerically differently. &amp;nbsp;You are getting a bunch of 0 values for “Y” but not for your “X” group.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Untitled1.png" style="width: 658px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/16748i1F2CC31798B3771C/image-size/large?v=v2&amp;amp;px=999" role="button" title="Untitled1.png" alt="Untitled1.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Untitled2.png" style="width: 401px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/16749iF14DFAD66BCE5DCC/image-dimensions/401x1229?v=v2" width="401" height="1229" role="button" title="Untitled2.png" alt="Untitled2.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 04 Apr 2019 05:01:03 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191249#M41025</guid>
      <dc:creator>PatrickGiuliano</dc:creator>
      <dc:date>2019-04-04T05:01:03Z</dc:date>
    </item>
    <item>
      <title>Re: Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191316#M41041</link>
      <description>&lt;P&gt;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/10483"&gt;@PatrickGiuliano&lt;/a&gt;&amp;nbsp;Thank you so much for taking the time to look at my data in so much detail. I checked, and unfortunately I do not have the SHASH fit available--I'm working with JMP 10. However, I do agree that at this point a non-parametric approach for mean comparison is probably my best bet.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;"&lt;SPAN&gt;For example, if I am running a Wilcoxon test which looks at the score difference between groups, can I trust that the confidence interval of the score mean difference is really representative of the population mean difference?&amp;nbsp; If I choose a test comparing the medians, do I have reason to think that the estimate of the median from my sample in "Y" and the corresponding confidence interval around the estimate, is representative of where the true estimate could be in the population?" Good question. I am new to non-parametric tests, so I'm not quite sure about this, but as I understand it, what matters is the order of the ranks relative to each other and not so much the actual numerical value. If someone could clarify this, that would be awesome.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;The two columns behave differently because the raw data is a histogram, where X (Column 1) are the bins and Y (Column 2) are the counts in each bin. Hence the number of&amp;nbsp;"0.0000" values in the dataset.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Thank you so much for all your input!&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 04 Apr 2019 18:21:09 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191316#M41041</guid>
      <dc:creator>garibay90</dc:creator>
      <dc:date>2019-04-04T18:21:09Z</dc:date>
    </item>
    <item>
      <title>Re: Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191344#M41043</link>
      <description>&lt;P&gt;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/5358"&gt;@Mark_Bailey&lt;/a&gt;&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/3911"&gt;@Chris_Kirchberg&lt;/a&gt;&amp;nbsp;Would you guys be able to verify that the methodology listed above makes sense and is in line with standard statistical practices? Thank you! P.S. I'm referring to my post above Patrick's.&lt;/P&gt;</description>
      <pubDate>Thu, 04 Apr 2019 20:07:51 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191344#M41043</guid>
      <dc:creator>garibay90</dc:creator>
      <dc:date>2019-04-04T20:07:51Z</dc:date>
    </item>
    <item>
      <title>Re: Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191357#M41048</link>
      <description>&lt;P&gt;I may not endorse, verify, or validate a method used by someone else.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I think that you will generally receive helpful suggestions here, but you must decide if the advice is appropriate in your case.&lt;/P&gt;</description>
      <pubDate>Thu, 04 Apr 2019 21:00:56 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191357#M41048</guid>
      <dc:creator>Mark_Bailey</dc:creator>
      <dc:date>2019-04-04T21:00:56Z</dc:date>
    </item>
    <item>
      <title>Re: Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191358#M41049</link>
      <description>&lt;P&gt;I understand. Thank you!&lt;/P&gt;</description>
      <pubDate>Thu, 04 Apr 2019 21:16:42 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/191358#M41049</guid>
      <dc:creator>garibay90</dc:creator>
      <dc:date>2019-04-04T21:16:42Z</dc:date>
    </item>
    <item>
      <title>Re: Failing Normality Test</title>
      <link>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/210461#M42146</link>
      <description>&lt;P&gt;Using &lt;A href="https://community.jmp.com/t5/user/viewprofilepage/user-id/14426" target="_blank"&gt;@garibay90&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;'s data,&amp;nbsp;&lt;/SPAN&gt;I used Fit Y by X instead of Graph Builder because the platform offers more options for coloring and changing the contour lines.&amp;nbsp; I also discovered this "Mesh Plot" option, which looks like a bivariate histogram.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Graph of Y vs X.png" style="width: 857px;"&gt;&lt;img src="https://community.jmp.com/t5/image/serverpage/image-id/17421i372BC63A30B414CB/image-size/large?v=v2&amp;amp;px=999" role="button" title="Graph of Y vs X.png" alt="Graph of Y vs X.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 24 May 2019 06:34:09 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Failing-Normality-Test/m-p/210461#M42146</guid>
      <dc:creator>PatrickGiuliano</dc:creator>
      <dc:date>2019-05-24T06:34:09Z</dc:date>
    </item>
  </channel>
</rss>

