
Discrepancy between graph and Dunnett's test for significance


Dec 9, 2016 4:18 PM
(1589 views)

Hi, I'm a bit stuck. Visually, my graph suggests that there is a significant difference between my control group and the group in red (when the error bars show standard error), but the Means/Anova and Dunnett's post-hoc tests report no significance (although it is close).

When the graph is altered to show error bars for the standard deviation, the apparent significance disappears, but I would like to use the standard error option.

Or is there a statistical explanation for this that I don't quite understand? If so, I would greatly appreciate the explanation.

2 REPLIES


Dec 13, 2016 5:49 PM
(1499 views)
| Posted in reply to message from Aras_Christine 12/09/2016 07:18 PM

(Sorry for the delay!)

Your example is a classic case of "test enough null hypotheses and you will find a significant comparison!"

The more comparisons you make in the same data (A vs B, A vs C, et cetera), the more likely you are to make a type I error. The bar chart represents the individual, separate comparisons, which do not protect you. The bar chart comparison is naive and misleading. It isn't wrong; it just can't answer the more complex question about all the comparisons taken together.
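To see how quickly the risk grows, here is a small sketch of the familywise error rate for k independent comparisons, each tested at a 5% level (real comparisons against a shared control are correlated, so this slightly overstates the rate, but the trend is the point):

```python
# Probability of at least one type I error across k independent
# comparisons, each tested at alpha = 0.05.
alpha = 0.05
for k in (1, 5, 10, 20):
    familywise = 1 - (1 - alpha) ** k
    print(f"{k:2d} comparisons -> P(>=1 false positive) = {familywise:.3f}")
```

With 10 comparisons the chance of at least one false positive is already about 40%, which is why an adjusted procedure like Dunnett's is needed.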

On the other hand, the Compare Means procedures, such as Dunnett's test, adjust the criterion for the number of comparisons to protect you. That is, if you want 95% confidence across **all** the comparisons, not just with each individual comparison, then you need adjustment. The effect of this protection is that a larger difference is necessary for the comparison to be significant. You should believe the Dunnett's comparisons.

Consider this gedanken experiment: generate random data under the null hypothesis for a large number of groups, then test the hypotheses both with the bar chart approach (the same as Compare Means > Student's t) and with Dunnett's test. The number of type I errors (false positives) is much higher with the chart, but is what you would expect with Dunnett's test.
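The gedanken experiment above can be sketched as a quick simulation. This is not JMP but Python with NumPy/SciPy, and it uses a simple Bonferroni adjustment as a stand-in for Dunnett's (slightly sharper) adjustment; the simulation sizes are illustrative choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_groups, n_per_group, alpha = 2000, 10, 20, 0.05
k = n_groups - 1  # number of comparisons against the control

naive_fp = 0      # simulations with >= 1 unadjusted significant comparison
adjusted_fp = 0   # same, after Bonferroni adjustment (stand-in for Dunnett's)

for _ in range(n_sims):
    # Null hypothesis is true: every group has the same mean.
    data = rng.standard_normal((n_groups, n_per_group))
    control, treatments = data[0], data[1:]
    pvals = [stats.ttest_ind(t, control).pvalue for t in treatments]
    naive_fp += any(p < alpha for p in pvals)         # separate t-tests
    adjusted_fp += any(p < alpha / k for p in pvals)  # adjusted criterion

print(f"naive familywise error rate:    {naive_fp / n_sims:.3f}")
print(f"adjusted familywise error rate: {adjusted_fp / n_sims:.3f}")
```

The unadjusted approach flags at least one "significant" difference in roughly a third of the simulated null datasets, while the adjusted criterion stays near the nominal 5%.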

Learn it once, use it forever!


Thank you for the explanation! It definitely helps.