Hi, I'm a bit stuck: visually my graph suggests a significant difference between my control group and the group in red (when using standard error bars), but a one-way ANOVA and Dunnett's post hoc test report that the difference is not significant (although it is close).
When the graph is redrawn with error bars showing the standard deviation, the apparent difference disappears; however, I would prefer to use standard error.
Or is there a statistical explanation for this that I don't quite understand? If so, I would greatly appreciate the explanation.
(Sorry for the delay!)
Your example is a classic case of "test enough null hypotheses and you will find a significant comparison!"
The more comparisons you make within the same data (A vs B, A vs C, et cetera), the more likely you are to make a type I error. The error bars on the chart represent individual, separate comparisons, which do not protect you against this. The bar-chart comparison is naive and can be misleading: it isn't wrong, it just can't answer questions that involve the whole family of comparisons.
On the other hand, the Compare Means procedures, such as Dunnett's test, adjust the significance criterion for the number of comparisons to protect you. That is, if you want 95% confidence across all the comparisons, not just within each individual comparison, an adjustment is needed. The effect of this protection is that a larger difference is required before a comparison is declared significant. You should believe the Dunnett's results.
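To see why an adjustment is needed, here is a quick back-of-the-envelope calculation (my own illustration, not from your data) of the familywise error rate for k *independent* comparisons, each made at the usual 0.05 level:

```python
alpha = 0.05  # per-comparison significance level

# Probability of at least one false positive (the "familywise" type I
# error rate) among k independent comparisons, each tested at level alpha.
for k in (1, 3, 5, 10):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:2d} comparisons -> familywise error rate = {fwer:.3f}")
```

With ten comparisons the chance of at least one false positive is about 40%, not 5%. (In a Dunnett-style design the comparisons share the same control group and are therefore correlated, so this independence formula is only an approximation; that correlation is exactly what Dunnett's test accounts for, rather than a blunt Bonferroni correction.)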
Consider this gedanken experiment: generate random data under the null hypothesis for a large number of groups, then test with both the bar-chart approach (the same as Compare Means > Student t) and with Dunnett's test. The number of type I errors (false positives) is much higher with the chart approach, but about what you would expect with Dunnett's test.