I'm sure I'm missing something basic, but I can't seem to find a way to conduct a standard t-test against a single control when there are many other samples to compare with. All pairwise comparisons are possible, but that results in too many comparisons that are not of interest. Comparison with a control using Dunnett's test is another possibility, but my ultimate goal is to perform standard t-tests and then adjust the p-values through FDR. Any ideas?
It sounds like you might be looking for the Response Screening platform. In v13 it's under Analyze > Screening. It reports FDR for you too. Really convenient when you have several thousand responses and want to do comparisons between two levels of some input factor.
Thanks, but this isn't quite what I'm looking for. I am interested in screening, but not across many responses (like gene expression) - rather across many samples with only one response each. This is a typical use case for high throughput screening labs, and I'm somewhat surprised the functionality isn't obvious. But I'm also quite a poor observer, so I'm probably missing it :)
Based on the way you have your data set up, Dunnett's with control in Fit Y by X is your only option. Why would you not want to use Dunnett's with control? It seems like the most appropriate option for your data. How many comparisons are you making where you think an FDR would be needed?
Great, thanks for confirming that I wasn't missing something. FDR is the preferred method in high throughput screening labs. My understanding is that Dunnett's test controls the family-wise error rate, which is often too conservative.
Yes, it is true that Dunnett's controls the family-wise error rate and holds it below alpha when making the comparisons. But so do Tukey, Bonferroni, and others. It is likely more conservative than Benjamini-Hochberg FDR, but less conservative than Bonferroni. It also matters how many comparisons are being made. With a large number of comparisons, Benjamini-Hochberg FDR seems reasonable and not too aggressive from a screening point of view, but for a small number of comparisons it may be a little aggressive as well.
As for using the Student's t-test option under Compare Means, here are some tricks you can use to get what you want.
One trick that may help if you want to use Student's t-test and only look at the control comparisons: at the bottom of the report, look for the Ordered Differences report. You can right-click on it and choose Make into Data Table. There, you can sort by level and remove the rows that do not have the comparisons you want. Here is another trick: if you use the Value Ordering column property, you can put the control level at the top of the list. This helps with the Ordered Differences report, so that you can more easily get to the comparisons with the control.
In this new table, you can then create a new column with the FDR formula so you can correct the p-values.
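For reference, here is a minimal Python sketch of the Benjamini-Hochberg adjustment that such a formula column would compute (illustrative only - this is not JMP's built-in formula):

```python
# Benjamini-Hochberg FDR adjustment: scale the i-th smallest p-value by
# m/i, then enforce monotonicity from the largest p-value down.
def benjamini_hochberg(pvals):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):        # walk from the largest p down
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min       # keeps adjusted values monotone
    return adjusted

raw = [0.001, 0.008, 0.02, 0.04, 0.30]
print(benjamini_hochberg(raw))          # each value is >= its raw p-value
```

You would then flag the rows whose adjusted p-value falls below your chosen FDR cutoff.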
In the end, there will be comparisons that sit right on the edge of your decision line (cutoff), and those should probably be included anyway. It is screening, so having a few more doesn't hurt. The p-value and cutoff line are a nice guide to narrow down what to focus on, but shouldn't be used too rigidly for screening purposes. There have to be practical considerations in the decision-making process as well (how many of the comparisons can practically be followed up on).
Hope this helps.
Thanks Chris for your comprehensive, thoughtful reply. We'll be testing thousands of samples, so all pairwise comparisons would get pretty big. Instead, we can get what we need from the lsmeans table, since it contains the standard errors. I wrote a little JSL script that uses the information in the lsmeans table along with the error degrees of freedom to do the t-tests and apply the FDR. Packaging it as a plugin seems to work well. It would be nice if raw t-tests against a single comparator (control) were available as a standard function in the future.
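For anyone taking the same route, here is a rough Python sketch of the idea (hypothetical dict-based inputs standing in for the lsmeans table; it uses a normal approximation for the two-sided p-value, which is reasonable when the error degrees of freedom are large, as with thousands of samples - with small df you would substitute a proper t CDF):

```python
import math

def two_sided_p(t):
    """Two-sided p-value via the normal approximation to the t distribution.
    Adequate for large error degrees of freedom; for small df, use an exact
    t CDF instead."""
    return math.erfc(abs(t) / math.sqrt(2.0))

def control_t_tests(lsmeans, std_errs, control):
    """t statistic and approximate p-value for each group vs. the control,
    built from the least-squares means and their standard errors."""
    m0, se0 = lsmeans[control], std_errs[control]
    results = {}
    for group in lsmeans:
        if group == control:
            continue
        t = (lsmeans[group] - m0) / math.sqrt(std_errs[group] ** 2 + se0 ** 2)
        results[group] = (t, two_sided_p(t))
    return results

# Hypothetical example: two treatment groups against one control
lsmeans = {"Control": 10.0, "A": 11.2, "B": 9.1}
std_errs = {"Control": 0.30, "A": 0.35, "B": 0.32}
for group, (t, p) in control_t_tests(lsmeans, std_errs, "Control").items():
    print(group, round(t, 3), round(p, 4))
```

The resulting raw p-values can then be run through a Benjamini-Hochberg adjustment to get FDR-corrected values.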