This post is a couple of years old, but I too would like to know how the sums of squares are calculated. I refer specifically to the Effect Tests report.
I went back to a standard design-of-experiments textbook to learn: "Design and Analysis of Experiments" by Douglas C. Montgomery, 6th Edition. That book works through an example calculation, Example 5-1, a full factorial experiment with two factors and four replicates. I entered the data into Excel, Minitab, and JMP and compared the results to Montgomery's. Minitab and Excel's Analysis ToolPak match Montgomery; JMP does not.
The calculations from the text are attached as an image, along with images of the Minitab and Excel output and of the JMP output with and without the interaction term. The JMP project file is also attached, with the data table and the Fit Least Squares report.
Across all the analyses, the sums of squares match except for JMP's Sum of Squares for Material Type in the Effect Tests report. Its p-value is therefore also different, making the factor appear insignificant. How is this particular sum of squares calculated?
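If I had to guess, the Effect Tests report shows partial ("Type III") sums of squares, in which each term is adjusted for every other term in the model, including the interaction. Under that assumption the quantity would be (my notation, not JMP's):

$$ SS(\text{Material} \mid \text{Temp},\, \text{Material} \times \text{Temp}) \;=\; SSE_{\text{model without Material}} \;-\; SSE_{\text{full model}} $$

Is that the calculation JMP performs?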
In the JMP output, the Sum of Squares for Model in the Analysis of Variance table matches the sum of the model-term sums of squares in Montgomery, Minitab, and Excel, but it does not equal the sum of the term sums of squares in JMP's own Effect Tests report. Why is this?
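My working understanding is that a sequential ("Type I") decomposition always satisfies

$$ SS_{\text{Model}} \;=\; SS(A) \;+\; SS(B \mid A) \;+\; SS(AB \mid A, B) $$

(with A = Material and B = Temperature here), whereas partial sums of squares only add up to the model sum of squares when the model terms are mutually orthogonal. Is that what I am seeing?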
When the interaction is removed from the model, the Sum of Squares for Material Type matches Montgomery's. Why is that? Why does including the interaction term change the sum of squares for that particular main effect? And with the interaction excluded, the term sums of squares in the Effect Tests do add up to the Sum of Squares for Model in the ANOVA table. Why does the decomposition only add up when the interaction is out of the model?
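To make the question concrete, here is a minimal sketch of how I would compare the two kinds of sums of squares outside of JMP, in Python with statsmodels. The file name and column names are my own placeholders, not anything from the attached files:

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Example 5-1 data, assumed to sit in columns Material, Temp, Life
df = pd.read_csv("example_5_1.csv")

# Full factorial model with the interaction; Sum (effect) coding so the
# Type III tests are meaningful
model = ols("Life ~ C(Material, Sum) * C(Temp, Sum)", data=df).fit()

# Sequential (Type I) SS: each term adjusted only for terms entered before it
print(sm.stats.anova_lm(model, typ=1))

# Partial (Type III) SS: each term adjusted for every other term,
# including the interaction
print(sm.stats.anova_lm(model, typ=3))

For a balanced full factorial with both factors treated as nominal, I would expect the two tables to agree, which makes the discrepancy all the more puzzling to me.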
According to Montgomery's analysis, all three terms are significant: the two main effects and the interaction. According to JMP, only one main effect and the interaction are significant. I really need to understand why JMP reaches a different conclusion about the significance of Material Type.