
The "effect size" display feature added in recent version upgrades: its purpose and how to use it

With recent upgrades, JMP can now display effect sizes: the latest version, JMP 18, can output Cohen's d for t-tests, and JMP 17 and later can output η² (eta squared) and ω² (omega squared) for regression analysis.




The effect size is an index of the magnitude of the influence that an explanatory variable (factor) has on the target variable in a statistical model. The p-value indicates whether a factor's apparent effect on the target variable is unlikely to be due to chance; it is used to determine statistical significance, but it does not indicate how large the effect is.


JMP displays the following "Effect Summary" as part of the Fit Model report. In this report, the effects are arranged in ascending order of p-value (descending order of LogWorth), but this does not mean that the effects are larger in the order gender, drug, gender*drug; it merely reflects the degree of statistical significance.


[Figure: Effect Summary report]


In this blog, we explain what effect size means, how to display it in JMP, how it is calculated, and guidelines for interpreting the values.


Effect size for the t-test: Cohen's d

Cohen's d is used as the effect size for a t-test, that is, when comparing the means of two groups under the assumption of equal variances.


Explanation using simulated data

Create simulated data to compare the blood glucose levels (mg/dl) of two groups (A, B).


Group A: A random sample drawn from N(105, 15²)

Group B: A random sample drawn from N(110, 15²)


N(μ, σ²) denotes a normal distribution with mean μ and standard deviation σ.


The figures below show the result of a t-test performed on 50 samples drawn from each of groups A and B (left) and the result of a t-test performed on 250 samples drawn from each group (right).


[Figure: Pooled t-test reports for N = 50 (left) and N = 250 (right)]


Looking at the two-sided p-values in the two reports, p = 0.1128 when N = 50 and p = 0.0004 when N = 250. At a significance level of α = 0.05, the former is not significant and the latter is significant.


In this simulated data, the mean of A is 105 and the mean of B is 110, so the true difference in means is 5. With N = 50 this difference is not detected because the sample size is small, whereas with N = 250 the larger sample size allows a significant difference to be detected.


Because p-values tend to become small even for small differences when the sample size is large, we use Cohen's d to check the size of the effect.
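
As a supplementary check outside JMP, the simulation can be sketched in Python (a minimal sketch assuming NumPy and SciPy are available). The exact p-values will differ from the report above because the random draws differ, but the pattern is the same: the p-value shrinks as N grows even though the true difference between the groups is fixed at 5.

```python
# Minimal sketch (not JMP output): simulate the two groups and run a pooled
# (equal-variance) two-sample t-test at N = 50 and N = 250.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)  # fixed seed so the sketch is reproducible

for n in (50, 250):
    a = rng.normal(loc=105, scale=15, size=n)  # Group A ~ N(105, 15^2)
    b = rng.normal(loc=110, scale=15, size=n)  # Group B ~ N(110, 15^2)
    t, p = stats.ttest_ind(a, b, equal_var=True)  # pooled t-test
    print(f"N = {n}: two-sided p = {p:.4f}")
```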


Operations in JMP (JMP 18)

  1. Select [Analyze] > [Fit Y by X]. Specify a continuous variable for Y and a nominal (or ordinal) variable for X to display the "Oneway Analysis" report.
  2. From the red triangle menu at the upper left of the report, select [Means/Anova/Pooled t].

⇒ The "Cohen's d" report will be output in the lower left corner of the "Pooled t-test" report.


Looking at this value, it is almost the same (about 0.32) for N = 50 and for N = 250. The estimated magnitude of the effect is nearly identical in both cases, which shows that the effect size is estimated regardless of the sample size*.


*The larger N is, the more precisely the effect size is estimated.


Calculation

Cohen's d can be calculated using the following formula:


d = (Group B mean - Group A mean) ÷ Pooled standard deviation


The pooled standard deviation is the same as the root mean squared error (RMSE).


[Figure: Pooled t-test report for N = 250]


Example calculation: Using the values displayed in the report above for N = 250, you can calculate:


d = (109.010 - 104.391) ÷ 15.622 = 0.316
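
As a reference, the same calculation can be written as a short Python function (a minimal sketch assuming NumPy; this is not JMP code). Applied to the simulated glucose data above, it returns a value of roughly 0.3 for both N = 50 and N = 250.

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d: (mean of y - mean of x) divided by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)  # sample variances
    pooled_sd = np.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (np.mean(y) - np.mean(x)) / pooled_sd
```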


Guidelines for effect size

As a guideline, the following criteria are used for Cohen's d:


Less than 0.2: no effect; 0.2 to 0.5: small effect

0.5 to 0.8: medium effect; 0.8 or more: large effect


Note: For the grouping variable, JMP takes the difference as the later level minus the earlier level. In this example, the first level is A and the second level is B, so the comparison is the mean of B minus the mean of A.


Effect sizes for regression analysis: η², ω²

In JMP 17 and later, when you fit a regression model with the Fit Model platform using the Standard Least Squares personality, the "Effect Tests" report can display the effect sizes η² and ω².


The following example shows these effect sizes when pain is specified as [Y] and gender, drug, and gender*drug are specified as model effects for the JMP sample data set "Analgesics.jmp."


[Figure: Effect Tests report for Analgesics.jmp with the η² and ω² columns displayed]


η² and ω² are best known as effect sizes in analysis of variance, but they can also be applied to linear models whose explanatory variables are continuous.
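
For readers who want to reproduce this kind of analysis outside JMP, here is a hedged Python sketch using pandas and statsmodels. It assumes the data table has been exported to a file named "analgesics.csv" with columns gender, drug, and pain (the file name is hypothetical), and the values may differ slightly from JMP's "Effect Tests" report if the sums-of-squares type differs.

```python
# Minimal sketch (not JMP code): fit the two-factor model with an interaction
# and compute eta-squared for each effect from the ANOVA sums of squares.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("analgesics.csv")  # hypothetical export of Analgesics.jmp

model = ols("pain ~ C(gender) * C(drug)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)  # Type II sums of squares

ss_total = ((df["pain"] - df["pain"].mean()) ** 2).sum()  # corrected total SS
anova["eta_sq"] = anova["sum_sq"] / ss_total  # eta-squared per effect
print(anova[["sum_sq", "eta_sq"]])
```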


Operations in JMP (JMP 17 and later)

In the "Effect Tests" section of the Fit Model report, right-click the table of statistics and select Columns > η² or ω².



Calculation

η² and ω² are indicators that show how much of the total variation is explained by each factor. For example, the effect size of gender is calculated using the following formula.


η² for gender = sum of squares for gender ÷ total (corrected) sum of squares
     = 73.808 ÷ 338.573 = 0.218


This shows the proportion of the total variation (sum of squares) that is explained by gender: a value of 0.218 indicates that gender explains 21.8% of the variance of Y.


η² is easy to calculate and easy to interpret, and its bias-corrected counterpart is ω². The formula is omitted here, but ω² can also be calculated from the statistics displayed in "Effect Tests" and "Analysis of Variance."
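
For reference, one commonly used definition is ω² = (SS_effect − df_effect × MS_error) ÷ (SS_total + MS_error); whether this matches JMP's formula exactly is not stated here, so treat the sketch below as an assumption. The η² calculation reproduces the worked example above.

```python
def eta_squared(ss_effect, ss_total):
    """Proportion of the total (corrected) sum of squares explained by the effect."""
    return ss_effect / ss_total

def omega_squared(ss_effect, df_effect, ms_error, ss_total):
    """A commonly used bias-corrected effect size (assumed formula, see note above)."""
    return (ss_effect - df_effect * ms_error) / (ss_total + ms_error)

# Eta-squared for gender, using the sums of squares from the worked example above:
print(eta_squared(73.808, 338.573))  # ~ 0.218
```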


Guidelines for effect size

Proposed guidelines for η² and ω² are 0.01 for a small effect, 0.06 for a medium effect, and 0.14 for a large effect, but these too are only rough guidelines.


Summary

The "ASA Statement on Statistical Significance and P-Values" states that "Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold."


In recent years, academic journals have increasingly asked authors to report confidence intervals and effect sizes in addition to p-values.

It is worth remembering the effect sizes introduced here as one way of evaluating results that goes beyond hypothesis testing.



by Naohiro Masukawa (JMP Japan)

Naohiro Masukawa - JMP User Community
