
Learn JMP Events

Events designed to further your knowledge and exploration of JMP.

Basic Data Analysis and Modeling

Published on ‎11-19-2025 02:54 PM by Community Manager Community Manager | Updated on ‎03-17-2026 01:22 PM

JMP offers a variety of ways to interactively examine and model the relationship between an output variable (response) and one or more input variables (factors).  It also offers tools for explanatory modeling to determine which variables help explain a response.

See how to:

  • Answer practical questions about a process with basic analysis using a coffee packaging example.
    • What does my peel strength data tell me about product quality?
    • Do the two formulations differ significantly?
    • Is peel strength related to another factor?
  • Identify and interpret statistics for the distribution of data, including the 1-sample t-test, capability analysis, and tolerance intervals.
  • Compare formulations using Fit Y by X, the 2-sample t-test, unequal-variance tests, ANOVA, and Tukey-Kramer multiple comparisons.
  • Explore relationships between pairs of variables using process capability and simple linear regression.
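As a rough illustration of two of the computations above (a 1-sample t statistic and a Cpk capability index), here is a stdlib-only Python sketch. The peel-strength values, the target of 12.0, and the spec limits 11.0/13.0 are all hypothetical; JMP performs these analyses interactively in the Distribution platform.

```python
import math

def one_sample_t(data, mu0):
    """t statistic for a 1-sample t-test of H0: mean == mu0."""
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return (mean - mu0) / (s / math.sqrt(n))

def cpk(data, lsl, usl):
    """Process capability index Cpk from the sample mean and std dev."""
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return min((usl - mean) / (3 * s), (mean - lsl) / (3 * s))

# Hypothetical peel-strength measurements (made-up numbers)
peel = [11.8, 12.1, 12.4, 11.9, 12.2, 12.0, 12.3, 11.7]

t = one_sample_t(peel, mu0=12.0)   # is the mean on target at 12.0?
c = cpk(peel, lsl=11.0, usl=13.0)  # assumed spec limits
print(f"t = {t:.4f}, Cpk = {c:.4f}")
```

A small t statistic here suggests the sample mean is close to target; a Cpk above roughly 1.33 is a common (context-dependent) benchmark for a capable process.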

This webinar covers: Distribution and Fit Y by X.

Didn't they know how to bag the coffee so we could open it easily without spilling it?

At the live webinar, Marilyn responded to a question about setting preferences by demonstrating two different methods.

Resources



Start:
Fri, Mar 13, 2026 02:00 PM EDT
End:
Fri, Mar 13, 2026 03:00 PM EDT
Attachments
1 Comment
PatrickGiuliano
Staff

Hi Everyone! Thank you to those of you in attendance live today for our Mastering Session on Basic Statistical Analysis, taught by the excellent @MarilynWheatley and moderated by my colleagues @gail_massari and @Jeff_Upton.

Here is the article that I shared in today's Mastering Webinar during our Q&A Session: Moving to a World Beyond “p < 0.05”

It's essentially a call to move away from automatic, threshold-based statistical decision making and toward a more transparent, context-aware, uncertainty-embracing framework for statistical reasoning.

The idea is not to ban p-values outright, but to stop letting strict significance cut-offs stand in for real scientific judgment. Among other suggestions for thoughtful analysis, p-values are recommended, if used at all, as a descriptive measure rather than a gate-keeping mechanism.

  • For example, continuous p-values may still be reported, but:
    • as exact values (e.g., p = 0.08),
    • without labels like “significant” or “nonsignificant,”
    • and always alongside effect sizes and uncertainty.
  • And certainly, p-values should never dominate interpretation.
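The reporting style described above can be sketched in a few lines of Python. This is a hypothetical helper, not anything from the article or from JMP: it prints the exact p-value alongside an effect size and its uncertainty interval, with no "significant"/"nonsignificant" labels.

```python
def report_result(p, effect, ci_low, ci_high):
    """Format a test result descriptively: exact p-value, effect size,
    and an uncertainty interval -- no significance labels."""
    return (f"mean difference = {effect:.2f} "
            f"(95% CI {ci_low:.2f} to {ci_high:.2f}), p = {p:.3f}")

# Hypothetical comparison of two coffee-bag formulations
print(report_result(p=0.080, effect=0.42, ci_low=-0.05, ci_high=0.89))
```

The point of the design is that the effect size and interval come first, so the p-value is read as one descriptive summary among several rather than as a verdict.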

Other directions include the suggestion to embrace uncertainty in statistical decision-making rather than trying to eliminate it. 

Statistical inference is not equivalent to scientific inference; the limitations of each should be acknowledged through careful analysis and a thoughtful, risk-based framework.

To summarize the question from the audience today: "What is a good cut-off for the p-value?"

The p<0.05 threshold is a holdover with historical underpinnings. While this article argues that small differences in p-values (e.g., 0.049 vs. 0.051) do not justify categorical differences in interpretation, I recognize that there are many contexts (business and regulatory environments) where a cut-off decision is required for operational consistency and for careful, honest analysis based on committed acceptance criteria.

Above all, a rigorous way to establish the p-value threshold is to set a so-called "standard of evidence" (alpha) before running the study. Alpha specifies in advance how often we are willing to tolerate a false alarm, that is, a false rejection of the null hypothesis: a situation where we detect something that isn't actually a real signal.

The p-value indicates how likely we are, assuming the null hypothesis is true, to observe a result at least as extreme as the one we got. Our cut-off, alpha (historically 0.05 by convention), is what we compare it to. If p ≥ alpha, we do not reject the null; that is, we retain the null as a plausible explanation. If p < alpha, we reject the null; that is, we assert the alternative.
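The fixed-alpha decision rule above is simple enough to write down directly. This is a generic illustrative sketch (the p-values passed in are made up, not from the webinar data):

```python
def decide(p_value, alpha=0.05):
    """Fixed-alpha decision rule: reject H0 only when p < alpha."""
    if p_value < alpha:
        return "reject H0 (assert the alternative)"
    return "fail to reject H0 (retain it as plausible)"

print(decide(0.03))          # below alpha: reject
print(decide(0.07))          # at/above alpha: retain
print(decide(0.049, 0.01))   # a stricter standard of evidence, set in advance
```

The third call makes the article's point concrete: the same p-value leads to a different decision under a different pre-specified alpha, which is why the threshold must be chosen before the study, not after seeing the data.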

But critically, the p-value tells us how likely our result is under the null hypothesis, not how likely the null hypothesis is given our result.

 

As always we are here to assist you with questions you may have about the use of JMP Software. 

-JMP Technical Support (support@jmp.com)