Jun 9, 2014 6:34 AM | Last Modified: May 12, 2017 2:59 AM
This add-in fits one or more calibration curves using either a four-parameter logistic or a straight-line model, then uses the fitted curves to compute inverse predictions (estimated concentrations) for a set of unknowns. The standards data and the unknowns data must be stored in separate JMP tables. The inverse predictions and their confidence limits are copied back into the unknowns table as new columns.
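The add-in itself runs inside JMP, but the underlying math is easy to sketch. Below is a minimal, illustrative Python version of the four-parameter logistic (4PL) on the log-concentration scale and its closed-form inverse, which is what an inverse prediction computes. The parameter names (`lower`, `upper`, `slope`, `inflection`) are my own labels, not the add-in's.

```python
import math

def four_pl(x, lower, upper, slope, inflection):
    # 4PL response at log-concentration x:
    # y = lower + (upper - lower) / (1 + exp(-slope * (x - inflection)))
    return lower + (upper - lower) / (1.0 + math.exp(-slope * (x - inflection)))

def inverse_predict(y, lower, upper, slope, inflection):
    # Invert the 4PL to recover a log concentration from an observed response.
    # Only defined for responses strictly between the lower and upper asymptotes.
    return inflection - math.log((upper - lower) / (y - lower) - 1.0) / slope
```

Responses at or beyond the asymptotes have no finite inverse prediction, which is one reason confidence limits on the predictions matter near the ends of the curve.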
The input data sets must be in stacked format, that is, with one row per individual response. One column must contain the log concentrations (the X values) and another must contain the responses (the Y values). If you are fitting multiple curves, one or more columns must identify the “By” groups of data, and the same By columns must appear in both the standards and unknowns tables. You can optionally specify a weighting variable.
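To make the stacked layout concrete, here is a small hypothetical standards table expressed as Python rows (column names `Assay`, `LogConc`, and `Response` are invented for illustration), along with the kind of By-group split that one-curve-per-group fitting implies:

```python
# Hypothetical stacked-format standards data: one row per individual response.
standards = [
    {"Assay": "A", "LogConc": 0.0, "Response": 0.12},
    {"Assay": "A", "LogConc": 1.0, "Response": 0.35},
    {"Assay": "B", "LogConc": 0.0, "Response": 0.10},
    {"Assay": "B", "LogConc": 1.0, "Response": 0.31},
]

# Split the rows by the "By" column; each group gets its own curve fit.
groups = {}
for row in standards:
    groups.setdefault(row["Assay"], []).append((row["LogConc"], row["Response"]))
```

The unknowns table follows the same shape, except its concentration column is what the inverse predictions fill in.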
You can optionally perform tail filtering on the data. This approach successively removes points from the low and high ends (tails) of the concentration range, refits the model after each removal, and tracks the R-Squared statistic for each fit. The fit with the largest R-Squared is retained for the final analysis and inverse predictions.
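As a rough illustration of the tail-filtering idea (not the add-in's actual code), the sketch below trims up to `max_trim` points from each end of x-sorted data, refits a straight line each time, and keeps the fit with the largest R-Squared. All names here are my own.

```python
def fit_line(xs, ys):
    # Ordinary least-squares straight line; returns (intercept, slope).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

def r_squared(xs, ys, intercept, slope):
    my = sum(ys) / len(ys)
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

def tail_filter(xs, ys, max_trim=2):
    # Try trimming 0..max_trim points from each tail (data sorted by x),
    # refit each time, and keep the fit with the largest R-Squared.
    pts = sorted(zip(xs, ys))
    n = len(pts)
    best = None
    for lo in range(max_trim + 1):
        for hi in range(max_trim + 1):
            sub = pts[lo:n - hi]
            if len(sub) < 3:          # need enough points to fit
                continue
            sx = [p[0] for p in sub]
            sy = [p[1] for p in sub]
            intercept, slope = fit_line(sx, sy)
            r2 = r_squared(sx, sy, intercept, slope)
            if best is None or r2 > best[0]:
                best = (r2, lo, hi, intercept, slope)
    return best  # (r2, points trimmed low, points trimmed high, intercept, slope)
```

One design caveat worth noting: R-Squared tends to rise as points are removed, so in practice you would cap the trimming (as `max_trim` does here) to avoid discarding too much of the curve.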
To install the add-in, download Calibration Curves.jmpaddin, drag it onto an open JMP window, and click "Install". Two examples (each with two data sets) are also available for download, and some screenshots are shown below. For more details, select Add-Ins > Calibration Curves > Help after installing the add-in.