maryam_nourmand
Level III

logistic regression

Hello.
My question is: how can I calculate the probability that each data sample belongs to a specific class in logistic regression?

1 ACCEPTED SOLUTION

Accepted Solutions
Victor_G
Super User

Re: logistic regression

Hi @maryam_nourmand,

 

Welcome to the Community!

 

If you're using Logistic Regression, you can save probability formulas for your classes: Logistic Platform Options (jmp.com)
Simply click on the red triangle, then on "Save Probability Formula":

Victor_G_0-1714640898693.png

New columns will be added to your data table, with the probability of each class for every row, as well as the Most Likely Class (the default threshold is 0.5):

Victor_G_1-1714640998047.png

The example here uses the Titanic Passengers dataset, available under "Help" > "Sample Index" > "Exploratory Modeling" > Titanic Passengers.
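As an aside, outside of JMP the same per-class probabilities that "Save Probability Formula" produces can be reproduced from the fitted coefficients. Below is a minimal sketch using scikit-learn with made-up data (the column names and values are illustrative, not from the Titanic table):

```python
# Sketch: reproducing per-class probabilities from a fitted logistic
# regression, as JMP's "Save Probability Formula" does (illustrative data).
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[22, 7.25], [38, 71.28], [26, 7.92], [35, 53.10]])  # e.g. Age, Fare
y = np.array([0, 1, 1, 1])  # e.g. Survived (0/1)

model = LogisticRegression().fit(X, y)

probs = model.predict_proba(X)   # one probability column per class
most_likely = model.predict(X)   # class with the highest probability

# Manual check: P(class 1) = 1 / (1 + exp(-(b0 + b1*x1 + b2*x2)))
log_odds = model.intercept_ + X @ model.coef_.ravel()
manual_p1 = 1.0 / (1.0 + np.exp(-log_odds))
assert np.allclose(manual_p1, probs[:, 1])
```

The manual check mirrors the formula JMP stores in the saved probability columns: the linear predictor is the log-odds, and the logistic function maps it back to a probability.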

I hope this answers your question,

Victor GUILLER

"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)


8 REPLIES
maryam_nourmand
Level III

Re: logistic regression

Thanks for your response.
If I want to use robust logistic regression, what should I do?

Victor_G
Super User

Re: logistic regression

Hi @maryam_nourmand,

 

What do you mean by "robust" logistic regression?

In JMP Pro, you have several penalized estimation methods for generalized logistic regression: Lasso, Ridge, Elastic Net, ...:

Victor_G_0-1714743614013.png

 

Is that what you're looking for?
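For reference, the penalties listed above (Lasso, Ridge, Elastic Net) have direct analogues outside JMP. A hedged sketch with scikit-learn, using synthetic data purely for illustration:

```python
# Sketch: penalized logistic regression analogous to JMP Pro's
# Lasso / Ridge / Elastic Net options, via scikit-learn (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
# Only the first two predictors carry signal; the rest are noise.
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=100) > 0).astype(int)

lasso = LogisticRegression(penalty="l1", solver="saga", C=0.5,
                           max_iter=5000).fit(X, y)
ridge = LogisticRegression(penalty="l2", C=0.5, max_iter=5000).fit(X, y)
enet = LogisticRegression(penalty="elasticnet", solver="saga",
                          l1_ratio=0.5, C=0.5, max_iter=5000).fit(X, y)

# Lasso tends to shrink coefficients of uninformative predictors toward zero.
print(lasso.coef_.round(2))
```

Here `C` is the inverse of the regularization strength, so smaller `C` means a stronger penalty.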

Victor GUILLER

"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)
maryam_nourmand
Level III

Re: logistic regression

In the Nominal Logistic Fit platform, I couldn't find any option for the parameter estimation method.
By "robustness" I mean ensuring that the estimates of the logistic regression parameters are not influenced by outliers.

Victor_G
Super User

Re: logistic regression

I think the Response Screening (jmp.com) platform might be what you're looking for.

It reduces the sensitivity of tests to outliers.

Victor GUILLER

"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)
maryam_nourmand
Level III

Re: logistic regression

Thanks!

 

jthi
Super User

Re: logistic regression

Run your model (https://www.jmp.com/support/help/en/18.0/#page/jmp/logistic-regression-models.shtml#) and then save the probability formula to your table:

jthi_0-1714641071250.png

-Jarmo
dlehman1
Level V

Re: logistic regression

I would add two things to Victor's and jthi's responses.

First, when you save the probability formula, the formulas in those probability columns show how the probabilities are calculated from the log of the odds. This is the link between the response variable, which is actually the log of the odds rather than the discrete response, and the estimated probabilities.

Second, the "most likely" prediction is based on which probability is greater; in other words, it uses a 50% probability cutoff for making predictions. That is rarely the best cutoff in practice, particularly because the costs associated with false positive and false negative predictions are rarely symmetric. When you run the logistic regression, you can find "Decision Threshold" under the red triangle, which lets you explore different probability cutoffs and the resulting classifications.
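The effect of moving the cutoff away from 0.5 can be sketched outside JMP as well; this illustration uses scikit-learn on synthetic data (the 0.8 cutoff is an arbitrary example, not a recommendation):

```python
# Sketch: applying a custom probability cutoff instead of the default 0.5,
# mirroring what JMP's "Decision Threshold" lets you explore (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + rng.normal(scale=0.7, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
p_pos = model.predict_proba(X)[:, 1]   # probability of the positive class

default_pred = (p_pos >= 0.5).astype(int)   # what "Most Likely Class" uses
cautious_pred = (p_pos >= 0.8).astype(int)  # fewer positives flagged

# Raising the cutoff can only reduce the number of predicted positives,
# trading false positives for false negatives.
assert cautious_pred.sum() <= default_pred.sum()
```

Which cutoff is best depends on the relative costs of the two error types, which is exactly what the Decision Threshold report helps you weigh.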