
Sep 21, 2018 9:09 AM
(5266 views)

Hi JMP Community,

I have been studying the Nominal Logistic Fit to determine the value of a Baseline Biomarker to predict the outcome of a Clinical Treatment.

I thought that I understood the concept of the Confusion Matrix: it returns the numbers of True Positive, True Negative, False Positive, and False Negative for a given model for the Training data set and, if defined, the Validation set. However, when I compare the Confusion Matrix to the best outcome from the ROC Table (Maximum SENSITIVITY - (1 - SPECIFICITY) value), I struggle to reconcile the two.

For example, I have a model with ROC AUC = 0.654 (a rather weak association) for which the Confusion Matrix returns:

|            | Predicted YES | Predicted NO |
| ---------- | ------------- | ------------ |
| Actual YES | 1             | 82           |
| Actual NO  | 1             | 278          |

--> which is really bad (actually worse than expected for that ROC AUC value).

For the same model, the best combination of SENSITIVITY and SPECIFICITY in the ROC Table is:

| Prob   | 1-SPEC | SENS   | SENS - (1-SPEC) | True Pos | True Neg | False Pos | False Neg |
| ------ | ------ | ------ | --------------- | -------- | -------- | --------- | --------- |
| 0.2797 | 0.2437 | 0.5060 | 0.2623          | 42       | 211      | 68        | 41        |

--> which is quite bad, but more in line with the expected performance of a model with ROC AUC = 0.654.
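As a sanity check on the two tables above, the standard rates can be recomputed directly from the counts. This is a plain Python sketch (not JMP output); the counts are taken from the tables in the post:

```python
# Recompute sensitivity, specificity, PPV, and NPV from raw counts,
# to verify that the Confusion Matrix and the "best" ROC Table row
# describe the same model at two different probability cutoffs.

def rates(tp, tn, fp, fn):
    sens = tp / (tp + fn)   # sensitivity (true positive rate)
    spec = tn / (tn + fp)   # specificity (true negative rate)
    ppv  = tp / (tp + fp)   # positive predictive value
    npv  = tn / (tn + fn)   # negative predictive value
    return sens, spec, ppv, npv

# Confusion Matrix counts: TP=1, TN=278, FP=1, FN=82
print(rates(1, 278, 1, 82))      # sensitivity ~0.012, specificity ~0.996

# "Best" ROC Table row (cutoff Prob = 0.2797): TP=42, TN=211, FP=68, FN=41
sens, spec, ppv, npv = rates(42, 211, 68, 41)
print(sens - (1 - spec))         # Youden's J, matches the 0.2623 in the table
print(ppv, npv)                  # PPV and NPV at that cutoff
```

Running this shows the ROC Table row reproduces SENS = 0.5060 and 1-SPEC = 0.2437 exactly, so the two summaries differ only in which cutoff they apply.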

So, my questions are:

- What is the main difference between the Confusion Matrix and the "best" row of the ROC Table?
- Is it because the former uses a single probability cutoff and the latter uses the best combination of SENSITIVITY and SPECIFICITY?

- If I were to present these results, what would be the best option to present the Positive Predictive Value and the Negative Predictive Value?

Thank you for your help.

Sincerely,

TS

Thierry R. Sornasse

Accepted Solution


The confusion matrix and ROC are different. You understand the confusion matrix as described. In this example, you have practically no sensitivity (1/83) but quite good specificity (278/279).

The ROC curve simultaneously evaluates both sensitivity and specificity, so overall the model looks a bit better than chance (AUC = 0.654).

The confusion matrix is for one cutoff and the ROC curve uses each observation as a cutoff, including the observation that produces the largest separation.
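A minimal sketch of that last point, using made-up predicted probabilities (not the poster's data): the confusion matrix fixes one cutoff, while the ROC curve is built by treating each observed probability as a candidate cutoff and tabulating the counts at each one.

```python
# Build one confusion matrix per candidate cutoff, sweeping the cutoff
# across every observed predicted probability -- this is the construction
# behind an ROC curve. Data below are illustrative, not from the post.

probs  = [0.9, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2, 0.1]   # predicted P(YES)
actual = [1,   1,   0,   1,    0,   0,   1,   0]     # true class (1 = YES)

def confusion(cutoff):
    tp = sum(p >= cutoff and a == 1 for p, a in zip(probs, actual))
    fp = sum(p >= cutoff and a == 0 for p, a in zip(probs, actual))
    fn = sum(p <  cutoff and a == 1 for p, a in zip(probs, actual))
    tn = sum(p <  cutoff and a == 0 for p, a in zip(probs, actual))
    return tp, fp, fn, tn

def youden(cutoff):
    # Youden's J = sensitivity - (1 - specificity), the quantity the
    # "best" ROC Table row maximizes.
    tp, fp, fn, tn = confusion(cutoff)
    return tp / (tp + fn) - fp / (fp + tn)

best = max(sorted(set(probs)), key=youden)
print(best, confusion(best))   # cutoff with largest separation
```

Each distinct cutoff yields one (1-specificity, sensitivity) point on the ROC curve; the "best" row of the ROC Table is simply the point with the largest Youden's J, which need not be anywhere near the single cutoff the confusion matrix reports.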

Learn it once, use it forever!

