Misclassification rate, who is right?


Jan 8, 2016 7:50 PM
(6094 views)

Hi, JMP fans and experts!

When comparing logistic regression, partition, and neural network models, the misclassification rate is an important measure for deciding which model is better. My understanding is that the smaller the misclassification rate, the better the model. However, the following cases confused me: in the first case I was told that the larger one is better, but in the second case I was told that the smaller one is better.

JMP fans and experts, can someone walk me through this problem and offer more interpretation of these scenarios?

1. First case

2. The second case

Accepted Solution


Are you using some sort of 'expert' system to produce the narratives associated with each table? The second case is obviously JMP-related output from the JMP Pro Model Comparison platform. It's not clear to me what analytics package produced the picture of case #1.

Having asked this though, my guess is that, at least in case one, the definitive criterion being used to determine the 'best' model is the AICc statistic and not the misclassification rate. The AICc statistic is one of those 'lower is better' regression diagnostics...so since that model has the lowest AICc...the system is picking that model as the 'best'.
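To make the 'lower is better' point concrete, here is a minimal sketch (not JMP output; the model names, log-likelihoods, and sample size are made up for illustration) of how AICc is computed from a fit's log-likelihood and parameter count, and how a system like the one in case #1 might rank models by it:

```python
def aicc(log_likelihood, k, n):
    """Corrected Akaike Information Criterion: lower is better.
    k = number of fitted parameters, n = number of observations."""
    aic = 2 * k - 2 * log_likelihood
    return aic + (2 * k * (k + 1)) / (n - k - 1)  # small-sample correction

# Hypothetical fits: (name, log-likelihood, parameter count) on n = 100 rows
fits = [("Logistic", -48.2, 4), ("Partition", -45.9, 7), ("Neural", -44.1, 12)]
n = 100

scores = {name: aicc(ll, k, n) for name, ll, k in fits}
best = min(scores, key=scores.get)  # the model with the LOWEST AICc wins
```

Note how the penalty term grows with k: a more flexible model (here the hypothetical neural net) can have the best raw likelihood yet lose on AICc, which is exactly why an AICc-based ranking can disagree with a raw misclassification-rate ranking.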

A bit of a general regression/modeling diagnostics interpretation tutorial follows. I scream from the rooftops whenever I can to NEVER use ONE single regression diagnostic measure, plot, or estimate to determine a 'best' model. At the end of every modeling problem I encourage decision makers to always determine your 'best' model with, "What model helps me solve my practical problem best?" That's the model you should pick...not the model with the highest R**2, F-ratio, AICc, or most attractive looking residual plot...or any other mathematical/graphical construct.

Especially in classification type problems...there are often very different practical consequences for the different categories of misclassification...the misclassification rate is the sum of ALL misclassifications. Think cancer diagnostics as one example...much different consequences for false negatives or false positives. So I might pick a 'best' model that minimizes the PRACTICAL consequences of one of the misclassification categories...at the expense of a higher rate in the other.
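The cancer-diagnostics point can be sketched in a few lines (the confusion-matrix counts and the 20x false-negative cost are invented for illustration): the overall misclassification rate lumps false positives and false negatives together, while a cost-weighted version can rank the same two models the opposite way.

```python
def misclassification_rate(tp, fp, fn, tn):
    """Overall error rate: all misclassified cases over all cases."""
    return (fp + fn) / (tp + fp + fn + tn)

def expected_cost(tp, fp, fn, tn, cost_fp=1.0, cost_fn=1.0):
    """Average per-case cost; weight false negatives heavily when
    missing a true case (e.g. a cancer) is the worse outcome."""
    return (cost_fp * fp + cost_fn * fn) / (tp + fp + fn + tn)

# Hypothetical confusion matrices for two screening models on 1000 cases
model_a = dict(tp=80, fp=100, fn=20, tn=800)   # fewer total errors
model_b = dict(tp=95, fp=150, fn=5, tn=750)    # more errors, but fewer misses

rate_a = misclassification_rate(**model_a)     # 0.120
rate_b = misclassification_rate(**model_b)     # 0.155

# With false negatives costed at 20x a false positive, Model B wins:
cost_a = expected_cost(**model_a, cost_fp=1.0, cost_fn=20.0)
cost_b = expected_cost(**model_b, cost_fp=1.0, cost_fn=20.0)
```

Model A has the lower misclassification rate, yet once the practical consequences are weighted in, Model B is the better choice, which is the "minimize the PRACTICAL consequences" point above.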
