deu4455
Level I

Interpreting measures of accuracy for classification tree

I would like to know if the misclassification rate I am getting is acceptable. What ranges are considered acceptable? How do I relate this rate to the confusion matrix results? In essence, I am trying to evaluate and explain my results. In addition, how do I know if the lift curve results I am getting are also acceptable? Is there a range that can be deemed acceptable or good?

1 ACCEPTED SOLUTION

Accepted Solutions
KarenC
Super User (Alumni)

Re: Interpreting measures of accuracy for classification tree

Hello,


There is no one answer to your question. What you are asking is going to depend on your circumstances. For example, compare marketing misclassification rates to medical misclassification rates. What is acceptable in those two fields is going to be very different. In addition, in some applications the overall misclassification rate may be the key metric, whereas in other instances you may be more concerned with just one portion of your confusion matrix results. For example, consider medicine: the risk of misclassifying a negative subject is often different than the risk of misclassifying a positive subject (providing treatment that is not needed vs. not providing treatment that is needed). You have to take your statistics, put them in the context of your questions, and then interpret.
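To make the point concrete, here is a minimal sketch (with made-up counts, not from the original post) showing how an overall misclassification rate can look acceptable while one cell of the confusion matrix tells a different story:

```python
# Hypothetical 2x2 confusion matrix: rows = actual class, columns = predicted.
# All counts below are invented for illustration only.
confusion = {
    "actual_neg": {"pred_neg": 850, "pred_pos": 50},  # 50 false positives
    "actual_pos": {"pred_neg": 30,  "pred_pos": 70},  # 30 false negatives
}

tn = confusion["actual_neg"]["pred_neg"]
fp = confusion["actual_neg"]["pred_pos"]
fn = confusion["actual_pos"]["pred_neg"]
tp = confusion["actual_pos"]["pred_pos"]

total = tn + fp + fn + tp

# Overall misclassification rate: all errors over all cases.
overall_misclassification = (fp + fn) / total

# Per-class error rates: often the metric that matters in practice.
false_positive_rate = fp / (fp + tn)  # unneeded treatment
false_negative_rate = fn / (fn + tp)  # missed treatment

print(f"Overall misclassification rate: {overall_misclassification:.3f}")
print(f"False positive rate: {false_positive_rate:.3f}")
print(f"False negative rate: {false_negative_rate:.3f}")
```

With these numbers the overall rate is 8%, which might seem fine, yet 30% of the truly positive subjects are missed. That asymmetry is exactly why the acceptable rate depends on which errors are costly in your application.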
