I assume you have looked at the Model Comparison platform in JMP Pro. Unfortunately, I'm not familiar with the 'F1 score', so can you give further details or a link, please? Regarding your second question, are you thinking about predicted values and the uncertainties in these?
Hi! Thank you so much for your reply. The F1 score is a measure of test accuracy. Here's the Wikipedia link on the F1 score: https://en.wikipedia.org/wiki/F1_score
And for the second question, yes, I'm thinking about the predicted values.
I am a bit surprised by your reply. It would not take much effort (especially for a JMP expert) to web-search "F1 score", self-educate, and give the user a meaningful answer. Moreover, the inquirer provided additional information but never received meaningful support, which is also surprising.
Would you, or anyone else with the appropriate expertise, help the JMP community with this question? The F1 score is a key measure for logistic regression (or any model with a nominal response) when the response levels are imbalanced (significantly more true negatives than true positives). Greatly appreciated. As noted, Wikipedia describes these concepts in detail.
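To illustrate why F1 matters here, a quick sketch with made-up counts (purely illustrative, not from any JMP output): with a heavily imbalanced response, overall accuracy can look excellent even when the model recovers almost none of the positives, while F1 exposes the problem.

```python
# Hypothetical imbalanced test set: 95 actual negatives, 5 actual positives.
# The model correctly flags only 1 of the 5 positives.
tp, fp, fn, tn = 1, 0, 4, 95

accuracy = (tp + tn) / (tp + tn + fp + fn)          # 0.96 -- looks great
precision = tp / (tp + fp)                           # 1.00
recall = tp / (tp + fn)                              # 0.20 -- misses most positives
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean: 0.33

print(f"accuracy={accuracy:.2f}, F1={f1:.2f}")       # accuracy=0.96, F1=0.33
```

The harmonic mean punishes the low recall that the raw accuracy hides, which is exactly the behavior that makes F1 useful for imbalanced nominal responses.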
The Model Comparison link you provided is elaborate, but it seems to be missing the F1 score computation. I'm sure this can be done by hand, but if it is missing from JMP, it should be strongly considered as a feature. Thanks in advance.
A wise man once said that 'the only scarce commodity is time', and I think it's understood that the Community relies on crowdsourcing knowledge (from users and JMP staff alike), and could not operate effectively if it's reliant on a small group.
In addition to the Community, there is also the official support channel, which anyone who licenses a SAS product may use. One key difference is that there are service-level agreements for response times, and internal escalation processes to achieve satisfactory resolution. This is definitely the best mechanism for surfacing suspected bugs, but also for making new feature requests. In addition, 'How do I?' questions, or 'I'm used to software X but can't do the same thing in JMP' questions, will also be answered, but within this more rigorous and predictable framework.
So, for bugs and features, or if the Community appears unresponsive, I would certainly consider this channel too. And, stating the obvious perhaps, it doesn't have to be 'one or the other'. Many Community threads have led to support tracks.
This type of question can be directed to JMP Technical Support so we can get involved.
As for the question, JMP gives measures of Sensitivity, and 1-Specificity in the ROC Table when the "ROC Curve" menu item is selected, along with the True Positives, True Negatives, False Positives, and False Negatives. F1 is not specifically listed, but, from the description, it can be calculated with these measures.
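As a sketch of that hand calculation (the counts below are illustrative, not from any particular JMP report): once you have the True Positives, False Positives, and False Negatives from the ROC Table, F1 follows directly.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 is the harmonic mean of precision and recall,
    which simplifies to 2*TP / (2*TP + FP + FN)."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Counts as they might appear in one row of a JMP ROC Table (illustrative):
print(f1_score(tp=40, fp=10, fn=20))  # 80 / 110
```

Note that True Negatives do not enter the formula at all, which is why F1 is often preferred over accuracy when negatives dominate the data.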
The F1 score (and other popular measures of accuracy) has been deprecated in some corners of the machine learning community because of how it behaves under class imbalance. Please see the attached file, which suggests the utility of Receiver Operating Characteristic (ROC) curves and other measures.
JMP does include ROC (and Lift Curves) in some platforms.