
Model Screening Platform: "Remember Settings" Option under Decision Threshold Profiler

I really like the new Model Screening platform in JMP under Analyze > Predictive Modeling > Model Screening, as described in the blog post Model Screening in JMP Pro 16:

https://community.jmp.com/t5/JMP-Blog/Model-Screening-in-JMP-Pro-16/bc-p/384039#M4078

[Attached screenshot: Model Screening report]

 

One feature that would be especially useful to me, in the context of this example or otherwise, is an option similar to the Prediction Profiler's "Remember Settings" (available when fitting models in platforms such as Analyze > Fit Model), so that I could effectively toggle between different tradeoff scenarios for the false positive and false negative rates.

 

The option to "Profile" the level setting on the decision threshold probability in order the tune the model to focus in on minimizing false negatives at the potential expense of increasing false positives is the scenario I am particularly interested in. From a quality-risk perspective (protecting the end-user at the cost of higher mfg scrap), higher false alarms can be tolerated.

 

Here is a picture that illustrates what I am talking about: a plain-vanilla "Remember Settings" output from the Prediction Profiler in the Fit Model platform, after selecting "Remember Settings" at various combinations of the X settings for Y (using the same Diabetes.jmp sample data set):

 

[Attached screenshot: Prediction Profiler "Remembered Settings" table]

Thanks!

 

1 Comment
dale_lehman
Level VII

I would request a clarification of this wish: do you want the same threshold probability to apply to all models, or are you asking for some "optimal" threshold for each model to be remembered and compared? If the former, I think it is a bad idea. The distribution of predicted probabilities varies considerably across models, so different thresholds are likely to be chosen for different models. If the latter, I'm not sure how this would be implemented (but I like the idea). It can't be automatic, such as minimizing false negatives, since that would simply set the threshold probability to 0. So it would need to allow you to choose a different threshold for each model according to how you view the tradeoffs; perhaps this could be used in conjunction with the profit matrix (though I find the profit matrix very difficult to use).

 

Having a simple slider that changes the threshold for all models and shows the confusion matrix would not be very useful to me. I think it would provide an illusion of relative model performance: I don't really care whether one model calls for a threshold of 0.15 and another for 0.30; what I care about is how each model can be used to make predictions.
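One way to picture the second interpretation (a hypothetical sketch in plain Python, not JSL and not how JMP's Profit Matrix is actually implemented): pick each model's threshold by minimizing a user-chosen misclassification cost, with false negatives weighted much more heavily than false positives, and then compare the models at their own thresholds. The model names, cost weights, and data below are made up for illustration:

```python
# Hypothetical sketch: per-model threshold selection under asymmetric costs.
import numpy as np
from sklearn.metrics import confusion_matrix

COST_FN, COST_FP = 10.0, 1.0  # assumed cost weights; FN is the expensive error

def best_threshold(y_true, prob, grid=np.linspace(0.01, 0.99, 99)):
    """Return the threshold in `grid` with the lowest total misclassification cost."""
    costs = []
    for t in grid:
        tn, fp, fn, tp = confusion_matrix(y_true, (prob >= t).astype(int)).ravel()
        costs.append(COST_FN * fn + COST_FP * fp)
    i = int(np.argmin(costs))
    return grid[i], costs[i]

# Toy stand-in for the per-model validation probabilities that a model
# comparison might produce (names and numbers are invented).
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=500)
model_probs = {
    "Logistic Regression": np.clip(y_true * 0.6 + rng.normal(0.2, 0.2, 500), 0, 1),
    "Boosted Tree":        np.clip(y_true * 0.7 + rng.normal(0.15, 0.2, 500), 0, 1),
}

for name, prob in model_probs.items():
    t, cost = best_threshold(y_true, prob)
    print(f"{name:20s} chosen threshold={t:.2f}  total cost={cost:.0f}")
```

Under this reading, the "remembered" quantity per model would be its own cost-driven threshold and the resulting confusion matrix, rather than one global slider value shared by all models.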