
Model Screening: Graphs and new metrics to compare methods : AICc/BIC, computation time, CO2 emissions ... ?

Hi JMP Team and Community,

 

Here are my proposals, mainly focused on the "Model Screening" platform, though they could perhaps be extended to other platforms where relevant. I hope these ideas don't sound too crazy or "off".

 

  • What inspired this wish list request?
    • When comparing different methods through the "Model Screening" platform, the available metrics are R² and Mean/Std RASE. Even if these already guide and highlight some choices regarding the "best methods", it can be difficult to compare and decide when results are similar across several methods.
    • There seems to be no indication of the "complexity" of the methods used in this platform (for example through an information criterion like AICc/BIC, or a simple computation time metric such as "elapsed time", which is available in the red triangle menu but not shown by default). This would be interesting and important information if the model is going to be used in production, where model latency can be an important parameter.
    • Finally, as sustainability is one of the most important priorities today, it might be interesting to compare models not only on their "statistical" performance, but also on computation time/energy consumption through relevant metrics.

 

  • What are the improvements you would like to see?  
    • Would it be possible to add graphs showing R² across the different methods, and perhaps also a graph of Mean RASE (with Std RASE as intervals) to better assess the differences between methods?
    • Would it be possible to add other evaluation metrics, particularly a metric focused on model complexity (AICc/BIC and/or computation time by default)? This would help distinguish between similarly performing models by promoting the simplest ones (favoring parsimonious/simple models whenever possible); the standard AICc/BIC definitions are recalled after this list.
    • Since the computation time is easily recorded ("elapsed time" is available in the red triangle options), it would be great to highlight this information in the platform results as an indication of model sustainability/complexity. If possible, combining this information with details of the equipment/device/computer running JMP could enable indicators like those of the "Code Carbon" open-source initiative, which estimates CO2 emissions from computation time, GPU/CPU usage and energy consumption: Motivation — CodeCarbon 2.0.0 documentation (mlco2.github.io). A small Python sketch illustrating this idea follows the list below.
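
For reference, and to make the "complexity penalty" idea explicit, here are the standard textbook definitions (with k the number of estimated parameters, n the sample size and the maximized likelihood written as L-hat):

$$\mathrm{AICc} = -2\ln\hat{L} + 2k + \frac{2k(k+1)}{n-k-1}, \qquad \mathrm{BIC} = -2\ln\hat{L} + k\ln n$$

Both criteria trade goodness of fit against the number of parameters, which is exactly the parsimony argument above: between two models with similar RASE, the one with the lower AICc/BIC is the more economical one.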
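
To illustrate the kind of indicator I have in mind, here is a minimal sketch using the CodeCarbon Python package around a model fit. The dataset, model and parameter values are purely hypothetical placeholders, and JMP would of course need its own internal equivalent built on "elapsed time" and hardware information:

```python
# Minimal sketch (hypothetical example): estimating elapsed time and CO2 emissions
# of a single model fit with CodeCarbon. Requires `pip install codecarbon scikit-learn`.
import time

from codecarbon import EmissionsTracker
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic data standing in for whatever table is being screened
X, y = make_regression(n_samples=5000, n_features=20, random_state=1)

tracker = EmissionsTracker(project_name="model_screening_demo")
tracker.start()
start = time.perf_counter()

model = RandomForestRegressor(n_estimators=500, random_state=1).fit(X, y)

elapsed_s = time.perf_counter() - start
emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent for this fit

print(f"Elapsed time: {elapsed_s:.1f} s, estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

Repeating this around each method in a screening run would give a per-method "time + CO2" indication next to R² and RASE.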

 

  • Why is this idea important? 
    • As JMP is famous for its interactivity across graphics and its ease of use, with helpful graphs that help non-statisticians get the relevant information, it would be interesting to add some visualizations to the "Model Screening" platform, since the available graphs are "only" "Actual by Predicted" and "Profiler". There may be a lack of visualization supporting the decision-making process for choosing the method.
    • As JMP is oriented towards engineers and scientists, having a clear idea of a model's complexity might help in choosing the best compromise between precision, explainability and speed. Adding a complexity metric may also help JMP users consider the platform not only for exploration/discovery purposes, but also as a prototyping tool for production, where model latency is an important factor.
    • JMP already has a lot of people in its community who are dedicated to or passionate about the environment. "Data for Green" also highlights JMP's engagement towards sustainability and the environment. Providing information in this platform that highlights environmental impact and helps differentiate methods based on a relevant metric looks like a huge opportunity and a key differentiator compared to other software, as well as a way to raise awareness.

 

Looking forward to your feedback; if anyone is interested in commenting, feel free!

Happy new year and all the best, 

 

 

5 Comments
Status changed to: Acknowledged

Hi @Victor_G, thank you for your suggestion! We have captured your request and will take it under consideration.

mia_stephens
Staff
Status changed to: Investigating
 
mia_stephens
Staff

Hi @Victor_G ,

 

Are you looking for this sort of graph within Model Screening? Or, do you have other examples in mind?

 

[Attached screenshot: example model comparison graph from Model Screening]

Related to model complexity, here's the response from the developer: "AICc and BIC are not available for many of the methods, such as the tree-based methods, or Neural, although there is some literature on how to approximate it. The predictive methods tend to create complex models that are limited by early-stopping using the validation set, so there is some restraint on complexity."

 

Mia

 

Victor_G
Super User

Hi @mia_stephens,

 

Yes, this kind of graph could be interesting to visualize performance from the Model Screening platform.

Another option for the RASE graphic, in the case of K-fold cross-validation (here on the Diamonds dataset), would be to better assess the variability of the predictions by using StdDev RASE as intervals (a small illustrative sketch follows the screenshot below):

 

[Attached screenshot: Mean RASE with StdDev RASE intervals by method, Diamonds dataset]
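
For what it's worth, here is a small illustrative sketch (mock numbers, not the actual Diamonds results) of what a Mean RASE ± StdDev RASE comparison across methods could look like:

```python
# Illustrative sketch with hypothetical numbers: Mean RASE per method with
# StdDev RASE over the K folds drawn as horizontal error bars.
import matplotlib.pyplot as plt

methods   = ["Neural Boosted", "Bootstrap Forest", "Boosted Tree", "Fit Least Squares"]
mean_rase = [980, 1010, 995, 1150]   # hypothetical Mean RASE values
std_rase  = [40, 55, 35, 90]         # hypothetical StdDev RASE across folds

y_pos = range(len(methods))
fig, ax = plt.subplots(figsize=(6, 3))
ax.errorbar(mean_rase, y_pos, xerr=std_rase, fmt="o", capsize=4)
ax.set_yticks(list(y_pos))
ax.set_yticklabels(methods)
ax.set_xlabel("RASE (mean ± StdDev over K folds)")
ax.set_title("Validation RASE by method (illustrative)")
fig.tight_layout()
plt.show()
```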


OK regarding the AICc and BIC calculations, thanks for the response!
Best,

mia_stephens
Staff

Thank you for this suggestion @Victor_G, I've shared it with the developer for consideration.