Hi JMP Team and Community,
Here are my proposals, mainly focused on the "Model Screening" platform, though they could perhaps be extended to other platforms where relevant. I hope these ideas don't sound too crazy or "off".
What inspired this wish list request?
When comparing different methods through the "Model Screening" platform, the metrics used to compare them are R² and Mean/Std RASE. While these already guide and highlight some choices regarding the "best" methods, it can be difficult to compare and choose when results are similar across several methods.
There is no indication of the "complexity" of the methods used in this platform, for instance through an information criterion like AICc/BIC or a simple computation-time metric ("Elapsed Time" is available via the red-triangle menu but not shown by default). This could be important information if the model is going to be used in production, where model latency can be a key parameter.
Finally, as sustainability is one of today's most important priorities, it would be interesting to compare models not only on their "statistical" performance, but also on computation time and energy consumption through relevant metrics.
What are the improvements you would like to see?
Would it be possible to add graphs showing the evolution of R² across the different methods, and perhaps also a graph of Mean RASE (with Std RASE as intervals) to better assess the differences between methods? A rough sketch of the kind of chart I mean follows below.
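Here is a minimal sketch, outside JMP, of the kind of comparison chart I have in mind; the method names and RASE values are made up purely for illustration:

```python
import matplotlib.pyplot as plt

# Hypothetical Model Screening results: Mean RASE per method,
# with Std RASE drawn as error bars (all values are invented).
methods   = ["Linear", "Lasso", "Boosted Tree", "Neural"]
mean_rase = [0.42, 0.40, 0.35, 0.34]
std_rase  = [0.03, 0.04, 0.05, 0.06]

fig, ax = plt.subplots()
ax.errorbar(methods, mean_rase, yerr=std_rase, fmt="o", capsize=4)
ax.set_ylabel("Mean RASE (error bars = Std RASE)")
ax.set_title("Cross-validated RASE by method")
plt.show()
```

Overlapping intervals would make it immediately visible when two methods are not meaningfully different.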
Would it be possible to add other evaluation metrics, in particular one focused on model complexity (AICc/BIC and/or computation time, shown by default)? This would help break ties between similarly performing models by promoting the simplest ones (as should always be done: favor parsimonious/simple models whenever possible).
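For reference, AICc makes the complexity penalty explicit (k = number of estimated parameters, n = sample size):

AICc = -2 ln(L) + 2k + 2k(k + 1) / (n - k - 1)

so at comparable fit (likelihood L), the model with fewer parameters gets the lower, hence better, score.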
Since computation time is easily recorded ("Elapsed Time" is available in the red-triangle options), it would be great to highlight this information in the platform results as an indication of the model's sustainability/complexity. If possible, combining it with information about the equipment/device/computer running JMP could make it possible to create indicators such as those of the "Code Carbon" open-source initiative, which estimates CO2 emissions based on computation time, GPU/CPU, and energy consumption: Motivation — CodeCarbon 2.0.0 documentation (mlco2.github.io)
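To illustrate, here is a minimal sketch of how the open-source CodeCarbon Python package measures this today; fit_model() is a hypothetical placeholder (a dummy workload here) standing in for whatever computation would be tracked:

```python
# pip install codecarbon
from codecarbon import EmissionsTracker

def fit_model():
    # Hypothetical placeholder for a model fit; here just a dummy workload.
    return sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker()   # detects CPU/GPU hardware and regional energy mix
tracker.start()
fit_model()                    # the computation being measured
emissions_kg = tracker.stop()  # estimated emissions in kg CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

A similar wrapper around each method's fit inside "Model Screening" could, presumably, feed a per-method sustainability column.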
Why is this idea important?
As JMP is famous for its interactive graphics and its ease of use/access through helpful graphs that let non-statisticians get at the relevant information, it would be interesting to consider adding some visualizations to the "Model Screening" platform, since the only graphs available are "Actual by Predicted" and "Profiler". There is currently a lack of visual support for the decision of which method to choose.
As JMP is oriented towards engineers and scientists, having a clear idea of a model's complexity would help in choosing the best compromise between precision, explainability, and speed. Adding a complexity metric may also help JMP users see the platform not only as an exploration/discovery tool, but also as a prototyping tool for production, where model latency is an important factor.
JMP already has many people in its community who are dedicated to or passionate about the environment, and the "Data for Green" initiative also highlights JMP's engagement with sustainability. Providing information in this platform that highlights environmental impact and helps differentiate methods on a relevant metric looks like a huge opportunity and a key differentiator compared to other software, as well as a way to raise awareness.
Looking forward to your feedback, and if people are interested in commenting, feel free :)
Happy New Year and all the best,