
Improving Machine Learning Using Space Filling DOE to Tune Hyperparameters

Tuning hyperparameters is crucial for optimizing machine learning models, but the process can be computationally expensive and complex. Traditional grid search, random search, or even Bayesian optimization methods often miss critical areas of the hyperparameter space, leading to suboptimal models.

In this talk, we show a JMP add-in we have developed that uses space-filling DOE to approach the hyperparameter tuning challenge more efficiently. Space-filling DOE ensures that hyperparameter combinations are sampled more evenly across the entire parameter space, reducing the number of required evaluations while increasing the likelihood of finding optimal settings.
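
To give a feel for the idea outside of JMP, here is a minimal Python sketch of space-filling sampling for hyperparameter tuning. It is not the add-in's code; it uses a Latin hypercube design from SciPy and a scikit-learn random forest as the model being tuned, and the hyperparameter ranges, dataset, and number of design points are illustrative assumptions.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Three hyperparameters to tune: n_estimators, max_depth, min_samples_leaf
# (ranges chosen for illustration only).
lower = [50, 2, 1]
upper = [500, 20, 10]

# Latin hypercube design: 20 points spread evenly across the 3-D
# hyperparameter space, rather than a coarse grid or purely random draws.
sampler = qmc.LatinHypercube(d=3, seed=0)
design = qmc.scale(sampler.random(n=20), lower, upper)

results = []
for n_est, depth, leaf in design:
    model = RandomForestClassifier(
        n_estimators=int(n_est),
        max_depth=int(depth),
        min_samples_leaf=int(leaf),
        random_state=0,
    )
    score = cross_val_score(model, X, y, cv=5).mean()
    results.append((int(n_est), int(depth), int(leaf), score))

# Best design point found; in a full DOE workflow one would fit a surrogate
# model to these results and optimize over it rather than just pick the max.
best = max(results, key=lambda r: r[-1])
print("best (n_estimators, max_depth, min_samples_leaf):", best[:3])
print("cross-validated accuracy:", round(best[-1], 4))
```

The design choice here is the key point: each of the 20 evaluations lands in a different stratum of every hyperparameter's range, so the budget covers the space far more evenly than the same number of random or grid points.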

This talk also highlights the improved Python integration in JMP 18 and how data scientists can benefit from leveraging capabilities like DOE inside JMP. It combines advanced statistical techniques with practical, accessible tools to enhance model performance across diverse applications.