Hi @CrhistianDoE,
For DoE data, I would say neither of these options would be my first choice, for different reasons.
Neural Networks and Lasso (from Generalized Regression) are two very different analysis techniques:
- Neural Networks are flexible Machine Learning models able to approximate any function given enough layers and neurons (see the Universal Approximation Theorem: https://en.m.wikipedia.org/wiki/Universal_approximation_theorem). They require careful construction, with an appropriate number of layers and neurons and, more importantly, a good validation strategy. Because Neural Networks are so flexible, without a good validation strategy you can quickly overfit and start modeling the noise in your data. A test set (test points/new points in the design space of your DoE) is also highly recommended, to make sure the fitted network generalizes well to new, unseen data points. More explanations about Neural Networks and their cautious use can be found in these previous posts (and in the short sketch after the list):
- Difference between nodes in same layer of neural network?
- Model Screening: Neural network / K-fold crossvalidation
- help with model comparsion: DOE vs ANN boosted
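Outside of JMP, a minimal Python sketch of that validation discipline could look like this (scikit-learn on made-up data; the architecture, split sizes and settings are illustrative assumptions, not recommendations):

```python
# A minimal sketch of a cautious neural-network fit: a held-out test set,
# a modest architecture, and early stopping watching a validation split.
# Synthetic data, illustrative only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.uniform(size=(60, 3))                      # small, DoE-sized dataset
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2] + rng.normal(0, 0.1, 60)

# Hold out a test set to check generalization to new points.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=1)

# Keep the network modest and let early stopping monitor a validation split.
nn = MLPRegressor(hidden_layer_sizes=(8,), early_stopping=True,
                  validation_fraction=0.2, max_iter=5000, random_state=1)
nn.fit(X_train, y_train)

print("train R2:", r2_score(y_train, nn.predict(X_train)))
print("test  R2:", r2_score(y_test, nn.predict(X_test)))
```

A large gap between the train and test R² is exactly the overfitting warning sign described above.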
- Lasso, from Generalized Regression, is a penalization technique for regression models. It is very helpful for large datasets with possible high collinearity among variables/factors, and/or when you have more variables than observations, because it performs feature/variable selection: Lasso keeps the most important variables among correlated ones and shrinks the coefficients of the non-selected variables exactly to 0 (which is why it is so useful for feature selection; see the sketch below).
See more here: Overview of the Generalized Regression Personality (jmp.com)
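To make that selection behaviour concrete, here is a tiny Python sketch (scikit-learn, synthetic correlated data; all values are illustrative assumptions):

```python
# A minimal sketch of Lasso's variable selection on correlated predictors:
# coefficients of non-selected variables are shrunk exactly to 0.
# Synthetic data, illustrative only.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(0, 0.05, 100)      # x2 is almost a copy of x1
x3 = rng.normal(size=100)
X = np.column_stack([x1, x2, x3])
y = 2 * x1 + 0.5 * x3 + rng.normal(0, 0.1, 100)

lasso = Lasso(alpha=0.1).fit(X, y)
print(lasso.coef_)   # typically one of the x1/x2 pair is driven exactly to 0
```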
In your case with DoE data, you have a small but carefully planned dataset: either with an assumed model, if you used the Custom Design platform or a traditional mixture design, or with a homogeneous distribution of points in your design space, if you created a Space-Filling design.
- With a Custom/traditional mixture design, since a model is already assumed at design creation, I would not start with Lasso, SVEM or any complex technique. Using a Neural Network wouldn't be appropriate either: the placement of points is driven by the assumed model and may not provide sufficiently homogeneous coverage of your design space, so there might not be enough levels per factor to feed a complex, high-flexibility interpolation model like a Neural Network.
Start with the assumed model and its terms, fitted with a "traditional" regression (Standard Least Squares), and visualize/analyze the model fit.
What does the "Actual vs. Predicted" plot look like? How do the residuals behave? etc.
If you see that your model may not be complex enough to capture the relationships between your 3 mixture variables, try augmenting your design to include higher-order terms. Keep in mind that mixture designs put the emphasis on predictivity, so it is better to refine the model on criteria other than p-values, for example Maximum Likelihood, RMSE, or information criteria like AICc or BIC (see the sketch after the linked posts below).
Some posts about this topic:
- Solved: Re: Analysis of a Mixture DOE with stepwise regression - JMP User Community
- Solved: Scheffe Cubic Mixture and p-value - JMP User Community
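As a rough outside-of-JMP illustration of that workflow, here is a sketch that fits an assumed Scheffe quadratic model with least squares and compares candidate models on information criteria (statsmodels on fabricated 3-component mixture data; note that statsmodels reports AIC/BIC, and AICc would need the small-sample correction shown in the next sketch):

```python
# A sketch of fitting an assumed Scheffe mixture model with Standard Least
# Squares and comparing candidate models on information criteria.
# Fabricated 3-component mixture data, illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.dirichlet(np.ones(3), size=20)            # mixture points: rows sum to 1
y = (1.0 * x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 2]
     + 3.0 * x[:, 0] * x[:, 1] + rng.normal(0, 0.05, 20))

# Scheffe models have no intercept: the linear blending terms take its place.
X_linear = x                                       # x1, x2, x3
X_quad = np.column_stack([x,
                          x[:, 0] * x[:, 1],       # x1*x2
                          x[:, 0] * x[:, 2],       # x1*x3
                          x[:, 1] * x[:, 2]])      # x2*x3

fit_lin = sm.OLS(y, X_linear).fit()
fit_quad = sm.OLS(y, X_quad).fit()

# Compare on information criteria rather than term-by-term p-values.
print("linear    AIC/BIC:", fit_lin.aic, fit_lin.bic)
print("quadratic AIC/BIC:", fit_quad.aic, fit_quad.bic)
```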
If you want to use the Generalized Regression platform, there may be other, better-suited estimation methods, like "Pruned Forward Selection" (with an AICc/BIC validation method), "Two-Stage Forward Selection" or possibly "Best Subset" (only if you have a limited number of factors, as it will test many models combining main effects and higher-order terms).
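JMP implements these estimation methods for you; purely to show the idea behind "Best Subset" (and why it explodes with many factors), here is a brute-force sketch scored on AICc (synthetic data, everything illustrative):

```python
# A brute-force sketch of best-subset selection scored on AICc.
# Feasible only for a handful of candidate terms, which is exactly
# the caveat mentioned above. Synthetic data, illustrative only.
from itertools import combinations
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
X = rng.uniform(size=(30, 4))
y = 1.5 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(0, 0.1, 30)
terms = list(range(X.shape[1]))

def aicc(fit, n, k):
    # Small-sample correction on top of statsmodels' AIC.
    return fit.aic + (2 * k * (k + 1)) / (n - k - 1)

best = None
for r in range(1, len(terms) + 1):
    for subset in combinations(terms, r):      # every combination of terms
        exog = sm.add_constant(X[:, subset])
        fit = sm.OLS(y, exog).fit()
        score = aicc(fit, len(y), exog.shape[1])
        if best is None or score < best[0]:
            best = (score, subset)

print("best subset (column indices):", best[1], "AICc:", round(best[0], 2))
```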
- With Space-Filling designs, there are many Machine Learning methods you can (and should) try before considering Neural Networks. In particular, Random Forests and Support Vector Machines are interesting ML algorithms:
They have good predictive performance on small datasets, are less prone to overfitting (being simpler models), are quite robust to outliers, generalize well, handle non-linear relationships, require little hyperparameter tuning (Random Forests perform very well with default settings, and you can easily fine-tune an SVM with the built-in Tuning Design option), are easier to interpret, etc.
Random Forest (called Bootstrap Forest in JMP) provides a good benchmark for model comparison and is often a sensible default to try; a small benchmark sketch follows the links below. Some personal LinkedIn posts on these two algorithms for DoE datasets:
- Random Forests : https://www.linkedin.com/posts/victorguiller_doe-machinelearning-randomforests-activity-712755779981...
- Support Vector Machine : https://www.linkedin.com/posts/victorguiller_doe-machinelearning-statsjoke-activity-7117395916130537...
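For a feel of that benchmark outside JMP, a minimal cross-validated comparison could look like this (scikit-learn; the synthetic data stands in for a space-filling design, and the small grid search stands in for JMP's Tuning Design):

```python
# A minimal sketch benchmarking Random Forest (default settings) against
# an SVM with a small tuning grid, via cross-validation.
# Synthetic data standing in for a space-filling design; illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV, cross_val_score

rng = np.random.default_rng(4)
X = rng.uniform(size=(50, 3))                      # space-filling-style points
y = np.sin(4 * X[:, 0]) + 2 * X[:, 1] ** 2 + rng.normal(0, 0.1, 50)

# Random Forest: often a solid benchmark with default hyperparameters.
rf = RandomForestRegressor(random_state=4)
print("RF  CV R2:", cross_val_score(rf, X, y, cv=5).mean())

# SVM: scaling matters, and a small grid search plays the role of
# JMP's Tuning Design option.
svm = make_pipeline(StandardScaler(),
                    GridSearchCV(SVR(),
                                 {"C": [1, 10, 100],
                                  "gamma": ["scale", 0.5, 1.0]},
                                 cv=5))
print("SVM CV R2:", cross_val_score(svm, X, y, cv=5).mean())
```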
I hope this (long) answer will help you,
Victor GUILLER
L'Oréal Data & Analytics
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)