Updates
November 27, 2023 - Version 1.1 - improvements and bug fixes, details are in a post below
November 28, 2023 - Version 1.11 - minor bug fix
December 15, 2023 - Version 1.2 - improvements and bug fixes, details are in a post below
Purpose
The Neural Network Tuning add-in provides an alternative interface to the Neural Network platform, offering an easy way to generate numerous neural networks and identify the best set of parameters for the hidden layer structure, boosting, and fitting options.
How it works
The user specifies the number of Neural Network models to run and sets the ranges for the tuning parameters and fitting options. A Fast Flexible Space Filling DOE is generated with the tuning parameters as Factors, and the Training, Validation, and Testing R2 values as Responses.
Each trial in the DOE is passed to the Neural Network platform and the R2 values are recorded in an output DOE table. After all models are run, the table is sorted by Validation R2 and a series of graphs are displayed to help the user identify the most important effects for maximizing the Validation R2 values.
Additional models can be generated by adjusting the tuning parameters and re-running the DOE. The new runs will be appended to the original table.
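The tuning loop described above can be sketched conceptually in Python. This is an illustration only, not the add-in's actual JSL: `evaluate_model` is a hypothetical stand-in for fitting one neural network in JMP Pro and returning its R2 values, the parameter names are assumed for the example, and plain random sampling stands in for the Fast Flexible Space Filling design.

```python
import random

# Hypothetical stand-in for fitting one neural network in JMP Pro
# and returning its (Training, Validation, Testing) R2 values.
def evaluate_model(params):
    scorer = random.Random(str(sorted(params.items())))  # deterministic placeholder
    base = scorer.uniform(0.5, 0.9)
    return {"R2 Training": base + 0.05,
            "R2 Validation": base,
            "R2 Testing": base - 0.05}

# Ranges for the tuning parameters (names assumed for illustration).
param_ranges = {
    "Hidden Nodes Layer 1": (1, 20),
    "Hidden Nodes Layer 2": (0, 10),
    "Boosting Models": (0, 50),
    "Learning Rate": (0.01, 0.5),
}

def generate_runs(n, ranges, rng):
    """Sample n parameter combinations (a crude stand-in for the
    Fast Flexible Space Filling design the add-in actually uses)."""
    runs = []
    for _ in range(n):
        params = {}
        for name, (lo, hi) in ranges.items():
            if isinstance(lo, int) and isinstance(hi, int):
                params[name] = rng.randint(lo, hi)
            else:
                params[name] = rng.uniform(lo, hi)
        runs.append(params)
    return runs

def run_doe(runs, table=None):
    """Evaluate each trial, append the results to the output table,
    and keep the table sorted by Validation R2 (descending)."""
    table = list(table) if table else []
    for params in runs:
        row = dict(params)
        row.update(evaluate_model(params))
        table.append(row)
    table.sort(key=lambda r: r["R2 Validation"], reverse=True)
    return table

rng = random.Random(1)
table = run_doe(generate_runs(10, param_ranges, rng))
# Adjusting the ranges and re-running appends new trials to the same table.
table = run_doe(generate_runs(5, param_ranges, rng), table)
print(len(table))
```

The key design point mirrored here is that re-running never discards earlier trials: new runs are appended to the existing table and the combined results are re-sorted, so the best Validation R2 always rises to the top.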
The data table can be saved and reloaded into the platform, allowing you to save progress and resume the analysis later. This is useful for tuning Neural Networks that may take considerable computation time.
Video Guide
Notes
The add-in has been tested on Windows and Mac using JMP Pro Versions 17.1 and 17.2.
Known Issues
- The add-in does not currently handle a Response (Y) column that is virtually linked to the factor data table. As a workaround, unhide the linked column and then cut and paste its data into a new column.