Community Manager
A Brief Demo of Neural Nets in JMP

Often the relationship between explanatory variables and responses is complicated. In these cases, neural networks (neural nets) are useful: they let us predict responses from a flexible network of functions of the input variables. Neural nets can efficiently and flexibly model many different response surfaces without requiring the functional form of the response surface to be known.

In a recent Webcast, Valerie Hyde, JMP Systems Engineer, gave a brief demo using JMP neural nets to find predictors of auto insurance claims. She fit the model to a training set containing 70% of the data and validated it against the remaining 30% (the holdback set).
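A 70/30 training/holdback split like the one above can be sketched with a few lines of Python (a hypothetical illustration; JMP handles this internally, and the row count here is made up):

```python
import numpy as np

rng = np.random.default_rng(42)           # fixed seed so the split is repeatable

n = 1000                                  # hypothetical number of insurance records
idx = rng.permutation(n)                  # shuffle the row indices
cut = int(0.7 * n)                        # 70% of rows go to the training set
train_idx, holdback_idx = idx[:cut], idx[cut:]

print(len(train_idx), len(holdback_idx))  # 700 300
```

The model is then fit only on the training rows, and its quality is judged on the holdback rows it never saw.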

In the demo, she built a neural net with five explanatory variables (input nodes), one response (output node), and three intermediate (hidden) nodes connecting the inputs to the output.
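Structurally, such a 5-3-1 network is just a composition of simple functions: each hidden node applies an S-shaped (tanh) transform to a weighted sum of the inputs, and the output is a weighted sum of the hidden nodes. A minimal sketch in Python rather than JMP, with made-up weights (JMP estimates the weights from the training data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative weights only; in practice these are estimated by the fitting routine.
W1 = rng.normal(size=(5, 3))   # five inputs -> three hidden nodes
b1 = np.zeros(3)
W2 = rng.normal(size=(3, 1))   # three hidden nodes -> one output node
b2 = np.zeros(1)

def predict(X):
    """5-3-1 network: tanh hidden layer, linear output for a continuous response."""
    H = np.tanh(X @ W1 + b1)   # hidden-node activations, each in (-1, 1)
    return H @ W2 + b2         # predicted response, one value per observation

X = rng.normal(size=(4, 5))    # four hypothetical observations, five inputs each
print(predict(X).shape)        # (4, 1)
```

The tanh hidden nodes are what give the network its flexibility: with enough of them, this composition can approximate a wide range of response surfaces.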

She showed how to examine the R² on the training set and the cross-validated R² on the holdback set, how to decide whether it is beneficial to add more nodes and refit, and how to evaluate the result of the refitting.
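The comparison behind that decision can be made concrete. R² is 1 minus the ratio of the model's squared error to the total variance of the response; computing it separately on training and holdback data shows how much the fit degrades on unseen rows (the numbers below are toy values, not from the demo):

```python
import numpy as np

def r_squared(y, y_hat):
    """1 - SSE/SST: the fraction of response variance explained by the model."""
    sse = np.sum((y - y_hat) ** 2)           # sum of squared prediction errors
    sst = np.sum((y - np.mean(y)) ** 2)      # total sum of squares
    return 1.0 - sse / sst

# Toy numbers illustrating a model that fits training data better than holdback data.
y_train     = np.array([1.0, 2.0, 3.0, 4.0])
y_train_hat = np.array([1.1, 1.9, 3.2, 3.8])
y_hold      = np.array([2.0, 3.0, 4.0, 5.0])
y_hold_hat  = np.array([2.5, 2.6, 4.8, 4.4])

print(round(r_squared(y_train, y_train_hat), 3))  # 0.98
print(round(r_squared(y_hold, y_hold_hat), 3))    # 0.718
```

A large gap between the training R² and the cross-validated R², as in this toy case, suggests overfitting; adding more nodes is only worthwhile if the holdback R² improves after refitting.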

Want to learn more about neural nets in JMP, including changing the maximum iterations, overfit penalty and converge criterion? See Chapter 28 in the Statistics and Graphics Guide, accessible from JMP Help > Books > JMP Stat and Graph Guide.

Also, be sure to check our live Webcast schedule in September, when we post our 4Q Webcasts. Valerie will do a one-hour presentation focused on neural nets.