At the Oct. 13, 2011, Mastering JMP webcast, Systems Engineer Aashish Majethia demonstrated how to use JMP and JMP Pro to model high-dimensional data. These techniques are useful when you have many predictor variables (columns) but few observations (rows), a situation that makes modeling with ordinary linear regression difficult.

Aashish demonstrated some useful techniques, now available as two on-demand demos.

He uses JMP for Principal Component Analysis (PCA), Partial Least Squares (PLS) and stepwise regression, and he uses JMP Pro for boosted trees, bootstrap forests and neural nets. PCA helps explain variability by creating a set of uncorrelated variables from the original, possibly correlated variables. PLS generates latent factors from both the predictors and the responses. Bootstrap forests and boosted trees let you avoid making predictions that are overly optimistic or skewed by the specific data used to fit the model. Neural nets allow you to determine which data to use for cross-validation, fit one- and two-layer networks with your choice of activation functions and automate the handling of missing values.
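If you want to experiment with the PCA idea outside JMP, here is a minimal sketch using scikit-learn (an assumption on our part; the webcast itself uses JMP, and the random "wide" data below is purely illustrative). It shows PCA turning many correlated columns into a handful of uncorrelated component scores:

```python
# Illustrative sketch only -- not Aashish's JMP demo.
# PCA on a "wide" dataset: many columns, few rows.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 50))      # 10 observations, 50 predictors

pca = PCA(n_components=5)
scores = pca.fit_transform(X)      # uncorrelated component scores

print(scores.shape)                            # (10, 5)
print(pca.explained_variance_ratio_.sum())     # share of variance captured
```

The component scores are pairwise uncorrelated by construction, which is what makes them usable as regression inputs where the original 50 columns would not be.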

The data tables Aashish uses in the demos are available from within JMP as Sample Data (Help > Sample Data).

We close 2011 with two more Thursday live webcasts. On Nov. 10, Mary Loveless will demonstrate how to analyze sensory data to develop consumer products, and on Nov. 17, Jeff Perkinson will show how to build custom maps that help visualize analyses. Interested? Simply register and then join us.
