In the first episode, Jami covers the JMP data structure and how to import data. In the second episode, she demonstrates how to use column and row functions, including recoding columns, modifying column properties, selecting matching cells, adding value and column labels, compressing selected columns, and stacking, splitting, joining and sorting tables. In the third episode, Jami covers table functions and Tabulate, and demonstrates how to find and handle missing data, subset data, stack data, split data, sort tables by row or column, join tables, and use JMP Tabulate to group and summarize data.
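The table reshaping operations Jami demonstrates (stack, split, join, sort) have close analogues in open-source tools. As a rough sketch of the same ideas using pandas (the data and column names here are invented for illustration, not from the webcast):

```python
import pandas as pd

# Hypothetical "wide" table: one row per subject, one column per week.
wide = pd.DataFrame({
    "subject": ["A", "B"],
    "week1": [10, 12],
    "week2": [11, 14],
})

# Stack: wide -> tall, one row per (subject, week) pair.
tall = wide.melt(id_vars="subject", var_name="week", value_name="score")

# Split: tall -> wide again, pivoting the week values back into columns.
back = tall.pivot(index="subject", columns="week", values="score").reset_index()

# Join: merge a second table on the shared key column.
extra = pd.DataFrame({"subject": ["A", "B"], "group": ["control", "treated"]})
joined = tall.merge(extra, on="subject")

# Sort: order rows by one or more columns.
sorted_tall = tall.sort_values(["subject", "week"])
```

In JMP these are interactive menu commands under the Tables menu; the sketch above just shows the same row/column transformations in code form.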
Scott Wise’s videos on Exploratory Data Analysis and Dynamic Graphics were recorded during the Jan. 27 Mastering JMP live webcast. Wise uses a supply chain case study to show how one might use JMP to understand and correct late shipments impacting profitability.
In the first episode, Scott describes the case study and business problem, and then shows how to use JMP to examine relationships, patterns and outliers to gain critical insight into the problem. In the second episode, he shows how to use Distribution, Data Filter, Recursive Partitioning, Contingency Analysis, One-Way Analysis and more to uncover key variables impacting late shipments. In the third episode, he models the data to predict and provide information for correcting late shipments. He uses Fit Model, Parameter Estimates, Prediction Profiler and more.
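The modeling step Scott shows with Fit Model and Parameter Estimates amounts to fitting a regression and reading off which factors shift the outcome. A minimal open-source analogue, using scikit-learn with entirely synthetic supply-chain data (the feature names and the formula generating "late" are assumptions for illustration, not the case-study data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic features: shipping distance and order backlog,
# with a made-up rule producing a late/on-time outcome.
n = 200
distance = rng.uniform(100, 2000, n)
backlog = rng.integers(0, 50, n)
late = (0.002 * distance + 0.05 * backlog + rng.normal(0, 0.5, n)) > 2.5

# Fit a logistic regression of late-shipment status on the two factors.
X = np.column_stack([distance, backlog])
model = LogisticRegression().fit(X, late)

# The coefficients play the role of JMP's Parameter Estimates report:
# a positive sign means the factor raises the odds of a late shipment.
coefs = model.coef_
```

JMP's Prediction Profiler then lets you drag factor values and watch the predicted outcome respond; in code you would do the same by calling `model.predict_proba` on hypothetical factor settings.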
In the first episode, Sam defines three goals of data mining: discovering patterns, uncovering relationships and building predictive models. He uses paper print banding data to uncover important factors that drive quality, and oil field recovery data to identify important factors impacting the profitability of oil recovery efforts. In the second episode, Sam answers questions about the analysis techniques he used when mining the print banding data. In the third episode, he shows how to build predictive models using JMP regression, confusion matrices and decision trees. He uses JMP Pro bootstrap (random) forests and boosted trees to build predictive models and covers the use of training, validation and test data. He closes by briefly describing bootstrap aggregation (bagging) and boosting.
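The workflow Sam describes, splitting data into training, validation and test sets, fitting a bootstrap forest and a boosted tree, and checking results with a confusion matrix, can be sketched with scikit-learn. The data here is synthetic (a stand-in for the print-banding and oil-field examples), and the split proportions are assumptions, not the webcast's settings:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic classification data standing in for the webcast's examples.
X, y = make_classification(n_samples=600, n_features=8, random_state=1)

# Three-way split: train to fit, validation to compare models, test for the
# final honest check (here 60% / 20% / 20%).
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=1)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=1)

# Bagging: a bootstrap (random) forest averages many trees, each fit on a
# bootstrap resample of the training data.
forest = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_train, y_train)

# Boosting: trees are fit sequentially, each one correcting the errors
# of the ensemble built so far.
boosted = GradientBoostingClassifier(random_state=1).fit(X_train, y_train)

# Compare on validation data, then summarize the chosen model's test-set
# performance with a confusion matrix.
val_scores = {
    "forest": forest.score(X_val, y_val),
    "boosted": boosted.score(X_val, y_val),
}
cm = confusion_matrix(y_test, forest.predict(X_test))
```

The validation set picks between the two ensembles; the held-out test set is only touched once, which is the discipline the training/validation/test split exists to enforce.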