
Help choosing JMP Analysis for Multiple Factors

jb

Community Trekker

Joined:

Sep 23, 2015

Hello!  I am using JMP 13 (regular) and learning about predictive and specialized modeling.

 

I have a dataset (attached) with over 100,000 observations of a process.  75% of the observations had a duration of 4 days or less.  I'd like to know why some observations took over 4 days.  I've identified some possible factors:  1 continuous and 16 categorical.

 

Can you please suggest one or more JMP analyses that I could apply to the data?

1 ACCEPTED SOLUTION
Peter_Bartell

Joined:

Jun 5, 2014

Solution

Whenever I start a predictive modeling exercise, especially if I inherited the data from somewhere else with little knowledge of how, where, when, and under what circumstances the data were collected, I spend some time in what I call 'getting acquainted with the data' mode. I look for things like data quality, unusual or suspicious observations, missing values (you have none of these), nonsense values, and any other feature that sticks out at me that might make modeling problematic. I always start with the Distribution platform to get a feel for "Where's the middle, how spread out are the data, and is there anything odd or unusual going on?" From there, especially with a relatively small set of predictor variables, I use the Fit Y by X platform to look for relationships between predictors and responses...and compare what I see with my process/domain knowledge. If a scatter plot shows that 'water runs uphill' (in other words, runs counter to known laws of physics, chemistry, biology, socioeconomic behavior, etc.), then I get suspicious and suspend the modeling work until I get to the bottom of the issue.

 

Data cleaning and prep is never fun...and takes work...but it's absolutely necessary.
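To make the 'getting acquainted' pass concrete, here is a minimal JSL sketch of the two platform launches described above. The column names (:Days, :X1, :X2) are placeholders; substitute whatever your table actually contains:

```jsl
// Getting-acquainted pass: Distribution first, then Fit Y by X.
// Column names below are placeholders — adjust to your own table.
dt = Current Data Table();

// Middle, spread, and anything odd or unusual
dt << Distribution( Y( :Days, :X1, :X2 ) );

// One-at-a-time relationships between predictors and the response
dt << Fit Y by X( Y( :Days ), X( :X1, :X2 ) );
```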

8 REPLIES
chris_kirchberg

Joined:

May 28, 2014

I take it the Y column is the number of days? If so, you can make an indicator column to flag the observations that are > 4 days, then use that as the response for modeling to see which variables have the greatest impact on predicting the indicator variable.

 

Chris

jb

Community Trekker

Joined:

Sep 23, 2015

Hi Chris. Yes, the Y column is the number of days. Thanks for the suggestion to create an indicator column based on it.

chris_kirchberg

Joined:

May 28, 2014

You're welcome.
Decision trees are a good, quick way to see the possible impact, and you can use an indicator column as the response.
A simple column formula like:
If( :Hours > 4, 1, 0 )

You can then change the modeling type to nominal and use the new column as the response.
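As a JSL sketch, the whole suggestion (create the indicator, set its modeling type, launch the decision tree) might look like the following; :Hours and the X column names are assumptions about your table, not known names from the attached data:

```jsl
// Create a nominal 0/1 indicator for observations over 4 days.
// :Hours, :X1, :X2 are placeholder names — adjust to your table.
dt = Current Data Table();
dt << New Column( "Over 4 Days", Numeric, Nominal,
	Formula( If( :Hours > 4, 1, 0 ) )
);

// Launch the Partition (decision tree) platform with the indicator as Y
dt << Partition( Y( :Over 4 Days ), X( :X1, :X2 ) );
```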

Best,

Chris


Peter_Bartell

Joined:

Jun 5, 2014

I took a quick look at the raw data. Are you aware that a few of the X variables take only the value "1"?

Peter_Bartell

Joined:

Jun 5, 2014

One other feature of the raw data: for many of the categorical x variables, a high proportion of observations fall at one level ('1' or '0') and relatively few at the other, in the 99%-to-1% range or more extreme for many of them. I'd make sure I spend some time pondering my cross validation and model validation schemes. Since you aren't running JMP Pro, this is going to create some work for you...but with very few observations at one level of any given categorical x variable, I worry about cross validation, overfitting, or simply not having enough observations at the low-proportion level for a signal to rise above the noise.
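Without JMP Pro's automatic validation column, one common workaround is a formula-based holdout column. This is only a sketch, and with levels as rare as 99%/1% a purely random split may leave few or no minority-level rows in the holdout, which is exactly the concern raised above:

```jsl
// Simple random 70/30 holdout column (0 = training, 1 = validation).
// Caution: with very rare category levels, confirm that both levels
// actually appear in each split before trusting the validation fit.
dt = Current Data Table();
dt << New Column( "Validation", Numeric, Nominal,
	Formula( If( Random Uniform() < 0.7, 0, 1 ) )
);
```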

jb

Community Trekker

Joined:

Sep 23, 2015

Hi Peter, thanks for the feedback on the categorical variables in my data.  I didn't realize some had only the value 1, and that others had one level present in very low proportion.  I think I'll go back and learn more about these categorical variables to see if I can exclude them from my analysis.

 
