The segmentation points for body weight were calculated by grouping the "Big Class.jmp" sample data by age and sex. Each weight was then compared to its group's segment points to assign a weight classification.
dt = Open( "$SAMPLE_DATA/Big Class.jmp" );

// Build a grouping key from age and sex, then freeze it as a static value.
New Column( "TYP" );
Column( "TYP" ) << Formula( Char( :age ) || :sex );
dt << Run Formulas;
Column( "TYP" ) << Delete Formula;

// Segmentation points per group: 25th percentile, median, 75th percentile.
d2 = dt << Summary(
	Group( :TYP ),
	Quantiles( 25, :weight ),
	Median( :weight ),
	Quantiles( 75, :weight ),
	Freq( "None" ),
	Weight( "None" ),
	Link to original data table( 0 )
);
Column( d2, 3 ) << Set Name( "A" );
Column( d2, 4 ) << Set Name( "B" );
Column( d2, 5 ) << Set Name( "C" );

// Bring the segment points back into the original table, matched by TYP.
Current Data Table( dt );
dt << Update( With( d2 ), Match Columns( :TYP = :TYP ) );

// Classify each weight against its group's points (1-4); the "N Rows"
// column joined in from the summary table is reused to hold the class.
Column( "N Rows" ) << Formula( If( :weight > :C, 4, If( :weight > :B, 3, If( :weight > :A, 2, 1 ) ) ) );
dt << Run Formulas;
Column( "N Rows" ) << Delete Formula;
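For readers more comfortable outside JSL, the same per-group quartile classification can be sketched in Python with pandas. This is a hedged equivalent, not the author's JMP method; the sample data below is made up to mirror the Big Class.jmp columns (age, sex, weight), and the column names A, B, C, and class follow the script above.

```python
import pandas as pd

# Hypothetical sample mirroring Big Class.jmp's age, sex, and weight columns.
df = pd.DataFrame( {
    "age":    [12, 12, 12, 12, 13, 13, 13, 13],
    "sex":    ["F", "F", "M", "M", "F", "F", "M", "M"],
    "weight": [80, 95, 100, 110, 85, 90, 105, 120],
} )

# Per-group segmentation points: 25th, 50th, 75th percentiles of weight,
# computed within each age/sex group (the role of the JSL "TYP" key).
grp = df.groupby( ["age", "sex"] )["weight"]
df["A"] = grp.transform( lambda s: s.quantile( 0.25 ) )
df["B"] = grp.transform( "median" )
df["C"] = grp.transform( lambda s: s.quantile( 0.75 ) )

# Classify each weight against its group's points, matching the JSL
# If( weight > C, 4, ... ) chain: class 1 by default, bumped upward
# each time the weight exceeds the next segmentation point.
df["class"] = 1
df.loc[df["weight"] > df["A"], "class"] = 2
df.loc[df["weight"] > df["B"], "class"] = 3
df.loc[df["weight"] > df["C"], "class"] = 4
```

Because `groupby(...).transform` broadcasts each group's quantile back to every row, no separate summary table or join is needed, which is one common route to speeding up this kind of classification on large data.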
If the data table is large, and there are many classifications and segmentation points, how can this "weight" classification be made faster?
Thanks, experts!