Altug, I've tested my code a bit, and although it avoids explicit looping, which is good, the << Get All Values As Matrix message is simply too expensive when the table gets big, and the gains (if any) of V Std() over table operations are too minuscule to offset this.
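For reference, here is a minimal sketch of the matrix route I'm setting aside (a whole-table version that ignores the group split and assumes the columns of interest are all numeric; the variable names are my own):

Names Default To Here( 1 );
dt = Current Data Table();
mat = dt << Get All Values As Matrix; // copies every cell -- the expensive step
sds = V Std( mat );                   // row vector of per-column standard deviations
zeroVar = Loc( sds == 0 );            // matrix-column indices with no variance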
For a table of your size, a modification of Jim's code (to allow for your groups) is going to be faster. Central to this is the fact that JMP table operations are really fast. I'm not even convinced that the matrix ops, which are also really fast, are faster than the column functions.
Subsetting the table up front to create two subtables--one for each group--more than pays for itself (assuming you have enough memory to hold all three tables), because taking subsets of rows over and over is expensive.
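For contrast, here is a sketch of the repeated-extraction pattern that the one-time Subset avoids (it assumes, as in the script below, that the group ID is in column 1 and the remaining columns are numeric; the names are my own). The row lookup happens once, but dt[rows1, j] has to re-copy that row set on every pass through the loop:

Names Default To Here( 1 );
dt = Current Data Table();
grpcol = Column( dt, 1 );
rows1 = dt << Get Rows Where( As Column( grpcol ) == grpcol[1] );
For( j = 2, j <= N Col( dt ), j++,
    vals = dt[rows1, j];  // repeated row extraction -- this is the slow part
    sd = Std Dev( vals ); // within-group standard deviation for column j
);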
The code below (again, a slight modification of Jim's, to allow for your two groups) ran for me in about 2.5 seconds for 100K rows and 1K columns, whereas the routine I first submitted took over 6 seconds.
This will still be a bit slow on a table of your size... hopefully someone else will have a better idea.
Names Default To Here( 1 );
dt = Current Data Table();
dtcol1 = Column( dt, 1 );

// Split the table once into two invisible subtables, one per group
dt1 = dt << Subset(
    Rows( dt << Get Rows Where( As Column( dtcol1 ) == dtcol1[1] ) ),
    Invisible,
    Selected Columns( 0 )
);
dt2 = dt << Subset(
    Rows( dt << Get Rows Where( As Column( dtcol1 ) != dtcol1[1] ) ),
    Invisible,
    Selected Columns( 0 )
);

// Get all continuous data columns
colList = dt << Get Column Names( Numeric, Continuous );

// Loop across all columns (backwards, so removals don't shift the index)
// and drop from the list any column that varies in both groups; what
// remains are the columns with no variance in at least one group
For( i = N Items( colList ), i >= 1, i--,
    If( Col Std Dev( Column( dt1, colList[i] ) ) != 0 & Col Std Dev( Column( dt2, colList[i] ) ) != 0,
        colList = Remove( colList, i, 1 )
    )
);

// Delete the columns with no variance
dt << Delete Columns( colList );
Close( dt1, NoSave );
Close( dt2, NoSave );
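If you want to reproduce the timing, here is a rough sketch that builds a synthetic table on the scale mentioned above (100K rows and 1K continuous columns, plus one constant column so the script has something to delete); the table and column names are my own:

Names Default To Here( 1 );
nr = 100000; // rows -- scale down if memory is tight
nc = 1000;   // continuous columns
dtTest = New Table( "Test",
    Add Rows( nr ),
    New Column( "Group", Character, Set Each Value( If( Row() <= nr / 2, "A", "B" ) ) )
);
For( j = 1, j <= nc, j++,
    dtTest << New Column( "X" || Char( j ), Numeric, Continuous, Set Each Value( Random Normal() ) )
);
dtTest << New Column( "Const", Numeric, Continuous, Set Each Value( 1 ) );
t0 = Tick Seconds();
// ... run the script above against dtTest ...
Show( Tick Seconds() - t0 );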