New Table( "Final",
	New Column( "means", Set Values( collect_means ) ),
	New Column( "stddevs", Set Values( collect_stdevs ) ),
	//New Column( "Group Std", Formula( Col Std Dev( :means ) ) )
);
//pt << Summary( Group, Std Dev( :means ) );
pt = Data Table( "Final" ) << Summarize( p = Std Dev( :means ) );
Insert Into( collect_group_std, p, 0 );
//Close( "Final", nosave );
k = k + 5;
);
New Table( "Groups", New Column( "Summary Group", Set Values( collect_group_std ) ) );
The "Final" table in the above script has to be updated every time the outer For loop runs. But currently the "Final" table is not getting updated, and the script keeps using the previous "Final" table (the one from the previous iteration). Can someone help me so that the "Final" table is updated with new values every time?
The Summarize function should not be used as a message. When working with several tables that have identical column names, it is also a good idea to explicitly include the table in the scoping (e.g. dt:Data instead of just :Data). And although I am not sure exactly what you want to achieve here, shouldn't the lists collect_means and collect_stdevs be reset for each outer For loop?
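For instance, calling Summarize as a standalone function with explicit table scoping looks like this (a minimal sketch; the table and column names are taken from the script below):

```jsl
dt = Data Table( "Sample Data" );
// Summarize is called as a function, not sent as a message to the table;
// dt:Data makes it unambiguous which table the column belongs to
Summarize( m = Mean( dt:Data ), s = Std Dev( dt:Data ) );
Show( m, s );
```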
Below is your code with some minor changes.
dt = Data Table( "Sample Data" );
k = 5; // initial sample size (the "samples of 5" from the original comment)
collect_group_std = {};
For( l = 1, l <= 2, l++,
	Show( k );
	// reset the collection lists for each outer loop iteration
	collect_means = {};
	collect_stdevs = {};
	For( i = 1, i <= 3, i++, // 3 times
		nt = dt << Subset( Sample Size( k ) ); // samples of size k
		Summarize( m = Mean( nt:Data ), s = Std Dev( nt:Data ) );
		Insert Into( collect_means, m, 0 );
		Insert Into( collect_stdevs, s, 0 );
		//Close( nt, nosave );
	);
	pt = New Table( "Final",
		New Column( "means", Set Values( collect_means ) ),
		New Column( "stddevs", Set Values( collect_stdevs ) )
	);
	Summarize( p = Std Dev( pt:means ) );
	Insert Into( collect_group_std, p, 0 );
	//Close( pt, nosave );
	k = k + 5;
);
New Table( "Groups", New Column( "Summary Group", Set Values( collect_group_std ) ) );
By the way, using lists and a lot of intermediate tables is not very efficient. Here all data are numeric, and the subsets and the "Final" table seem to be of no interest other than acting as temporary holders for data. In this case I would suggest using matrices instead, which are much more efficient, especially for large data sets.
Try this and compare the performance with the above script:
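(The matrix script itself appears to have been cut off from the post. The following is a minimal sketch of what such a matrix-based version could look like, not the original poster's code; it assumes the same dt:Data column and the same loop counts as the script above, and uses Random Index to draw k distinct rows, mirroring Subset without replacement.)

```jsl
dt = Data Table( "Sample Data" );
data = dt:Data << Get Values; // whole column as a matrix, read once
n = N Rows( data );
k = 5; // initial sample size, as in the script above
group_std = J( 2, 1, . ); // one group std dev per outer iteration
For( l = 1, l <= 2, l++,
	means = J( 3, 1, . ); // 3 subset means per group
	For( i = 1, i <= 3, i++,
		idx = Random Index( n, k ); // k distinct row indices
		means[i] = Mean( data[idx] );
	);
	group_std[l] = Std Dev( means );
	k = k + 5;
);
New Table( "Groups", New Column( "Summary Group", Set Values( group_std ) ) );
```

Because everything stays in matrices, no intermediate tables are created or closed, which is where most of the time in the list-and-table version is spent.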