Hi,
I realize I am late to this, but I decided to respond because I had a similar need back when I worked in semiconductors, and my data tables were so large that even a single loop with a break was too slow for me. If your tables are that large, see whether the following works for you; it is what I had to do. In the common case, where many timestamps fall within a single process step, it will save MUCH time, for two reasons: first, there are no row-by-row comparisons; second, it iterates only once per process step, setting table rows to values in bulk. It does this by concatenating the recipe timestamp data directly onto the original table, working with the two together, and then removing the recipe rows.
Names Default To Here( 1 );
//sample data table
dt1 = As Table( (1 :: 30)` || J( 30, 1, Random Uniform() ), <<column names( {"Time", "value"} ) );
//sample recipe lookup table
dt2 = As Table( (1 :: 26 :: 5)` || (5 :: 30 :: 5)`, <<column names( {"Begin", "End"} ) );
dt2 << New Column( "PO", character, <<set values( Words( "abcdef", "" ) ) );
dt2 << New Column( "recipe", character, <<set values( Words( "lmnopq", "" ) ) );
/////////////////////////////processing begins here
//rename the Begin column so it matches the timestamp column name in the data table
dt2:begin << set name( "Time" );
dt1 << concatenate( dt2, appendToFirstTable );
dt1 << sort( by( :time, :end ), order( ascending, descending ), replace table );
// find the row where each timestamp band begins (the appended recipe rows are the only
// ones with a nonmissing End value), plus a sentinel one past the last row
indices = Loc( dt1:end << get values ) |/ (N Row( dt1 ) + 1);
// for each segment, set PO and recipe to the values in the segment's first row
For Each( {{beg, end}, i},
	Across( indices[1 :: (N Row( indices ) - 1)], indices[2 :: N Row( indices )] - 1 ),
	dt1:PO[beg :: end] = dt1:PO[beg];
	dt1:recipe[beg :: end] = dt1:recipe[beg];
);
// clean up: delete the appended recipe rows and the helper End column
dt1 << select where( !Is Missing( :end ) );
dt1 << delete rows;
dt1 << delete columns( "end" );
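In case it helps to see the idea outside of JSL, here is a rough Python sketch of the same concatenate-then-sort approach. This is just an illustration, not JMP code: plain dicts stand in for the data tables, the column names mirror the example above, and it forward-fills row by row rather than assigning whole segments in bulk the way the JSL does.

```python
import random

# Sample data rows (time, value) and recipe bands (time = begin, end, po, recipe),
# mirroring the sample tables above.
data = [{"time": t, "value": random.random()} for t in range(1, 31)]
recipes = [
    {"time": b, "end": e, "po": p, "recipe": r}
    for b, e, p, r in zip(range(1, 27, 5), range(5, 31, 5), "abcdef", "lmnopq")
]

# Concatenate, then sort ascending by time; at a tie, a row with a
# nonmissing "end" sorts first, so each recipe row leads its band.
rows = data + recipes
rows.sort(key=lambda r: (r["time"], "end" not in r))

# Forward-fill po/recipe from each band-leading recipe row, dropping
# the recipe rows themselves (the JSL clean-up step).
po = rec = None
labeled = []
for r in rows:
    if "end" in r:                    # a recipe row: remember its labels
        po, rec = r["po"], r["recipe"]
    else:                             # a data row: stamp the current labels
        labeled.append({**r, "po": po, "recipe": rec})
```

After this runs, `labeled` holds the 30 data rows, each stamped with the PO and recipe of the band its timestamp falls in.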
If you happen to see this, let me know how it goes.
Cheers,
Brady