I am trying to go through a data file (dt1) that has about 10,000 rows. I have a For loop that goes from 1 to N Rows(dt1). It appeared to be hanging, so I put in a bunch of Show() statements to see what was happening. After processing about 2,000 records, no more Show() output appeared in the log. I decided to wait, and after about 10 minutes the script finished and the rest of the Show() output was blasted into the log all at once.
Does anyone know why, after about 6,000 Show() statements, JMP looks like it isn't doing anything? Is there a parameter I can set to keep the Show() output flowing?
For fast-executing iterations, the log window may not manage to update continuously. You can insert Wait(0) after each Show() to force the log to refresh on every pass. However, both the Show() and the Wait() calls will slow down execution, of course.
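A minimal sketch of that pattern, assuming dt1 is already open as the current data table (the loop body here is just illustrative):

```jsl
// Hypothetical loop: Wait( 0 ) yields control so the log window can repaint
dt1 = Current Data Table();
For( i = 1, i <= N Rows( dt1 ), i++,
	Show( i );
	Wait( 0 ); // forces the log to update each iteration, at some speed cost
);
```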
Ten minutes is a long time, though. It's hard to tell what may be slowing things down without more information about the type of calculations. You can try increasing available RAM or, if possible, writing more efficient code to improve performance.
Can you give us an inkling as to the calculations you're doing? That may help us solve the riddle.
I had a case recently where I was looping through a table and building several lists from the data. It appeared that larger tables increased processing time exponentially. I finally figured out (from a previous post) that JMP goes infinitely faster if you insert new values into a list at the beginning, rather than at the end. Processing a 21,000-row table went from 34 minutes to 15 seconds!
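The difference can be sketched like this (dt1 and the column name height are hypothetical stand-ins; the key point is the third argument to Insert Into):

```jsl
// Slower pattern: appending each value at the end of a growing list
results = {};
For( i = 1, i <= N Rows( dt1 ), i++,
	Insert Into( results, dt1:height[i] ) // default: appends at the end
);

// Faster pattern reported above: insert at position 1 instead
results = {};
For( i = 1, i <= N Rows( dt1 ), i++,
	Insert Into( results, dt1:height[i], 1 ) // inserts at the beginning
);
// Note: this builds the list in reverse row order,
// so flip it afterward if order matters.
results = Reverse( results );
```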
I had to step away from this to address some other issues... but I'm back trying to figure out what is happening...
I'm trying to loop through over 10,000 records in a JMP data table. There are several If statements that use the Length() and index-lookup functions, plus a couple of bean-counting variables... nothing complicated!
Out of frustration, I altered the parameters of the For loop to look at rows 10585 to 10590 only. I never got any of the Show() output to display in the log, and after 5 minutes I killed the job.
Can you give me the link to the post you referenced about exponentially long execution times for large files?
When you use a construction such as a For loop from 1 to N Rows(), execution time does not scale linearly with the number of rows, because each subscripted column access has to re-resolve the row. For a large number of rows it becomes much more efficient to use the implicit row reference: For Each Row.
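A hedged sketch of the two styles, assuming dt1 is the current data table and a hypothetical character column name (the condition is just an example):

```jsl
// Indexed loop: each dt1:name[i] subscript re-resolves the row
count = 0;
For( i = 1, i <= N Rows( dt1 ), i++,
	If( Length( dt1:name[i] ) > 3,
		count++
	)
);

// Implicit row reference: JMP advances the current row for you,
// so :name refers to the value in the row being processed
count = 0;
For Each Row(
	If( Length( :name ) > 3,
		count++
	)
);
```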