abmayfield
Level VI

"not responding" vs. "still thinking"

I will sometimes set up some really big jobs in JMP Pro 16 on my MacBook Pro (32 GB of RAM), and after a few minutes, my Mac will tell me "Application not responding," which is confirmed by Activity Monitor. However, many times, if the particular platform I am using has a status/progress bar, I will see it advancing (however slowly). Is it ever possible to know whether the script is still running vs. whether the application has really "given up," in which case I'd need to relaunch JMP?

 

On this same note, it is strange that I cannot effectively "shuttle" more RAM or CPU to JMP Pro. Right now, I have a hung application for a huge job (a PLS analysis with over 100,000 predictors), but it is only using 4 GB of RAM, and I've got plenty to spare. I feel like there was once a way to go in and change the RAM allotment. What's the point of all this RAM when my Mac only ever uses 5-10 GB of it!? (sorry, this might be more of a Mac issue than a JMP one). 

Anderson B. Mayfield
3 REPLIES
ih
Super User (Alumni)

Re: "not responding" vs. "still thinking"

Hi @abmayfield ,

 

Can you verify that you have the 64-bit version of JMP installed?

 

As to monitoring scripts: here are a few ideas. I've used each of them in different situations:

 

  1. Create a window where you periodically write status messages. For example, every 10,000 iterations of a For loop you could write a value to a text box, then include '<< Bring Window To Front' and 'Wait(0)' after updating the text so JMP's GUI refreshes and you are sure to see the window (see the first sketch after this list). Don't do this too often or it will slow your script down.
  2. Write to a log file that you can monitor with a text editor. You can also read from a text file to decide whether to keep going, so every once in a while open a file and check whether it contains 'stop' (second sketch below). Again, don't do this too often.
  3. For very complex scripts you can use a separate instance of JMP to monitor progress and even interact with your script; see Socket communication to get and log messages from JMP, R, and python scripts. One copy of JMP (a client) can do the work and send messages to another instance of JMP (a server). You could then monitor the process on the server, or the client could ask the server each time whether it should keep going, or shut down gracefully if you want to stop early. This can even work with multiple clients on different computers working on separate parts of an analysis.
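
Here is a minimal sketch of the first idea. The Sqrt() call is just a stand-in for whatever your real script does, and the update interval is arbitrary:

// Idea 1: a small status window with a text box that is refreshed periodically
statusWin = New Window( "Job progress",
	statusBox = Text Box( "Starting..." )
);

n = 1000000;
For( i = 1, i <= n, i++,
	x = Sqrt( i );                      // stand-in for the real work
	If( Mod( i, 10000 ) == 0,
		statusBox << Set Text( "Iteration " || Char( i ) || " of " || Char( n ) );
		statusWin << Bring Window To Front;
		Wait( 0 );                      // let the GUI redraw before continuing
	);
);
statusBox << Set Text( "Done." );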
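
And a sketch of the second idea, using made-up file paths on the desktop. Create the "stop" file from outside JMP (e.g., with a text editor) when you want the run to end early:

// Idea 2: append progress to a log file you can watch in a text editor,
// and stop gracefully if a "stop" file appears
logPath  = "$DESKTOP/job_log.txt";
stopPath = "$DESKTOP/job_stop.txt";

n = 1000000;
For( i = 1, i <= n, i++,
	x = Sqrt( i );                              // stand-in for the real work
	If( Mod( i, 50000 ) == 0,
		Save Text File( logPath, "iteration " || Char( i ) || "\!n", Mode( "append" ) );
		If( File Exists( stopPath ),            // create this file to request a stop
			Save Text File( logPath, "stopped early at iteration " || Char( i ) || "\!n", Mode( "append" ) );
			Break();
		);
	);
);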
pmroz
Super User

Re: "not responding" vs. "still thinking"

There's also the caption command:

// Simulate a long-running loop, updating a floating caption every 10,000 iterations
n = 10000000;
for (i = 1, i <= n, i++,
	x = i^2 + i;
	if (mod(i, 10000) == 0,
		caption("Iteration # " || char(i)); // small floating status window
		wait(0);                            // give the GUI a chance to redraw
	);
);
caption(remove); // dismiss the caption when the loop finishes
wait(0);
abmayfield
Level VI

Re: "not responding" vs. "still thinking"

Thank you both. I have 64-bit JMP installed. I have seen this sort of nifty updating caption box for certain plug-ins I use, and it's definitely very useful for large jobs. I have JMP Pro 16 and the EA3 build of JMP Pro 17; I wonder whether they could be linked so that progress in one is reflected in the other?
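
If there's no built-in link, I suppose the log-file idea above could fake one: the copy doing the work overwrites a small status file, and the other copy polls it. A rough sketch with a made-up desktop path (both instances would just need to point at the same file):

// In the instance doing the work (e.g., JMP Pro 16): overwrite a status file as the job progresses
statusPath = "$DESKTOP/jmp_job_status.txt";      // hypothetical shared location
Save Text File( statusPath, "PLS: starting cross-validation..." );

// In the other instance (e.g., the JMP 17 EA): poll that file every few seconds
watchWin = New Window( "Remote job status", watchBox = Text Box( "waiting..." ) );
For( k = 1, k <= 120, k++,                       // poll for a while, then stop
	If( File Exists( statusPath ),
		watchBox << Set Text( Load Text File( statusPath ) )
	);
	Wait( 5 );                                   // seconds between checks; this loop ties up the watching instance
);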

 

My issue seems to be somewhat platform- (and problem-) specific and speaks to the need for predictor reduction (surely a HUGE area of research and interest). If I run a very large job under generalized regression, it will give me a progress bar. For partial least squares, it will sometimes (but not always) show me a progress message of "cross-validating to find the best dimension."

I think part of this is my fault, since I gave JMP Pro a somewhat impractical task as a sort of thought exercise: 1-16 Y's (i.e., not too many) vs. 500,000 X's (a third-order factorial of 96 predictors). Some of these third-order terms DO make sense to test, but certainly not all 500,000, and manually deleting 100,000 terms would be impractical. So I may play around with reducing the model complexity using DOE, or with testing random subsets of the 96 predictors in higher-order factorials.

I'm also considering using Gen-Reg's best subset (or even Predictor Screening) to weed out the uninformative terms, but then I worry that I might exclude a term that is not useful by itself but IS useful when interacting with another term. Based on Clay Barker's excellent webinar yesterday, it seems like pruned forward selection and several other Gen-Reg estimation methods actually do something like this: they consider the interaction of a new term with a previous term, keeping it if it lowers the AICc (or whichever model-selection metric you use) and excluding it if not. So maybe I will simply put the 96 predictors (with no interactions or quadratics) into Gen-Reg, use pruned forward selection to identify the optimal suite of predictors (which will ideally be <96), and then test 2nd- or 3rd-order factorials of those in PLS or Gen-Reg. I'm going to try this today. 
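
For what it's worth, the rough JSL I'm planning to start from for that first pass looks something like this. The column names are placeholders for my 96 predictors, and I still need to confirm the exact Generalized Regression arguments against the Scripting Index for my version:

// Stage 1 (sketch): main effects only, pruned forward selection, AICc validation.
// :Y1 and :X1-:X3 are placeholder names; the real Effects() list would hold all 96 predictors.
dt = Current Data Table();
gr = dt << Fit Model(
	Y( :Y1 ),
	Effects( :X1, :X2, :X3 ),
	Personality( "Generalized Regression" ),
	Generalized Distribution( "Normal" ),
	Run(
		Fit(
			Estimation Method( Pruned Forward Selection ),
			Validation Method( "AICc" )
		)
	)
);
// Stage 2: take whichever terms survive and refit a 2nd- or 3rd-order
// factorial of just those predictors in PLS or Gen-Reg.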

Anderson B. Mayfield