Thank you for your response. We will indeed revisit the documentation you referenced. And ultimately, as you said, we may simply need to embrace this 'algorithmic variability'.
By way of explanation: while forcing reproducible results may not be appropriate in a professional setting, it is highly desirable in an academic one. This is a group project for a graduate school class. Unfortunately, the team members all reside in different states and are trying to work in concert to develop predictive models. For the Neural Nets, we are all using the same random seed with the Random Seed Add-In that, per our professor, was designed by Mia Stephens, Academic Ambassador with JMP, specifically for use in academic settings. It seems strange that the same person running the same model with the same random seed will get such different results from one day to the next (and always worse ones). When run within the same day, the model reliably produces the same test misclassification rate. The only change, other than shutting the computer down, was re-ordering the columns to prepare the file for an additional dump of data. In addition, other team members can't reproduce the results running that same model.
This has forced us to investigate whether there is a problem with the model, or whether the differences are attributable to randomness, which we thought had been eliminated via the Random Seed Add-In.
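For context on how we have been thinking about it: our understanding is that a fixed seed only reproduces results when every subsequent random draw happens in the same order on the same data, so anything that changes the sequence of random calls before the fit (re-ordered columns, a different launch sequence, extra platform runs) could change the outcome even with the seed set. A minimal JSL sketch of that idea (the seed value 1234 and the Random Uniform() calls are purely illustrative, not the add-in's actual code):

    Random Reset( 1234 );      // seed JMP's random number generator
    x1 = Random Uniform();     // first draw after the reset
    Random Reset( 1234 );      // re-seed with the same value
    x2 = Random Uniform();     // the same draw is reproduced
    Show( x1, x2, x1 == x2 );  // x1 == x2 evaluates to 1 (identical)

If that understanding is correct, it may explain why the model matches within a session but not after the file was rearranged or on a teammate's machine.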
Again, thank you for your response earlier.