I am working on a neural net model that I have bagged to create mean and standard error prediction columns. I would like to export this to Python so that a customer could use it more easily. When I generate the scoring code, it only produces the mean of the bagged estimates. Is there a way to generate scoring code that goes back to the original inputs of the neural net instead of the bagged-estimates columns?
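For context, here is roughly what I'm hoping the exported scorer would look like: code that takes the original inputs, evaluates each bagged fit, and returns both the mean and the standard error. The per-bag functions below are stand-ins with made-up coefficients, just to show the shape I'm after:

```python
import math

# Stand-ins for the per-bag scoring functions the generator would emit;
# the coefficients here are illustrative only, not from a real fit.
def score_bag_1(x1, x2):
    return 0.50 * x1 + 0.25 * x2

def score_bag_2(x1, x2):
    return 0.45 * x1 + 0.30 * x2

def score_bag_3(x1, x2):
    return 0.55 * x1 + 0.20 * x2

BAGS = [score_bag_1, score_bag_2, score_bag_3]

def score(x1, x2):
    """Score one row from the ORIGINAL inputs: return the mean and the
    standard error of the mean across the bagged estimates."""
    preds = [bag(x1, x2) for bag in BAGS]
    n = len(preds)
    mean = sum(preds) / n
    var = sum((p - mean) ** 2 for p in preds) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                              # SE of the mean
    return mean, se
```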
I'm not sure what you mean by going back to the original inputs; the whole point of deploying the model is to be able to handle new inputs in your production pipeline.
Do you mean running the bagging algorithm again, fitting new "bags" on the new data, and then taking the mean of the new models?
Also, I noticed that code generation reported an error about the unsupported "Mean" operator. I will look into adding support for it in a future version, but in the meantime you can work around it by implementing the placeholder method that was generated in its place.
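Filling in that placeholder should be straightforward. A sketch of what the body could look like (the actual function name and signature will be whatever your generated file contains; `Mean` here is an assumption, and so is the treatment of missing values, which I've modeled on JMP's convention of ignoring them):

```python
def Mean(*args):
    """Placeholder body for the unsupported "Mean" operator: arithmetic
    mean of the arguments, skipping missing (None) values."""
    vals = [a for a in args if a is not None]
    if not vals:
        return None  # all arguments missing -> result is missing
    return sum(vals) / len(vals)
```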
Hmm, it just occurred to me that we might be looking at different things. Which JMP version are you using? If you are on 13.0 or 13.1, you won't get the bagged formula's dependencies, just the top level of the formula; transitive formula-dependency resolution arrived only in 13.2.
Also, an ensemble of Neural Networks is likely to generate very large models. If code size becomes an issue, look forward to 14.1, where you will be able to use matrix-based Neural Networks (aka "fast formulas"), which are both faster and smaller.
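To illustrate why the matrix form is smaller and faster: instead of one generated scalar formula per hidden node, a whole layer collapses into a weight matrix and a single activation call. A rough NumPy sketch with a single tanh hidden layer (all weights are made up for illustration; a real export would embed the fitted values):

```python
import numpy as np

# Illustrative weights for 3 hidden nodes over 2 inputs.
W1 = np.array([[0.5, -0.2],
               [0.1,  0.4],
               [-0.3, 0.7]])
b1 = np.array([0.0, 0.1, -0.1])
W2 = np.array([0.6, -0.5, 0.2])  # output-layer weights
b2 = 0.05

def score(x):
    """Matrix-based forward pass: two matrix products replace one
    generated scalar formula per hidden node."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2
```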