You can have a statistically significant model that does a poor job of prediction. Statistical significance tells you that those significant terms HELP to explain the response, but it does not guarantee a good fit.
For example, with a continuous response, picture a regression line fit through a wide, noisy cloud of points. The model is statistically significant, but it clearly would not predict well. The short simulation below makes the same point with numbers.
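Here is a minimal sketch in Python (simulated data chosen only for illustration, not from the original question): a predictor with a tiny real effect buried in noise produces a highly "significant" slope and, at the same time, an R-squared near 0.01.

```python
import numpy as np
from scipy import stats

# Simulated example: weak signal (slope 0.1) swamped by noise (sd 1.0)
rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
y = 0.1 * x + rng.normal(scale=1.0, size=n)

res = stats.linregress(x, y)
print(f"slope p-value: {res.pvalue:.1e}")     # tiny -> statistically significant
print(f"R-squared:     {res.rvalue**2:.3f}")  # ~0.01 -> almost no predictive value
```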
Determining why the model does not fit well can be difficult with a nominal logistic regression. Here are a few things that can help you determine predictive ability rather than "eyeballing" the table.
* You can ask for the ROC curve. The Area Under the Curve (AUC) is an indication of predictive ability. An AUC of 0.5 is the "baseline": it is like flipping a coin to decide whether a tree will live or not. Anything above 0.5 starts providing evidence of predictive ability, an AUC above 0.7 is getting into decent territory, and above 0.9 is fantastic.
* You can also ask for the confusion matrix. This is a table of observed results versus predicted results, and it is essentially what you were getting at by saving the prediction formula. It can help you determine where your model is starting to have trouble. Is it having trouble classifying trees that are dead? Trees that are alive? Or both? (A sketch of computing both the AUC and the confusion matrix follows this list.)
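If you want to reproduce those diagnostics outside your stats package, here is a minimal Python sketch with scikit-learn. The file name, predictor columns, and "dead"/"alive" coding are placeholders for your own tree data, not anything from the original question.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.model_selection import train_test_split

trees = pd.read_csv("trees.csv")                 # hypothetical file name
X = trees[["age", "diameter"]]                   # hypothetical predictors
y = (trees["status"] == "dead").astype(int)      # 1 = dead, 0 = alive

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# AUC: 0.5 is coin-flipping, ~0.7 is decent, >0.9 is excellent
prob_dead = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, prob_dead))

# Confusion matrix: rows = observed (alive, dead), columns = predicted
print(confusion_matrix(y_test, model.predict(X_test)))
```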
Don't forget to assess the data that the model was built on. How "balanced" is the data between live and dead trees? For example, if only 1% of the trees are dead, a "great" predictive model would be to say that all trees are alive: 99% accuracy! Not very helpful, though, as the quick calculation below shows. For this reason, having a response that is fairly close to balanced can be helpful.
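The arithmetic behind that warning, spelled out (the 1%-dead figure is just the hypothetical from above):

```python
# Why raw accuracy misleads with unbalanced classes:
# predict "alive" for every tree when only 1% are dead.
n_trees = 10_000
n_dead = 100                      # 1% dead
n_alive = n_trees - n_dead

accuracy = n_alive / n_trees      # the "all trees are alive" model
print(f"accuracy of the do-nothing model: {accuracy:.0%}")  # 99%
# ...yet it finds 0 of the 100 dead trees. Balanced data, or metrics such as
# AUC and sensitivity/specificity, are not fooled by this.
```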
Did the model contain only main effects? Would interactions help? What about quadratic terms? Is the data "rich" enough to support a model with these higher-order terms? (That is a great question to ask even if those higher-order terms are already in the model!) The sketch below shows one way to try such a model.
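One way to probe that, sketched in Python and reusing the placeholder X and y from the earlier sketch: add all squared terms and two-way interactions, then let cross-validation tell you whether the extra terms actually earn their keep.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# degree=2 adds squared terms and all two-way interactions of the predictors
higher_order = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),
    StandardScaler(),
    LogisticRegression(max_iter=1000),
)

# Cross-validated AUC shows whether the data can support the richer model
scores = cross_val_score(higher_order, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC with higher-order terms:", scores.mean().round(3))
```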
You could also try a different modeling technique. Perhaps a Partition or tree model would be a good thing to try (a brief sketch follows), and there are other tools, too.
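For instance, a quick decision-tree baseline (again with the placeholder X and y from above) can be compared directly against the logistic model's AUC:

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Shallow tree to keep it interpretable; tune max_depth as needed
tree_model = DecisionTreeClassifier(max_depth=3, random_state=0)
tree_auc = cross_val_score(tree_model, X, y, cv=5, scoring="roc_auc").mean()
print("decision tree AUC:", round(tree_auc, 3))
```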
I'm sure others can add more things to help, but this is where your fun REALLY starts! Best of luck!
Dan Obermiller