My first thought would be that if you could find a transformation of your data that would normalise it, you'd make life a lot easier for yourself, not least because you'd still be able to answer questions about the data (like how much more defoliated the trees are at station A than at station B) that you'll have difficulty answering if you start taking ranks of it. As a separate point though, it seems to me that if you feed the ranks into a regular two-way ANOVA, you're still not really retaining the assumption of independent, Normally-distributed residuals with constant variance: you're just making it less easy to demonstrate that they're not (and in fact you know they're not if you're analysing ranks).
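Just to make that first point concrete, here's a tiny sketch in Python (all the numbers are invented and have nothing to do with your actual data) of how taking ranks throws away the "how much more defoliated" information:

    import numpy as np
    from scipy import stats

    # Two invented stations whose mean defoliation differs by about 40
    # percentage points...
    station_a = np.array([0.75, 0.80, 0.85])
    station_b = np.array([0.35, 0.40, 0.45])
    # ...and two whose means differ by only about 3 points...
    station_c = np.array([0.41, 0.42, 0.43])
    station_d = np.array([0.38, 0.39, 0.40])

    # ...produce exactly the same ranks, so a rank-based analysis can't tell
    # you how much more defoliated one station is than the other.
    print(stats.rankdata(np.concatenate([station_a, station_b])))  # [4. 5. 6. 1. 2. 3.]
    print(stats.rankdata(np.concatenate([station_c, station_d])))  # [4. 5. 6. 1. 2. 3.]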
One transformation you might consider for your data is the arcsine square root transformation, which is the one usually recommended when your data is a proportion (see for example http://udel.edu/~mcdonald/stattransform.html ). It goes some way towards stabilising the variances towards the extreme ends of the scale (i.e. 0% and 100%), though there's not much any transformation can do about the very extreme ends.
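If it helps to see what that workflow looks like outside the Fit Model platform, here's a rough Python sketch (using numpy/pandas/statsmodels, with invented station and tree-type columns standing in for whatever your real factors are) of applying the arcsine square root transformation and then running an ordinary two-way ANOVA on the transformed response:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Invented example data: defoliation proportion (0-1) by station and tree type.
    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "station": np.repeat(["A", "B", "C"], 20),
        "tree":    np.tile(np.repeat(["oak", "pine"], 10), 3),
        "defol":   rng.uniform(0.05, 0.95, 60),
    })

    # Arcsine square root ("angular") transformation: asin(sqrt(p)).
    # Proportions must lie in [0, 1], so divide by 100 first if your data are percentages.
    df["defol_asr"] = np.arcsin(np.sqrt(df["defol"]))

    # Ordinary two-way ANOVA (station x tree type) on the transformed response.
    model = smf.ols("defol_asr ~ C(station) * C(tree)", data=df).fit()
    print(anova_lm(model, typ=2))

The transformed column is then the response you'd carry through whatever model you end up fitting.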
If your data includes large numbers of trees with almost complete defoliation or almost no defoliation at all, you might consider analysing a binary variable (i.e. "defoliated" / "not defoliated") or an ordinal variable (1 = "not defoliated", 2 = "partially defoliated", 3 = "wholly defoliated") using logistic regression (see for example http://udel.edu/~mcdonald/statlogistic.html ), which would still let you fit explanatory variables such as station, type of tree, age of tree, height of tree and so on. Logistic regression is provided within the "Fit Model" platform: just change the personality to whichever type of logistic regression you intend to perform, and make sure that the modelling type of your response variable is set to "Nominal" or "Ordinal" (otherwise the two logistic regression options will be greyed out).
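For the binary version, here's a minimal Python sketch of the same idea (again with made-up data and illustrative column names, not anything from your data set):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Invented example: each tree coded as defoliated (1) or not (0), with a
    # couple of illustrative explanatory variables.
    rng = np.random.default_rng(2)
    df = pd.DataFrame({
        "station":    rng.choice(["A", "B", "C"], size=90),
        "age":        rng.integers(5, 80, size=90),
        "defoliated": rng.integers(0, 2, size=90),
    })

    # Binary (nominal) logistic regression of defoliation status on station and age.
    fit = smf.logit("defoliated ~ C(station) + age", data=df).fit()
    print(fit.summary())

The three-level ordinal version would be an ordinal (proportional-odds) logistic regression instead (statsmodels has an OrderedModel class for that), which is what the "Ordinal" modelling type gives you in Fit Model.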