Hi @dlehman1 ,
Thanks for posing an interesting conceptual question. In general, I agree with what everyone has said so far -- it's important to define what "validation" means here. That said, I would argue that validation can take on both meanings: a hold-out set used to improve the model's predictions, and newly generated samples used to test the model's predictive capability.
To me, both are important and both need to be done -- one for improving the model, and the other for confirming that the model actually works, especially on new samples near the boundaries of the parameter space, where models tend to be less accurate.
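To make that concrete, here is a minimal sketch of the two senses side by side (hypothetical data and a deliberately simple model, scikit-learn style -- not anyone's actual workflow): an interior hold-out score used while developing the model, and a separate check on samples held back from the edges of the parameter space.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical data: one predictor x on [0, 10], noisy nonlinear response y
x = rng.uniform(0.0, 10.0, size=500)
y = np.sin(x) + 0.1 * rng.normal(size=500)

# Keep the edges of the parameter space out of model building entirely
interior = (x > 1.0) & (x < 9.0)
X_int, y_int = x[interior].reshape(-1, 1), y[interior]
X_edge, y_edge = x[~interior].reshape(-1, 1), y[~interior]

# Sense 1: a random hold-out set drawn from the interior,
# used to check (and improve) the fit while developing the model
X_train, X_val, y_train, y_val = train_test_split(
    X_int, y_int, test_size=0.2, random_state=0
)
model = LinearRegression().fit(X_train, y_train)
print("interior hold-out R^2:", r2_score(y_val, model.predict(X_val)))

# Sense 2: "new" samples near the boundaries of the parameter space,
# never touched during model building, used to test predictive capability
# where models tend to be least accurate
print("boundary R^2:", r2_score(y_edge, model.predict(X_edge)))
```

With an overly simple model like this, the boundary score is usually much worse than the interior hold-out score, which is exactly the gap the second kind of validation is meant to expose.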
As to why published works tend not to use validation, it is to a large extent what you and @Jed_Campbell commented on: historical practice is slow to change. But that is itself a result of pressures within the scientific community that push toward finding a significant result rather than building a robust model. My background is physics, and physicists are notorious for creating a simple model for one situation and then extrapolating it to others. Take the simple pendulum as an example. It is the basis for almost all introductory physics problems, yet in practice it isn't a great model -- it has needed so many tweaks and corrections to work in non-ideal situations (think of having to add electron spin into the orbital mechanics of an atom's electronic states).
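For reference (this is the standard textbook result, not anything specific to this thread), the simple pendulum only gives the clean simple-harmonic answer because of the small-angle approximation,

$$\ddot{\theta} \;=\; -\frac{g}{L}\sin\theta \;\approx\; -\frac{g}{L}\,\theta \quad \text{for } |\theta| \ll 1,$$

so the familiar solution $\theta(t) = \theta_0 \cos\!\big(\sqrt{g/L}\,t\big)$ is already an extrapolation hazard the moment the amplitude gets large -- the model was never validated outside the regime it was built for.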
To me, the simple pendulum isn't a very robust model -- but it does satisfy the interest and desire to "find" something significant in the data. Sure, it's a great start, and we all need to start somewhere. But if the goal is a broader, more robust model that can be used in more general settings, then validation in both senses discussed above is required.
More "mundane" results might ultimately lead to more robust models and better predictive capabilities in science, but for that the culture of the scientific community needs to change, with less emphasis placed on splashy, flashy new findings.
In short, I would say that not only is historical practice slow to change, but the scientific community also needs a cultural shift in what results get recognized, and how. A sort of reorienting of priorities.
DS