Thanks Don.
There seem to be mixed opinions on this, but frankly I see the math playing out the same way in both situations. It also sounds like there are texts that delve into this subject. Because I think this question is important (and interesting), I also want to post a response I got from a colleague of mine (PhD, statistics). It's paraphrased:
...We quite frequently run into very similar situations. I do think it is appropriate to use the classic reliability tools here: mathematically, I do not see much difference between the concept of "time-to-event" and "force-to-event." In both cases, there is a (mostly) continuous measurement that leads either to a failure or to a censored observation. Survival/reliability analysis can be characterized more abstractly as "exposure-to-event." Meeker and Escobar's canonical text on the subject delves into degradation models and the way they relate to the simpler time-to-event analysis. In short, yes, I regularly carry out the sort of analysis you are describing. Of course, the analyst should check distributional fit and all the other modeling assumptions, to ensure that the specific problem is a good match for a survival analysis or reliability demonstration.
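To make the idea concrete, here is a minimal sketch of what "force-to-event" looks like in practice, using the lifelines package and a Weibull fit. The force values and censoring flags below are made up purely for illustration; as my colleague says, you would still need to check distributional fit before trusting the model.

```python
# Minimal sketch: treating "force-to-event" the same way as "time-to-event".
# Data are hypothetical -- force (e.g., in N) at which each unit failed
# (event = 1) or at which the test stopped without failure (event = 0,
# i.e., right-censored).
import numpy as np
from lifelines import WeibullFitter

force = np.array([412., 455., 478., 490., 500., 500., 515., 530., 560., 600.])
event = np.array([1,    1,    1,    1,    0,    1,    1,    0,    1,    1])

wf = WeibullFitter()
wf.fit(force, event_observed=event)   # exposure variable is force, not time

print(wf.lambda_, wf.rho_)            # estimated Weibull scale and shape
wf.print_summary()                    # inspect the fit before relying on it
```

The only thing that changes relative to an ordinary reliability analysis is the meaning of the exposure axis; the censoring mechanics and the likelihood are the same.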
Jim Pappas