Hi Dale.
Yes, it's always nice to have as much info as you can about how the data was generated, and I certainly wouldn't pass on looking at that if it's available. But it's not necessary to know those details in order to do a formal analysis that can estimate how the factor(s) being studied affect both the central tendency and the variation in the attribute being measured, as in Hector's example.
For many analyses, we often take an individual data value to be just that. However, in most engineering applications, that single data value is typically some kind of summary statistic produced by a measurement system that may be taking many measurements (possibly not even available to see) in order to produce that one value. Example: think of an optical measurement system scanning a substrate and making hundreds of light reflectance measurements. The system may spit out a few numbers (the min and max reflectance, different percentiles, the mean or the median, some measure of variation, etc.). We then go forward and run analyses on any of those, treating one set of summary statistics as one experimental unit (n=1). We just have to keep in mind what characteristic the values we're using in our analyses are actually measuring. If we measure a second piece of substrate that received a treatment (i.e., another experimental unit), then we treat it as n=2.
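To make that concrete, here's a minimal sketch in Python (with made-up reflectance numbers, since the actual measurement system isn't specified) of how hundreds of raw readings per substrate collapse into one row of summary statistics, and how that one row is what enters the analysis as a single experimental unit:

```python
# Hypothetical example: raw scans collapse to one row of summaries per substrate.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

def summarize(raw):
    """Collapse one substrate's raw scan into a single row of summary statistics."""
    return {
        "mean": np.mean(raw),
        "median": np.median(raw),
        "min": np.min(raw),
        "max": np.max(raw),
        "sd": np.std(raw, ddof=1),
    }

# Two substrates (two experimental units), each scanned ~500 times.
substrate_A = rng.normal(loc=0.80, scale=0.02, size=500)   # untreated
substrate_B = rng.normal(loc=0.83, scale=0.01, size=500)   # treated

units = pd.DataFrame([summarize(substrate_A), summarize(substrate_B)],
                     index=["A (untreated)", "B (treated)"])
print(units)   # n = 2 rows, no matter how many raw readings fed each row
```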
If that measurement system instead made a million light reflectance measurements on each of those substrates in order to come up with that set of summary statistics, our analysis would still treat it as n=2.
Again... I still take your point that having those behind-the-scenes details would be nice, but Hector can still do a one-way ANOVA or regression knowing his analyses are tracking two components (central tendency and variation) in whatever is being measured.
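And to show what I mean by tracking two components, here's a rough sketch, again with simulated data since Hector's actual numbers aren't in front of me, of a one-way ANOVA run once on the per-unit means (central tendency) and once on the per-unit log SDs (variation):

```python
# Rough sketch with simulated data: each experimental unit contributes one
# mean and one log(SD); a one-way ANOVA on each column then tracks the
# treatment effect on central tendency and on variation separately.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def unit_summaries(n_units, loc, scale):
    """Simulate n_units substrates; return per-unit means and log SDs."""
    means, log_sds = [], []
    for _ in range(n_units):
        raw = rng.normal(loc, scale, size=500)      # one substrate's scan
        means.append(raw.mean())
        log_sds.append(np.log(raw.std(ddof=1)))     # log puts SDs on a friendlier scale
    return np.array(means), np.array(log_sds)

# Three treatment levels, 5 experimental units (substrates) each.
m1, v1 = unit_summaries(5, loc=0.80, scale=0.020)
m2, v2 = unit_summaries(5, loc=0.83, scale=0.015)
m3, v3 = unit_summaries(5, loc=0.85, scale=0.010)

print("ANOVA on means (central tendency):", stats.f_oneway(m1, m2, m3))
print("ANOVA on log SDs (variation):     ", stats.f_oneway(v1, v2, v3))
```

Using log(SD) rather than the SD itself is just one common choice for the dispersion response; the point is only that each substrate still contributes a single value to each analysis.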