Hi @wsande,
I agree with @gzmorgan0 and @P_Bartell's suggestions, but let me tackle a few of your specific questions:
"The StdDev of the 15mAvg values is 0.00513... that would imply a population StdDev of 0.1538 (= 0.00513 * 30). However the total range of my data over the 80+ hours is only 0.05 Max(15mMax) - Min(15mMin). I would have expected the range to be several SD's not a fraction of an SD."
I can see how that might seem strange, but what you have is the range of the means of samples of size 900, not the range of the original data. If you look at my example from before, the range of the means of samples of size 900 was around 20, while the range of the original data was over 1,000. By collecting observations together and calculating a mean, the resulting numbers, each of them a mean of 900 observations, tend to be closer to one another than the original data points are to one another (and that's why we can state that the standard deviation of sample means will be smaller than the standard deviation of the original data by a factor of sqrt(n)). So, back to your case: the range of your means is 0.05, which, given samples of 900, we know is substantially smaller than the range of the original data. Thus, it shouldn't be surprising that the standard deviation we estimate for the original generating data is larger than the range of those means.
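If it helps to see this concretely, here's a minimal simulation sketch (in Python rather than JSL, with made-up numbers that only roughly echo my earlier example) showing how much tighter the 15-minute means are than the raw 1-second readings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up population roughly matching my earlier example:
# standard deviation 120 (the mean is arbitrary; only the spread matters here).
pop_mean, pop_sd = 1000, 120
n_per_mean = 900      # 900 one-second readings per 15-minute interval
n_intervals = 320     # roughly 80 hours of 15-minute intervals

data = rng.normal(pop_mean, pop_sd, size=(n_intervals, n_per_mean))
interval_means = data.mean(axis=1)

print("Range of raw data:       ", data.max() - data.min())                      # well over 1,000
print("Range of interval means: ", interval_means.max() - interval_means.min())  # roughly 20-30
print("SD of interval means:    ", interval_means.std(ddof=1))                   # close to 120/30 = 4
```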
"I wonder if I'm running into a "quantization problem. The resolution of my samples is 0.01, which is 2x the SD that I measuring on the sample averages"
Given what we know from the sample means here, we should expect that the range of the data in the original 1-second samples is larger, so hopefully you're not in a situation where your measurement device isn't capable of accurately measuring the phenomenon of interest. I can't really speak to that, though.
"I don't know if that's somehow skewing the results. I actually have > 200 of these data sets, for different "channels", and typically the Range is 3x to 5x the StdDev of the sample means."
I agree that we should expect the range of the sample means to be larger than the standard deviation of the sample means; that's true even in the first plot you showed (Range = 0.05, StdDev = 0.0051). However, with samples as large as 900, we shouldn't expect the range of the sample means to be larger than the standard deviation of the original variable from which the samples are drawn.
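As a quick sanity check on that ratio, here's another small sketch, again Python, with an SD of the means near 0.005 and about 320 intervals chosen purely to echo the scale of your summary table:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 15-minute means: SD of the means near 0.005 and about 320
# intervals (80+ hours of 15-minute means); numbers are illustrative only.
means = rng.normal(loc=10.0, scale=0.005, size=320)

range_of_means = means.max() - means.min()
sd_of_means = means.std(ddof=1)

print("Range of the means:", range_of_means)
print("SD of the means:   ", sd_of_means)
print("Range / SD:        ", range_of_means / sd_of_means)  # typically several, not a fraction
```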
"One thing I'm wondering about... If instead of 1-second samples, the underlying population was 10-second samples, then there would only be 90 samples per 15-minute interval, and the calculation of the population SD would change by a factor of 3. ( SQRT(90) instead of SQRT(900) ). Yet the underlying continuous process would not have changed. I'm having a hard time wrapping my head around this..."
Fantastic question! It seems perplexing at first, but hopefully I can explain it another way. If your original data had been collected every 10 seconds rather than every 1 second, the sample means in your final table would be built from samples of size 90, not 900. The standard deviation of the original data would not change, but the standard deviation of your sample means would. Let's use some real numbers to clarify this:
Take my original example: a population with a standard deviation of 120. When we had samples of 900, we found that the standard deviation of the distribution of those sample means was 4. We did this empirically, but we could also have just applied the formula sigma / sqrt(n): 120 / 30 = 4. If we had only the means of samples of size 900 and found the standard deviation of those means, we could have worked backward with sigma * sqrt(n): 4 * 30 = 120.
What about the case with data collected every 10 seconds and then averaged? Well, as you said, you'd have 90 observations going into each mean. This implies that the standard deviation of the means of those samples of 90 observations would be 120 / sqrt(90) = 12.65. Notice this standard deviation is larger than when we had means of samples of size 900. With fewer observations in each mean, the standard deviation of the means is larger -- they vary more from sample to sample because there are fewer observations balancing things out (the law of large numbers in action, i.e., the consistency of the estimator: it gets better with more data). So, let's work the other way. If that were the kind of data you had access to, means of samples of size 90 from observations taken every 10 seconds, the standard deviation you'd have to work with would be 12.65, and you'd take 12.65 * sqrt(90) = 120. So, nothing about the population is changing... what's changed is the process through which you've measured nature and the consequence of that process on the variability of the observations you have access to.
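Here's that arithmetic in one place, a sketch using the same illustrative population SD of 120 from the example above:

```python
import numpy as np

# The same arithmetic for both sampling schemes, using the illustrative
# population SD of 120 from the example above.
sigma = 120.0

sd_means_1s  = sigma / np.sqrt(900)   # 900 one-second readings per mean -> 4.0
sd_means_10s = sigma / np.sqrt(90)    # 90 ten-second readings per mean  -> ~12.65

# Working backward from either summary recovers the same population SD:
print(sd_means_1s * np.sqrt(900))     # 120.0
print(sd_means_10s * np.sqrt(90))     # 120.0
```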
All that said, this assumes independent and identically distributed observations. If there are systematic effects, autocorrelation, etc., as @gzmorgan0 mentioned, these simple formulas no longer adequately describe what happens when we take those means.
I hope this clarifies a few things!
@julian