I'm preparing to test my total station for cyclic and systematic errors, and note that the instrument software (even without a DC) has the capability of taking more than one distance measurement with the EDM. See here:
I believe other instruments may be able to do this too.
If I am going to take multiple redundant observations of distance, is it better to set this to "one", then put each and every observation into LSA? Or let the instrument "do its thing" for each of 2, 3, or 4 observations? What is it doing? Averaging them? Something more? What's better from an LSA point of view: 9 individual observations, or 3 "averaged" observations?
> I'm preparing to test my total station for cyclic and systematic errors, and note that the instrument software (even without a DC) has the capability of taking more than one distance measurement with the EDM. See here:
>
> If I am going to take multiple redundant observations of distance, is it better to set this to "one", then put each and every observation into LSA? Or let the instrument "do its thing" for each of 2, 3, or 4 observations? What is it doing? Averaging them? Something more? What's better from an LSA point of view: 9 individual observations, or 3 "averaged" observations?
The instrument is simply displaying the range as the mean of however many ranges you instructed it to take. Probably the best way to determine where the point of diminishing returns is would be to set the number of measurements to 1, log a series of twenty measurements to the identical prism, and figure out how many measurements would have given essentially the same answer as the mean of all twenty.
That is done by computing the standard error, s, of the series of 20 and then finding the number of observations, n, such that s / SQRT(n) is an acceptably small number.
The whole point of testing an instrument is to figure out how well it will perform when used in the way you intend to use it. That will mean using the means of several ranges as the distance measurements entered into any Star*Net adjustment, since the individual measurements forming the mean all have the identical centering errors.
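The s / SQRT(n) rule above can be sketched in a few lines of Python. This is only an illustrative sketch: the series and the 0.001 ft tolerance are placeholder values, not anything prescribed in the thread.

```python
import math
import statistics

def observations_needed(ranges, tolerance):
    """Return the smallest n such that the standard error of the mean,
    s / sqrt(n), falls at or below the given tolerance."""
    s = statistics.stdev(ranges)  # sample standard deviation of the logged series
    # s / sqrt(n) <= tolerance  =>  n >= (s / tolerance)^2
    return math.ceil((s / tolerance) ** 2)

# Placeholder example: a 20-shot series of slope ranges (ft) and a 0.001 ft target
series = [19.588, 19.586, 19.589, 19.591, 19.588, 19.586, 19.587,
          19.588, 19.586, 19.588, 19.588, 19.588, 19.589, 19.588,
          19.588, 19.587, 19.587, 19.588, 19.586, 19.586]
print(observations_needed(series, 0.001))
```

A tighter tolerance simply drives n up as the square of the ratio s / tolerance.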
> Probably the best way to determine where the point of diminishing returns is would be to set the number of measurements to 1, log a series of twenty measurements to the identical prism, and figure out how many measurements would have given essentially the same answer as the mean of all twenty.
Perfect. On the list. Thank you.
I usually remain on the sidelines of such discussions. I had one statistics course in college and the main thing I learned was that I would never really understand statistical calculations.
That said, a single reported distance from a total station as I understand it is already an average. Or to put that another way, I understand that when you initiate a distance measurement, your total station actually measures the distance numerous times, looking for internal agreement among the distances returned sufficient to meet the instrument's preset acceptance criteria.
For your purposes, I do not see that the average of 3 shots or whatever number you choose serves your purposes in any meaningful way. I would rather have 9 measurements than 3 of 3 averaged.
> For your purposes, I do not see that the average of 3 shots or whatever number you choose serves your purposes in any meaningful way. I would rather have 9 measurements than 3 of 3 averaged.
We'll find out tomorrow.:-)
FYI, when I started this journey, I wanted to learn surveying. Little did I know what I was in for.:-O
> That said, a single reported distance from a total station as I understand it is already an average. Or to put that another way, I understand that when you initiate a distance measurement, your total station actually measures the distance numerous times, looking for internal agreement among the distances returned sufficient to meet the instrument's preset acceptance criteria.
>
> For your purposes, I do not see that the average of 3 shots or whatever number you choose serves your purposes in any meaningful way.
It all depends upon how much variation there is between repeated range measurements to a fixed prism. If repeated range measurements do vary slightly (as they tend to do), but the variation is significantly less than other errors inherent in the measurement, such as target centering errors, then, yes, the improvement realized by averaging would probably tend to be quite minimal.
I did the math; now, what does it mean?
I took the process for a dry run (in my living room...about 20' distance).
I know that this would probably be very different at 100' or more, but don't have that kind of room.
I decided to use both faces, not thinking there'd be any difference, but there is, and it seems significant. So, question #1 is: why is that? Isn't the EDM symmetrical about the horizontal axis of the instrument?
But beyond that, I'm not sure what to do next. The mean values of the EDM measuring 20 times are:
Direct: 5.9704343
Reverse: 5.9698247
I did the math; now, what does it mean?
> I took the process for a dry run (in my living room...about 20' distance).
> I know that this would probably be very different at 100' or more, but don't have that kind of room.
Your instrument displays ranges to the nearest 0.01mm? Why does this seem unlikely?
In the first place, you should have logged the *slope distances*. Those were the ranges that were actually measured. The horizontal distances are computed results that have a separate error component from the zenith angles. Horizontal distances are only measured directly when the instrument just happens to be oriented horizontally.
Blunder Detect!
> > I took the process for a dry run (in my living room...about 20' distance).
> > I know that this would probably be very different at 100' or more, but don't have that kind of room.
>
> Your instrument displays ranges to the nearest 0.01mm? Why does this seem unlikely?
>
Something's definitely wrong. I checked the raw data file...distances are in feet...like 19.588, 19.590, 19.586 etc...so the deviations are clearly larger than those shown in the spreadsheet above. When I brought them into Topcon Link, they showed up in meters, so that may be one source of error. My other thought is distance. I'll try again at 30 meters.
Thanks for the tip on SD vs. HD. I was wondering about that.
Good thing it was only a dry run, lol.
Blunder Detect!
> Something's definitely wrong. I checked the raw data file...distances are in feet...like 19.588, 19.590, 19.586 etc...so the deviations are clearly larger than those shown in the spreadsheet above.
Well, the slope ranges are the information you need to do the calculation. You can compare the F1 and F2 slope ranges to verify that they don't significantly differ on average.
Just post the actual data since your question was how to analyze it. There's no need to measure a longer distance when the real subject is the method of analysis.
It looks like you need to fix some programming errors in your spreadsheet, too, to actually compute the correct standard error of a series of n values.
Blunder Detect!
> Just post the actual data since your question was how to analyze it. There's no need to measure a longer distance when the real subject is the method of analysis.
Good point. I thought the greater the distance, the smaller the errors would be (relative to the distance), but I see your point, now that I think of it.
>
> It looks like you need to fix some programming errors in your spreadsheet, too, to actually compute the correct standard error of a series of n values.

Yes. I haven't completed inverting the formulas to give me n. I thought I'd start with just getting the standard deviations and standard error calculated. Baby steps.
Haven't tried uploading an .rw5 or other data file; not sure you can; but in any case, here are the data, cut and paste directly from Topcon Link:
Direct
# Slope Distance (f)
1 19.588
2 19.586
3 19.589
4 19.591
5 19.588
6 19.586
7 19.587
8 19.588
9 19.586
10 19.588
11 19.588
12 19.588
13 19.589
14 19.588
15 19.588
16 19.587
17 19.587
18 19.588
19 19.586
20 19.586
Mean: 19.5876
Instrument internal Average of 20 Observations: 19.588
Reverse
1 19.583
2 19.587
3 19.586
4 19.585
5 19.586
6 19.587
7 19.586
8 19.587
9 19.586
10 19.586
11 19.587
12 19.587
13 19.585
14 19.586
15 19.584
16 19.587
17 19.585
18 19.587
19 19.586
20 19.586
Mean: 19.58595
Instrument Internal Average of 20 observations: 19.586
Blunder Detect!
So, what are the standard errors of those two series of slope ranges?
Blunder Detect!
> So, what are the standard errors of those two series of slope ranges?
I get .000285' for the Direct,
and .000246' for the Reverse.
But that's still less than one tenth of a mm. How is that possible?
I'm doing something wrong.
I'm going to put these into Starnet and see what it says...
Here's another set.
# Slope Distance (USft)
1 100.00200
2 100.00200
3 100.00000
4 100.00100
5 100.00100
6 100.00300
7 100.00200
8 100.00200
9 100.00200
10 100.00100
11 100.00100
12 100.00200
13 100.00200
14 100.00100
15 100.00100
16 100.00300
17 100.00200
18 100.00100
19 100.00100
20 100.00100
Average: 100.00155' (0.00155' above 100', or about .4724mm)
Errors still seem very small.
Could this be because this is indoors; controlled conditions (temp, pressure), no atmospheric effects, etc.?
I'm going to try putting a series like this into Starnet and see if the Standard Errors match what I'm calculating by hand.
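Before going to Star*Net, the hand calculation for the 100-foot series above can be cross-checked with a short Python sketch (the variable names are my own; `stdev` uses the sample, n − 1, formula):

```python
import statistics

# The 20 slope distances (US survey feet) from the series above
series_ft = [100.00200, 100.00200, 100.00000, 100.00100, 100.00100,
             100.00300, 100.00200, 100.00200, 100.00200, 100.00100,
             100.00100, 100.00200, 100.00200, 100.00100, 100.00100,
             100.00300, 100.00200, 100.00100, 100.00100, 100.00100]

mean = statistics.mean(series_ft)    # mean of the 20 ranges
s = statistics.stdev(series_ft)      # sample standard deviation
sem = s / len(series_ft) ** 0.5      # standard error of the mean

print(f"mean = {mean:.5f} ft, s = {s:.5f} ft, SE of mean = {sem:.5f} ft")
```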
Blunder Detect!
> > So, what are the standard errors of those two series of slope ranges?
>
> I get .000285' for the Direct,
> and .000246' for the Reverse.
>
> But that's still less than one tenth of a mm, How is that possible?
> I'm doing something wrong.
I'm afraid you've gotten the wrong answer. Why not just use the statistical function of a pocket calculator? The answers are:
s = 0.0013 ft. for the "Direct" series
s = 0.0011 ft. for the "Reverse" series
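For anyone without a calculator handy, the same numbers fall out of the `statistics` module in Python, applied to the two series of slope distances posted above (sample standard deviation, n − 1 in the denominator):

```python
import statistics

# The 20 "Direct" and 20 "Reverse" slope distances (ft) from the posted data
direct = [19.588, 19.586, 19.589, 19.591, 19.588, 19.586, 19.587,
          19.588, 19.586, 19.588, 19.588, 19.588, 19.589, 19.588,
          19.588, 19.587, 19.587, 19.588, 19.586, 19.586]
reverse = [19.583, 19.587, 19.586, 19.585, 19.586, 19.587, 19.586,
           19.587, 19.586, 19.586, 19.587, 19.587, 19.585, 19.586,
           19.584, 19.587, 19.585, 19.587, 19.586, 19.586]

s_direct = statistics.stdev(direct)    # ~0.0013 ft
s_reverse = statistics.stdev(reverse)  # ~0.0011 ft
print(round(s_direct, 4), round(s_reverse, 4))
```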
> I'm going to put these into Starnet and see what it says...
> # Slope Distance (USft)
> 1 100.00200
> 2 100.00200
> 3 100.00000
> 4 100.00100
> 5 100.00100
> 6 100.00300
> 7 100.00200
> 8 100.00200
> 9 100.00200
> 10 100.00100
> 11 100.00100
> 12 100.00200
> 13 100.00200
> 14 100.00100
> 15 100.00100
> 16 100.00300
> 17 100.00200
> 18 100.00100
> 19 100.00100
> 20 100.00100
> Average: 100.00155', or .4724mm
> Errors still seem very small.
Just keep in mind that the deviations from the mean would only be part of the range errors. They are not the measure of the actual error in the range.
Blunder Detect!
> > > So, what are the standard errors of those two series of slope ranges?
> >
> > I get .000285' for the Direct,
> > and .000246' for the Reverse.
> >
> > But that's still less than one tenth of a mm, How is that possible?
> > I'm doing something wrong.
>
> I'm afraid you've gotten the wrong answer. Why not just use the statistical function of a pocket calculator? The answers are:
>
> s = 0.0013 ft. for the "Direct" series
>
> s = 0.0011 ft. for the "Reverse" series
So, taking s = 0.0012 ft. as the pooled apparent standard error of the two series of range measurements, the question is whether taking the range measurement as the mean of three measurements would accomplish a significant improvement.
The mean of three would show an apparent standard error of 0.0012 ft. / SQRT(3) = 0.0007 ft.
When you consider that the actual distance measurement has a real standard error that results from:
instrument centering error + target centering error + instrument standard error
and those errors are assumed to be randomly distributed, accumulating in a root-sum-of-squares sense:
SQRT [ (s.e.1)^2 + (s.e.2)^2 + (s.e.3)^2 ]
where :
s.e.1 = instrument centering error = 0.001 to 0.003 ft.
s.e.2 = target centering error = 0.002 to 0.010 ft.
s.e.3 = instrument range error = 0.003 to 0.006 ft.
So, for the best case (small end of the range of expected values of standard errors of centering and instrument range) the net contribution of all three would be:
SQRT [ 0.001^2 + 0.002^2 + 0.003^2 ] = 0.0037 ft.
Taking a series of 100 measurements would reduce the apparent standard error of the range measurement from 0.0012 ft. to 0.00012 ft., effectively eliminating that component and leaving:
SQRT [ 0.0037^2 - 0.0012^2 ] = 0.0035 ft.
In other words, the net improvement from even 100 repeat measurements to the same prism would be virtually non-existent.
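The error budget above can be run numerically. The centering and range errors below are the assumed best-case figures from the ranges quoted above, not measured values:

```python
import math

# Assumed best-case standard errors (ft), per the ranges quoted above
instrument_centering = 0.001
target_centering = 0.002
instrument_range = 0.003

# Independent random errors accumulate root-sum-of-squares
combined = math.sqrt(instrument_centering**2
                     + target_centering**2
                     + instrument_range**2)
print(f"combined = {combined:.4f} ft")    # ~0.0037 ft

# Even removing the 0.0012 ft apparent standard error entirely
# (roughly what 100 repeat measurements would approach) barely helps:
remaining = math.sqrt(0.0037**2 - 0.0012**2)
print(f"remaining = {remaining:.4f} ft")  # ~0.0035 ft
```

The centering errors dominate, which is the point: repeat EDM shots to the same prism attack only the smallest term in the budget.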
Blunder Detect!
For the direct series, 0.0013 (0.0012732 to 7 decimal places) is the sample standard deviation of the 20 observations, not the standard error. The standard error of the mean of 20 observations is 0.0002846 truncated, which agrees with rfc's calculation.
Your calculation of the standard error for 3 observations is correct. Note that it is a larger number than rfc's, which demonstrates the superiority of the mean of a larger number of observations.
The standard error of one observation is 0.0013, for the mean of 20 observations, 0.000285, and for the mean of 3 observations, 0.0007.
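The scaling can be seen directly: the standard error of the mean of n observations is s / SQRT(n). A quick sketch, using the sample standard deviation of the Direct series:

```python
# Sample standard deviation of the 20 Direct observations (ft)
s = 0.0012732

# Standard error of the mean for n = 1, 3, and 20 observations
for n in (1, 3, 20):
    sem = s / n ** 0.5
    print(f"mean of {n:2d} observations: standard error = {sem:.6f} ft")
```

This reproduces the three figures above: 0.0013 for one observation, 0.0007 for the mean of 3, and 0.000285 for the mean of 20.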
Blunder Detect!
> I'm afraid you've gotten the wrong answer. Why not just use the statistical function of a pocket calculator?
Uhh...Because I like doing things the hard way? (At least once, anyway, so I understand the math).:-)
> The answers are:
>
> s = 0.0013 ft. for the "Direct" series
>
> s = 0.0011 ft. for the "Reverse" series
In this case, I think I had the right answer, then went on to use it to get the wrong answer.
As Math Teacher points out, the Standard Error of the Mean, is not the same as the Standard Error of any individual observation.
In the lines just above my "answer", what I was calling the Standard Deviation is actually the Standard Error of any individual observation (.0013' and .0011').
I then went on and (erroneously) divided those by SQRT(20), which gave me the Standard Error of the Mean.
Superior but Useless
> Your calculation of the standard error for 3 observations is correct. Note that it is a larger number than rfc's which demonstrates the superiority of the mean of a larger number of observations.
Superior, but (as Kent points out) useless. Given that I will be turning at least 3 sets of D+R angles (still TBD) for my upcoming traverses, and that I'll be recording a distance for each of those observations, I'm satisfied that I can sleep at night setting the instrument to record a single distance observation for each angle observation.