I Disagree Bill
> For a normal distribution of errors, considerably more observations at each point, not redundant observations, need to be made.
>
It may not be obvious, but "normal distribution" is a term referring to a specific pattern of errors, the Gaussian distribution, the histogram of which will tend to be mound-shaped and clustered around zero.
All a whole bunch of repeat observations do (when all contain errors drawn from the same normal distribution) is give a mean with ... an error drawn from a different normal distribution, one with a smaller standard error. Averaging hardly eliminates random errors or the normal distribution of those errors.
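Here's a quick NumPy sketch of that point (the 5 arc-second standard error and the counts are just assumed figures for illustration):

```python
# Repeat observations of one angle, all with errors from the same
# normal distribution: the set means cluster tighter, but still err.
import numpy as np

rng = np.random.default_rng(42)
sigma = 5.0                # assumed standard error of one observation, arc-sec
n_reps, n_trials = 10, 100_000

# Each row is one set of n_reps observations; errors are about a true value of 0.
means = rng.normal(0.0, sigma, size=(n_trials, n_reps)).mean(axis=1)

print(np.std(means))                  # ~1.58 = 5/sqrt(10): a smaller standard error
print(np.mean(np.abs(means) > 1.58))  # ~0.32: a third of set means still off by > 1 sigma
```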
The Mean Of A Small Set Of Angles Is Not The True Angle
It may be close for some purposes, but not all. The mean may be the true angle, plus or minus some error. This also applies to distances. That is the reason least squares is used in the first place.
The mean of a larger set of angles or distances has a greater probability of being closer to the truth.
Paul in PA
The Mean Of A Small Set Of Angles Is Not The True Angle
we never know the true angle or distance.
Ah, the McGrath fallacy
> The mean of a larger set of angles or distances has a greater probability of being closer to the truth.
Actually, the "true" value of any measured physical quantity such as angles and length is necessarily unknown. At best, one makes a very good estimate of the quantity with a sufficiently small uncertainty as to be useful for some purpose. Speaking of "the truth" in absolute terms is unsophisticated in the context of modern measurement science.
While it is true that the mean of a series of n measurements of a quantity with random, normally distributed errors with standard error = s will be expected to have a standard error of s/SQRT(n), that doesn't mean that relying upon repeated measurements to reduce uncertainty in the mean is a highly desirable procedure, particularly when a standard error of s is an acceptable uncertainty.
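A back-of-the-envelope sketch of why chasing the mean gets expensive fast (the 5 arc-second figure is just an assumption):

```python
# Standard error of the mean is s/sqrt(n): each halving of the
# uncertainty costs four times the observations.
import math

s = 5.0  # assumed standard error of a single observation, arc-sec
for n in (1, 4, 16, 64, 256):
    print(f"n = {n:3d}: std error of mean = {s / math.sqrt(n):.2f} arc-sec")
```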
The Mean Of A Small Set Of Angles Is Not The True Angle
> we never know the true angle or distance.
you must know Dr. Ben Buckner....;-)
> We approach zero error or ultimate precision as we approach infinity. This is illustrated by the relationship between the error in a single value and the error in the mean of a large number of repetitions of a quantity. The error in the mean of a set of observations in a measuring procedure is equal to the standard deviation for the procedure divided by the square root of the number of observations in the set. Perfection is found at infinity because the square root of infinity is still infinity and anything divided by infinity is zero. No matter how many times I repeat an angle, I cannot resolve it to zero error because I must operate with a finite number of observations. -
See more at: www.profsurv.com
B-)
I Disagree Bill
You can observe a point with OPUS/CORS for a whole day. Send it away and get a good number for the observation. Get a nice least squares adjustment and feel really good about it.
But all you located was an antenna. There is nothing to "prove" that you located any control/property point, only that you have an antenna position. But if you set up over it again, with a different antenna, a different tribrach, and a different HI to a different measure point on the antenna, then you have something.
Same thing with a traverse: work out all the error possibilities and you have good numbers, even if the least squares results aren't as good as a single observation. The more you cross-tie and add redundant measurements, the better.
Bill
I'm not really disagreeing with you 😉
I'd just rather have lots of checks that aren't perfect than one set of almost-perfect numbers
To Return to the Topic, Though
> You can observe a point with OPUS/CORS for a whole day. Send it away and get a good number for the observation. Get a nice least squares adjustment and feel really good about it.
>
> But all you located was an antenna. There is nothing to "prove" that you located any control/property point, only that you have an antenna position. But if you set up over it again, with a different antenna, a different tribrach, and a different HI to a different measure point on the antenna, then you have something.
If you're arguing that you can't expect improperly adjusted equipment to perform as well as that in good adjustment, who would argue that? However, it still has nothing to do with the central topic which was whether the 95%-confidence error ellipse should be computed using a different distribution as the degrees of freedom in the observations from which the point position at the center of the ellipse is derived increases.
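For anyone following along, here is a sketch of the two candidate computations (requires scipy; the dof values are arbitrary). The Wolf & Ghilani style factor, as I read it, is c = sqrt(2 * F(0.95; 2, dof)) and depends on the network's degrees of freedom, while treating the reference variance as known gives a fixed factor:

```python
from scipy.stats import f, chi2

# Degrees-of-freedom-dependent factor, sqrt(2 * F(0.95; 2, dof))
for dof in (2, 5, 10, 30, 100, 1000):
    c = (2 * f.ppf(0.95, 2, dof)) ** 0.5
    print(f"dof = {dof:4d}: 95% factor = {c:.3f}")

# Limit as dof -> infinity: chi-square with 2 degrees of freedom
print(f"dof =  inf: 95% factor = {chi2.ppf(0.95, 2) ** 0.5:.3f}")  # ~2.447
```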
So We Should Accept The 95% Error From A Crappy Adjustment
of a crappy network, as equal to the 95% error of a well-planned network?
What we do in surveying is to survey networks. A 4-sided lot is a simple network, but it is interdependent with the 8 surrounding lots, each a network unto itself. Did we in fact survey all those networks?
A closed traverse is a minimal network; Wolf & Ghilani give its LS solution considerably less weight than the LS solution of a true network.
Least squares was developed specifically to adjust true networks. A true network has a significant number of redundancies.
We have multiple methods to add those redundancies. We can use traditional methods to interconnect traverse points, use traditional methods to connect traverse points to pre-existing networks or create a GPS network to overlay all or parts of the traverse.
Or you can ignore the burden of your job and say "Good enough."
Paul in PA
And If You Do Not Make Enough Observations
to show a normal distribution, you have little right to adjust.
Paul in PA
And If You Do Not Make Enough Observations
>And If You Do Not Make Enough Observations to show a normal distribution, you have little right to adjust.
People have been adjusting with the compass rule for a century or two. LS is at least as good as the compass rule and more flexible in what observations it will accept.
If you have few observations and give it honest standard errors, the LS analysis post-processing sigmas will tell you that you don't have tight results.
More observations are better. But your requirement, which seems to be to take enough observations of each variable to demonstrate a normal distribution in measurements of that variable, is too high a burden. It should be acceptable to do what you readily can to remove systematic errors and then go with the LS analysis.
And this has nothing to do with whether the math of the problem LS was defined to work on calls for the F-distribution or the Rayleigh distribution in the ellipses.
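For the record, the Rayleigh reading gives a fixed factor. A tiny check (requires scipy; assumes the circular case, where the radial error of a bivariate normal point follows a Rayleigh distribution):

```python
import math
from scipy.stats import chi2

# Invert the Rayleigh CDF, 1 - exp(-k^2 / 2), at 95%
k = math.sqrt(-2 * math.log(1 - 0.95))
print(k)                              # ~2.4477
print(math.sqrt(chi2.ppf(0.95, 2)))   # same number: chi-square, 2 dof
```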
A Lot More Observations Or A Few More Redundancies
Will give equivalent results.
Economically, a few redundancies mean less field time.
More observations improve the statistics of the angle or distance, but the other random errors still remain.
Paul in PA
Chi-Square check..
Been wondering if the Chi-Square test should fit into this discussion. Since the issues revolve around redundancy, would it also be prudent to determine whether or not the data is a good fit? Doesn't that test check for basic blunders? Or am I missing something? The error ellipses may be reduced by first confirming how well the data is performing to expectations. Please clarify, since I'm somewhat new to this. Thanks.
Chi-Square check..
> Doesn't that test check for basic blunders?
No, the Chi-Square test does not check for basic blunders. You should never adjust data until all of the blunders have been removed. You also never adjust data that contains significant systematic error. Those errors are to be accounted for and resolved so that the only error left is random error.
All the Chi-Square test does is check to see if the magnitude of the error in the data fits your standard errors. (I refer to the standard errors as your expectation of error or your error budget.)
Bad data and bad expectations match so you can pass the Chi-Square test with bad data.
Good data and bad expectations do not match so you can fail the Chi-Square test with good data.
My opinion is that "closure" means very little and the Chi-Square test means a little something but not nearly as much as most people think.
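A sketch of what the test is actually computing, for the newer folks (the dof and variance below are assumed numbers, and scipy is required). It compares the a posteriori reference variance from the adjustment against the a priori value of 1 implied by your standard errors:

```python
from scipy.stats import chi2

dof = 12        # assumed network degrees of freedom
s0_sq = 1.8     # assumed a posteriori reference variance from the LS run

stat = dof * s0_sq          # chi-square distributed with dof degrees of freedom
lo, hi = chi2.ppf(0.025, dof), chi2.ppf(0.975, dof)
verdict = "pass" if lo < stat < hi else "fail: data and expectations disagree"
print(f"{lo:.2f} < {stat:.2f} < {hi:.2f}? {verdict}")
```

Note that the test only sees the ratio of errors to expectations, which is why bad data with matching bad expectations still passes.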
Larry P
> > Does anyone here have enough understanding of the theory to either agree or argue with me? Is there some assumption or interpretation I shouldn’t be making?
> I do not know enough to argue, although I try to follow along. It interests me.
>
> I met Professors Wolf and Ghilani once, for a two day seminar. Both were very personable and approachable. Professor Wolf isn't answering his mail anymore, but I'm pretty sure that Professor Ghilani would be tickled to hear from you. Drop him a line.
>
> BTW, if you do get a letter from Professor Wolf, the stamp will be worth saving.
Prof Wolf is no longer with us; it's been a while.
And If You Do Not Make Enough Observations
> >And If You Do Not Make Enough Observations to show a normal distribution, you have little right to adjust.
> More observations are better. But your requirement, which seems to be to take enough observations of each variable to demonstrate a normal distribution in measurements of that variable, is too high a burden. It should be acceptable to do what you readily can to remove systematic errors and then go with the LS analysis.
Yes, exactly. When a surveyor is using equipment, methods, and procedures whose errors are well characterized beforehand, it's a bit ridiculous to try to work them out de novo, as if all were wholly unknown quantities to be determined by testing. It is much, much more efficient to test the equipment to derive the standard errors of the different parts of the measurement process and then simply verify, by testing the residuals from an LS adjustment of the survey, that the weights derived from them were not unrealistic.
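A minimal illustration of that verification step (the residuals and standard errors below are made up; a rigorous check would use the propagated standard deviation of each residual, but this shows the idea):

```python
import numpy as np

residuals = np.array([1.2, -0.8, 3.1, -9.5, 0.4])  # hypothetical residuals, arc-sec
sigmas    = np.full(5, 2.0)                        # assumed a priori standard errors

# Standardized residuals near +/-1 suggest realistic weights;
# large values point at a blunder or an optimistic standard error.
for i, t in enumerate(residuals / sigmas):
    flag = "  <-- investigate" if abs(t) > 3 else ""
    print(f"obs {i}: {t:+.2f}{flag}")
```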
I was really hoping someone would tell me what other commercial software uses for the factor to get from post-processing standard deviation to 95% confidence.
Also, any references besides Wolf & Ghilani that recommend a formula.
Chi-Square check..
Hmmm, "basic blunders" was a poor choice of words. Probably should have said data corruption, or possibly inconsistent data. It may 'discover' data corruption. If so, then maybe fewer observations would be necessary while still achieving a good LS adjustment once the inconsistencies are eliminated.
Redundancy also means more field time, hence more money spent. You still need a certain number of extra observations; minimizing what is required for the job is what I was going for.
> I was really hoping someone would tell me what other commercial software uses for the factor to get from post-processing standard deviation to 95% confidence.
>
> Also, any references besides Wolf & Ghilani that recommend a formula.
Well, for the record, Star*Net's use of the factor of 2.447 to enlarge the standard error ellipse to 95%-confidence comes from E.M. Mikhail's "Observations and Least Squares", Harper & Row, Publishers, 1976
> I was really hoping someone would tell me what other commercial software uses for the factor to get from post-processing standard deviation to 95% confidence.
The only verbiage I see in the Star*Net manual is "[t]he ellipse itself is computed by STAR*NET from the standard errors in Northing and Easting and the correlation between those standard errors." No mention of actual factors or algorithm.
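For what it's worth, here is a sketch of how that computation typically goes (not Star*Net's actual code; the standard errors and correlation are assumed values). The covariance matrix is built from the N/E standard errors and their correlation, its eigenvalues give the semi-axes of the standard error ellipse, and the 2.447 factor scales it to 95%:

```python
import numpy as np

sn, se, rho = 0.010, 0.007, 0.3   # assumed std errors (m) and correlation
cov = np.array([[sn**2,     rho*sn*se],
                [rho*sn*se, se**2    ]])

# Eigenvalues of the 2x2 covariance are the squared semi-axes
eigvals = np.linalg.eigvalsh(cov)           # ascending: minor, then major
semi_minor, semi_major = np.sqrt(eigvals)
print(f"standard ellipse: {semi_major:.4f} x {semi_minor:.4f} m")
print(f"95% ellipse:      {2.447 * semi_major:.4f} x {2.447 * semi_minor:.4f} m")
```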