Over the past several years there have been a number of posts debating the most appropriate way to size 95% confidence error ellipses.
Here is a listing of the posts that I am aware of:
Error ellipses using a priori/a posteriori estimates
Least Squares Ellipse Confidence Calculations?
Relative Positional Precision - Why did I fail?
In the last post, I was trying to figure out why I kept failing the ALTA Relative Positional Precision test with Carlson SurvNET when the same adjustment passed in STAR*NET. I later found out it came down to two competing methods of computing 95% confidence error ellipses: one uses the Normal Distribution factor of 2.447 based on population statistics (STAR*NET), and the other uses an F-Distribution factor based on sample statistics, which varies with the amount of redundancy in the network (SurvNET). For a standard closed loop traverse, this made the ALTA standard nearly twice as hard to pass in SurvNET as in STAR*NET.
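For anyone curious how the two factors compare numerically, here is a minimal sketch in Python (using scipy, and assuming the commonly quoted formulas: the square root of the 95% chi-square value with 2 degrees of freedom for the population case, and sqrt(2 * F(0.95, 2, dof)) for the sample/F-distribution case; I am not claiming this is exactly what either package computes internally):

```python
from scipy.stats import chi2, f

# "Population" case: a priori standard errors treated as known.
# 95% scale factor for a 2-D error ellipse from the chi-square distribution.
pop_factor = chi2.ppf(0.95, df=2) ** 0.5          # ~2.4477

# "Sample" case: factor from the F-distribution, which depends on the
# network redundancy (degrees of freedom).
def f_factor(dof):
    return (2.0 * f.ppf(0.95, 2, dof)) ** 0.5

print(f"Chi-square/normal factor: {pop_factor:.4f}")
for dof in (3, 5, 10, 30, 100):
    print(f"F-distribution factor at {dof:>3} DOF: {f_factor(dof):.4f}")
```

At 3 degrees of freedom the F-based factor comes out to roughly 4.4, and it only drops toward 2.45 as the redundancy gets large, which is why a small loop traverse takes the biggest hit.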
However, it has come to my attention that Carlson SurvNET 14 now uses the Normal Distribution (see below).
This should make it easier on us SurvNET users to pass our ALTA tests in the future. I like it. Thoughts?
Bow Tie Surveyor, post: 405792, member: 6939 wrote: This should make it easier on us SurvNET users to pass our ALTA tests in the future. I like it. Thoughts?
The essential ingredient is having good a priori estimates of the standard errors of the observations. The residuals testing in the network adjustment should simply serve as a check for blunders. If you are relying on the residuals from the network adjustment to work out the standard errors, then those standard errors are uncertain and you pay the price in terms of the uncertainties that get propagated into the error ellipse calculations.
In other words, good a priori estimates mean you find that the same values work from project to project when the same equipment and procedures are used.
Here is an example from a recent project, an update of a Florida SOP boundary survey I did in 2014 to a 2016 ALTA survey. The traverse was a simple closed loop with the closing angle observed (see below). The unbalanced misclosure was 0.122' over 2347.417' traversed, for a precision of 1 in 19,272. The balanced misclosure was 0.020' over 2347.417' traversed, for a precision of 1 in 117,160.
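(Checking the arithmetic: the precision ratio is just the distance traversed divided by the linear misclosure. A throwaway sketch; the slight differences from the figures above come from the misclosures being rounded to three decimals here:)

```python
# Precision ratio expressed as 1:N = distance traversed / linear misclosure
def precision_ratio(misclosure_ft, traverse_ft):
    return traverse_ft / misclosure_ft

print(f"Unbalanced: 1 in {precision_ratio(0.122, 2347.417):,.0f}")  # ~1 in 19,200
print(f"Balanced:   1 in {precision_ratio(0.020, 2347.417):,.0f}")  # ~1 in 117,400
```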
Even though it was not an ALTA survey back in 2014, I ran the ALTA test just for the heck of it using Carlson SurvNET 10, and here were my results:
The connections with the asterisks did not pass the ALTA test, so 4 of 7 failed. A standard closed loop traverse only yields 3 degrees of freedom (one angular closure condition plus two coordinate closure conditions), so the factors produced by the F-distribution were not kind to my error ellipses.
When I ran it again recently with Carlson SurvNET 14, here is what I got:
Now every connection passes with flying colors. Previously, with the F-distribution at 3 degrees of freedom, SurvNET 10 scaled my standard error ellipses up by a factor of 4.31 to get the 95% confidence error ellipses, versus the 2.4477 normal-distribution factor that SurvNET 14 now uses. Quite a difference.