In a recent thread, the question came up of a difference between Wolf & Ghilani's equations and Star*Net in how sigma values and error ellipses are calculated.
Possible methods:
1. A priori estimates: use the std err values you provide with your measurements to compute the confidence limits on the results, including error ellipses, no matter how well (or badly) your measurements actually fit.
2. (Wolf & Ghilani equations) Scale the ellipses, sigma_x, etc., by the overall goodness of fit, called sigma_0 or the "Total Error Factor", to obtain a posteriori values, i.e. equivalent to re-estimating your std err values.
3. Use the more pessimistic of (1) or (2), i.e. scale if the results are worse than your std err would have predicted.
4. (Star*Net) use the more pessimistic if and only if the Chi-squared test fails. This has some appeal, but it introduces a large discontinuity in error ellipses for a small change in measurements if you just cross the Chi-sq test limit.
5. Other?
Which method does other professional software use?
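To make the four policies concrete, here is a small Python sketch of my own (not code from Star*Net or any other package; the function name and arguments are illustrative). It takes an a priori error-ellipse semi-axis, the computed sigma_0, and the chi-squared acceptance limit on sigma_0, and returns what each method would report. For method (4) only the upper limit is modeled, since a failure on the low side (sigma_0 < 1) would make the "more pessimistic" choice the unscaled value anyway.

```python
def scaled_ellipse(a_priori_axis, sigma0, chi_limit, method):
    """Report an error-ellipse semi-axis under each scaling policy.

    a_priori_axis : semi-axis from the a priori covariance (method 1)
    sigma0        : a posteriori standard error of unit weight
    chi_limit     : upper acceptance limit on sigma0 from the chi-sq test
    method        : 1..4, as numbered in the list above
    """
    if method == 1:          # a priori: never scale
        return a_priori_axis
    if method == 2:          # Wolf & Ghilani: always scale by sigma0
        return sigma0 * a_priori_axis
    if method == 3:          # more pessimistic of (1) and (2)
        return max(1.0, sigma0) * a_priori_axis
    if method == 4:          # Star*Net-style: scale only if the test fails
        return (sigma0 if sigma0 > chi_limit else 1.0) * a_priori_axis
    raise ValueError("method must be 1..4")

# The discontinuity in method (4): sigma0 just below vs just above the limit
just_below = scaled_ellipse(0.202, 1.754, 1.765, 4)  # test passes: unscaled
just_above = scaled_ellipse(0.202, 1.767, 1.765, 4)  # test fails: jumps by 1.767x
```

The last two lines reproduce the jump demonstrated in the data file below: a tiny change in sigma_0 across the test limit multiplies the reported ellipse by nearly 1.77.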
#===============================================================================
# Here's a stupid little Star*Net file to demonstrate method (4). Un-comment a
# different pair of 1-4 distance measurements to change the result.
# Data file to investigate Star*Net use of sigma_0
C 1 0 0 ! !
C 2 400 0 * *
C 3 400 300 * *
C 4 -100 0 * *
D 1-2 400.1 0.1
D 2-3 300.1 0.1
D 3-1 499.9 0.1
B 4-1 90-42-51 0.001
A 1-2-3 90-00-01 10
A 1-2-3 89-59-59 10
A 2-1-4 180-00-00 10
#------------------------------------------------------------
# Pick any one pair of the 1-4 measurements below to change the goodness of fit.
# Note that this does not actually affect the other points' coordinates.
# with perfect measurement on pt 4, sig_0=0.98, passes Xsq test
# error ellipse for pt 2 is 0.202009
D 1-4 100.0 0.001
D 1-4 100.0 0.001
#with poorer measurements on pt 4, sig_0=1.754 near Xsq limit 1.765
# error ellipse for pt 2 is 0.202009, no change
#D 1-4 100.00178 0.001
#D 1-4 99.99822 0.001
#with worse measurements on pt 4, sig_0=1.767 exceeds Xsq limit 1.765
# error ellipse for pt 2 is 0.357016 suddenly large
#D 1-4 100.0018 0.001
#D 1-4 99.9982 0.001
#===============================================================================
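As a cross-check on the 1.765 limit quoted in the file comments: it is consistent with a two-tailed 5% chi-squared test on sigma_0 with 3 degrees of freedom (9 observations against 6 free coordinates). Here is a quick self-contained verification of my own, using the closed-form chi-squared CDF for 3 degrees of freedom and a bisection inverse (an assumption about the test Star*Net applies, not its actual code):

```python
import math

def chi2_cdf_3dof(x):
    # Closed-form chi-squared CDF for k = 3 degrees of freedom:
    # F(x) = erf(sqrt(x/2)) - sqrt(2x/pi) * exp(-x/2)
    return math.erf(math.sqrt(x / 2)) - math.sqrt(2 * x / math.pi) * math.exp(-x / 2)

def chi2_ppf_3dof(p, lo=0.0, hi=50.0):
    # Invert the CDF by bisection (F is monotone increasing)
    for _ in range(200):
        mid = (lo + hi) / 2
        if chi2_cdf_3dof(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

dof = 3
upper = math.sqrt(chi2_ppf_3dof(0.975) / dof)  # upper limit on sigma_0
print(round(upper, 3))  # -> 1.765, matching the limit in the file comments
```

Per the standard chi-squared table, the 0.975 quantile for 3 degrees of freedom is about 9.348, and sqrt(9.348 / 3) is about 1.765.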
I haven't written any software for least squares adjustments of traverses, but my aerotriangulation photogrammetry software uses a priori standard deviations as input estimates, and from the a posteriori standard deviations computed against the degrees of freedom I compute the Variance of Unit Weight (VUW). If the VUW is approximately 1.0, the adjustment is considered properly weighted, and the resulting variance-covariance matrices, when solved for eigenvectors and eigenvalues (error ellipses), will be properly scaled. If the VUW is less than 1.0, the adjustment is underconstrained; conversely, it is overconstrained when the VUW is greater than 1.0.
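The VUW described above is the standard reference-variance estimate. A minimal sketch in my own notation (not the poster's software), for the common case of uncorrelated observations with diagonal weights:

```python
def variance_of_unit_weight(residuals, sigmas, dof):
    """VUW = sum((v_i / sigma_i)^2) / dof, i.e. v' P v / dof with
    diagonal weights P_ii = 1 / sigma_i^2 from the a priori sigmas."""
    vpv = sum((v / s) ** 2 for v, s in zip(residuals, sigmas))
    return vpv / dof

# Hypothetical example: 5 residuals against a priori sigmas of 0.01,
# with 2 unknowns, so dof = 5 - 2 = 3
vuw = variance_of_unit_weight([0.008, -0.006, 0.011, -0.009, 0.005],
                              [0.01] * 5, dof=3)
# vuw = 1.09, i.e. close to 1.0: the a priori sigmas match the actual fit
```

Note that sigma_0 as used earlier in this thread is the square root of the VUW, so "VUW near 1.0" and "sigma_0 near 1.0" express the same condition.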
The same goes for my geodetic software when solving for three-, four-, or seven-parameter transformations, whether the Molodensky or the Bursa-Wolf model. However, even though I solve for the variance-covariance matrix, it is essentially meaningless with respect to a geometric real-world interpretation. (The same goes for leveling networks and/or gravity networks.)