So I'm testing the validity of TBC's Network Adjustment report by running the same adjustment in S*N. First, despite the settings being the same in each project, my error values are different. Second, I'm scratching my head wondering why, when I hold only one station (a CORS), the errors look fine (<0.03' H/V), but when I fix two horizontally and one vertically, the vertical error shoots up to over half a foot. I've checked the coordinate values and they match. Any and all help much appreciated!
Ok, so one thing I figured out is that I have to supply initial error estimates for the stations. Since they're CORS, I used 1 cm as the limit for the elevation and for the other station I was testing for fit. I first ran a minimally constrained adjustment to test, and everything fit within hundredths, although TBC seems to claim the errors are basically nothing. Then, once I constrained a second station horizontally (since in CA you now have to tie your survey to at least two CSRN stations horizontally), the scalar I have to use in the GPS options in S*N is literally 30 times the supplied weighting in the covariance matrices, which just seems pretty high. Another thing I noticed is that the covariance matrices in TBC and S*N aren't the same values. SMH. What am I doing wrong?
TBC cov matrix for P242-P243: [screenshot]
S*N for same stations: [screenshot]
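Just to put a number on what that scalar implies, assuming the S*N scalar multiplies the vector standard errors (which is how I read the GPS options), a factor of 30 means the variances are being inflated 900 times. A quick sketch with a made-up 3x3 vector covariance:

```python
import numpy as np

# Hypothetical 3x3 covariance block for one GPS vector (m^2); values are made up.
cov = np.array([
    [1.2e-6, 2.0e-7, 1.5e-7],
    [2.0e-7, 9.0e-7, 1.0e-7],
    [1.5e-7, 1.0e-7, 2.5e-6],
])

factor = 30.0  # std-error scalar entered in the GPS options

# A factor applied to standard errors scales the covariance by factor**2,
# so a factor of 30 inflates the variances by 900.
cov_scaled = cov * factor**2

sigmas = np.sqrt(np.diag(cov))                # original std errors (m)
sigmas_scaled = np.sqrt(np.diag(cov_scaled))  # = 30 * sigmas
print(sigmas, sigmas_scaled)
```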
I am not familiar with TBC, so I can't speak to the covariance matrix information shown there. However, when exporting from Leica Infinity in SKI-ASCII format, there is a separate value included, referred to as m0, and you divide the covariance matrix values in that file by 1/m0^2 (i.e., multiply them by m0^2) to get the STAR*NET covariance matrix values. Maybe TBC does something similar?
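A minimal sketch of that conversion, assuming the rule above holds (the matrix values and m0 below are made up):

```python
import numpy as np

# Covariance block as read from a SKI-ASCII export (made-up values, m^2),
# along with the m0 value reported in the same file (also made up).
cov_ski = np.array([
    [4.0e-6, 1.0e-7, 2.0e-7],
    [1.0e-7, 3.5e-6, 1.5e-7],
    [2.0e-7, 1.5e-7, 8.0e-6],
])
m0 = 0.004

# Dividing by 1/m0^2 is the same as multiplying by m0^2.
cov_starnet = cov_ski * m0**2
print(cov_starnet)
```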
As to the scalar comment, let me guess: you're post-processing 1 second interval static data, correct? In my experience this will always result in overly optimistic statistics for the post-processed vectors. Post-processing at a 30 second interval will give more realistic results, and you will not need to apply the scalar. In Infinity there is a setting to sample at a 30 second interval (or whatever rate you want) even if the data is collected at a higher rate; I suspect the same option is available in TBC.
Is your confidence level display in TBC set to 95%?
What are your a priori error estimates (default standard errors) set to in both programs?
What about lat/long deflections in TBC? As soon as you constrain to multiple vertical values, deflections can have a substantial effect on your network, especially if it is large.
How are you loading the vectors into StarNet? From post-processing in TBC or from something else?
1 cm is a pretty arbitrary standard deviation for the control without something to back it up. Did you take a look at the short-term time series?
Where are you getting your coordinates for the CORS? Depending on where you're working, there is a lot of movement in CA that has not necessarily been captured in the latest coordinates from NGS. Are you looking at the CSRC?
There are subtle differences in the way the two programs work. We run both, and it's almost always very nearly the same coordinates with roughly the same statistics when compared. But it's never exactly the same. Code differs, processing workflow under the hood differs.
Some Least Squares programs, like Star*Net, assume you know your standard errors from prior experience and use Rayleigh distribution statistics.
Others use your given standard errors only for relative weighting, and estimate a scale factor for your standard errors from the data on this particular run, and use F-distribution statistics.
I don't know what TBC does.
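To make that distinction concrete, here is a minimal sketch (toy observations, not from this network) of the a posteriori variance of unit weight that the second style of program estimates. A value near 1 means your a priori standard errors were realistic; a value around 900 is what a 30x std-error scalar would be compensating for:

```python
import numpy as np

# Toy example: three measurements of the same quantity with a priori std errors.
obs = np.array([10.012, 10.007, 10.021])
sigma = np.array([0.005, 0.005, 0.010])  # a priori standard errors (m)
W = np.diag(1.0 / sigma**2)              # weight matrix

A = np.ones((3, 1))                      # design matrix: one unknown
x = np.linalg.solve(A.T @ W @ A, A.T @ W @ obs)  # weighted LS estimate
v = A @ x - obs                          # residuals
dof = len(obs) - 1                       # degrees of freedom

# A posteriori variance of unit weight: v'Wv / dof.
# Programs that fix it at 1 test this value; programs that estimate it
# use it to rescale your input covariances.
s0_sq = (v.T @ W @ v) / dof
print(x, np.sqrt(s0_sq))
```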
Quoting the earlier reply: "Post-processing at a 30 second interval will give more realistic results, and you will not need to apply the scalar."
Good point. The default option for processing interval in TBC is "automatic": TBC will choose the ideal processing interval based on the baseline observation time. It's helpful when you have a wide variation in observation time within your network, since it evaluates each baseline separately. It processes more quickly, too.
You can of course force it to process at whatever interval you like, but generally speaking the processor knows best.
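If you ever want to see what thinning to 30 seconds actually does to the epoch count, the idea is just to keep epochs that land on the 30 second boundary (hypothetical 1 Hz epoch list below):

```python
from datetime import datetime, timedelta

# Hypothetical 1 Hz epoch list: one hour of observations.
start = datetime(2024, 1, 1, 0, 0, 0)
epochs = [start + timedelta(seconds=s) for s in range(3600)]

# Keep only epochs on a 30 second boundary.
interval = 30
decimated = [t for t in epochs if t.second % interval == 0]
print(len(epochs), len(decimated))  # 3600 -> 120
```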
In TBC, what are you holding for vertical: ellipsoid height or orthometric elevation? And if you're using or holding an orthometric elevation, what is your geoid quality set to, by chance?
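For anyone following along, the reason geoid quality matters when holding an orthometric elevation is the basic height relation h = H + N. A made-up example:

```python
# Ellipsoid height h, orthometric elevation H, geoid undulation N: h = H + N.
# All values below are made up for illustration (meters).
h_gnss = 100.000      # ellipsoid height carried by the GNSS vectors
N_geoid = -32.500     # geoid undulation from the model at this point

H = h_gnss - N_geoid  # orthometric elevation implied by the geoid model
print(H)              # 132.5

# Any error in N flows straight into H, so a coarse geoid grid can easily
# dominate the vertical residuals once an orthometric elevation is held.
```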
As always, the question is: which answer is correct? That will tell you which process is "doing it right".