> Based upon your results, I do not see anything that appears to indicate poor technique, blunders, or any significant errors that would require that level of post processing of OPUS values.
>
> I am confused as to why you would do it??! OPUS has statistical/quality indicators already. Was there something that clued you to the possibility of an error? Does Star*Net do a better job of ferreting out blunders?
>
> I'm asking because I just do not know. Thanks.
One of the purposes of using least squares adjustments is to get realistic estimates of the uncertainties of various computed quantities, including the coordinates of control points and boundary markers positioned by the survey. In other words, since the survey will yield NAD83 coordinates for some boundary marker found or set, part of the exercise involves estimating the uncertainties of those coordinates. I'm assuming that it's well understood that having coordinates accurately expressed in relation to some independently reproducible coordinate system is a good thing.
I find that the easiest way to get the best results is to position some operationally convenient control point(s) on a project via OPUS and then extend the survey from those points to the other control points and boundary markers, both by GPS and conventional measurements as is most efficient. So the uncertainties of those other control points and boundary markers are a result of the propagation of uncertainties from at least two sources:
a) the actual connection to NAD83 (the uncertainties in the OPUS-derived coordinates) and
b) the uncertainties in the adjusted combinations of GPS vectors and/or conventional measurements from the main control with OPUS-derived coordinates.
Just specifying that the OPUS-derived coordinates are "right on the money" is unrealistic because they will contain errors of some magnitude. So the problem is how to efficiently estimate the uncertainties in the OPUS-derived coordinates. Using the covariance data from the OPUS solution is very simple.
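To make that concrete, here is a minimal sketch (nothing that Star*Net does internally) of rotating an ECEF X/Y/Z covariance from an OPUS report into local north/east/up uncertainties; the station latitude/longitude and the sigmas are hypothetical example numbers:
[pre]import numpy as np

def xyz_cov_to_neu_sigmas(lat_deg, lon_deg, cov_xyz):
    """Rotate an ECEF (X, Y, Z) covariance matrix into the local
    north/east/up frame at the station and return the N, E, U sigmas."""
    lat, lon = np.radians([lat_deg, lon_deg])
    # Rows are the local north, east, and up unit vectors expressed in ECEF.
    R = np.array([
        [-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)],
        [-np.sin(lon),                np.cos(lon),               0.0        ],
        [ np.cos(lat) * np.cos(lon),  np.cos(lat) * np.sin(lon), np.sin(lat)],
    ])
    cov_neu = R @ cov_xyz @ R.T
    return np.sqrt(np.diag(cov_neu))  # sigma_N, sigma_E, sigma_U in metres

# Hypothetical numbers: X/Y/Z sigmas only (no correlations known),
# at a made-up station latitude/longitude.
cov = np.diag(np.square([0.012, 0.021, 0.019]))
print(xyz_cov_to_neu_sigmas(33.8, -118.0, cov))[/pre]
If the full covariance (as in the OPUS extended output) is available, it goes in place of the diagonal matrix; using the sigmas alone ignores the correlations, so treat the result as a rough weighting rather than a rigorous one.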
Then, with a realistic uncertainty estimate of the NAD83 coordinates of the main control, the uncertainties of the other control points and boundary markers can be obtained just as one normally would when the GPS vectors and/or conventional measurements are adjusted in combination.
On large projects, I've found that occasionally it is simply operationally inconvenient to connect some remote part of the project to the main control points, so using OPUS and having realistic estimates of the uncertainties of the OPUS-derived coordinates of that detached part of the project means that you can do the ordinary QC, such as estimating the uncertainties of bearings and distances computed between two different sets of control and boundary markers tied separately to the CORS network via OPUS.
What it also means is that projects separately tied to NAD83 via OPUS can be combined in a single adjustment if the uncertainties in the NAD83 connections are realistically estimated.
It also means that if one returns to a project at some time in the future after wholesale destruction of control and boundary markers, knowing what the uncertainties in the NAD83 connections were originally facilitates using OPUS to re-establish the same coordinate system in a way that involves the least effort and maximum confidence.
I don't know of a better way to build confidence in the nuts and bolts of a survey than having redundant connections to NAD83 and adjusting those redundant observations in a way that both validates assumptions made beforehand and generates more reliable measures of uncertainty than OPUS provides.
> I wish that OPUS had a clearer explanation of its 'Quality' indicators. Did some research on it and it leaves my head hurting :). I knew it to be rigorous already and it gives me plenty of confidence in the results I get back.
The obvious way to get confidence is just to get more than one OPUS solution on the same point and to adjust them both using the weights that OPUS reports in the covariance matrices for the solutions (scaled as experience shows is appropriate) and verify that the residuals are consistent with that weighting scheme (standardized residuals aren't excessive).
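As a toy illustration of that idea (just the arithmetic, not Star*Net's adjustment machinery), here is a minimal sketch of combining repeat OPUS values for one coordinate component by inverse-variance weighting, with the residuals scaled by their sigmas as a blunder check; the heights and sigmas are made-up numbers:
[pre]import numpy as np

def combine_opus_component(values, sigmas):
    """Inverse-variance weighted mean of repeat OPUS values for a single
    coordinate component, plus the formal sigma of the combined value and
    each solution's residual scaled by its own sigma."""
    values = np.asarray(values, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    w = 1.0 / sigmas**2                      # weights from the reported sigmas
    mean = np.sum(w * values) / np.sum(w)    # weighted (least squares) estimate
    sigma_mean = np.sqrt(1.0 / np.sum(w))    # formal sigma of the estimate
    scaled_res = (values - mean) / sigmas    # crude stand-in for standardized residuals
    return mean, sigma_mean, scaled_res

# Hypothetical ellipsoid heights (m) from three OPUS solutions on one mark.
h, s, r = combine_opus_component([123.456, 123.430, 123.448],
                                 [0.015, 0.025, 0.020])
print(h, s, r)   # flag any scaled residual much larger than about 2 to 3[/pre]
Star*Net's standardized residuals are computed somewhat differently (they account for each observation's redundancy), but the flavor is the same: residuals that are large relative to the assumed sigmas mean either the weighting scheme or an observation is suspect.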
> If this was done as an academic exercise, then it may be worthwhile. But for any practical work, I just could not see doing it on a day to day basis.
No, this very definitely is not an academic exercise if one wants accurate NAD83 coordinates. I suppose if a person just wanted decimeter coordinates it might not be worthwhile.
> I doubt that Kent makes a habit of doing this sort of in depth analysis solely of the OPUS results for every job. It's an academic exercise.
Well, in the example I posted above, that control point was the one control point from which a whole bunch of stuff was positioned via GPS and conventional observations. That included original survey corners made in 1876, later resurvey corners, some private boundary markers showing erroneous ideas about where the original survey lines were located, and about six miles of a road that was the subject of litigation.
This was in a remote area where getting accurate NAD83 coordinates via the CORS network was really the only feasible alternative and OPUS was the best way to connect to the CORS network. The multiple OPUS solutions came along for virtually free since the base receiver occupied the station while secondary GPS vectors were surveyed from it. It would be sort of nutty to just toss all the redundant information overboard when incorporating it into an adjustment is so simple and gives an answer that is so excellent.
> .... The multiple OPUS solutions came along for virtually free since the base receiver occupied the station while secondary GPS vectors were surveyed from it. It would be sort of nutty to just toss all the redundant information overboard when incorporating it into an adjustment is so simple and gives an answer that is so excellent.
:good: My thinking exactly.
> No, this very definitely is not an academic exercise if one wants accurate NAD83 coordinates. I suppose if a person just wanted decimeter coordinates it might not be worthwhile.
Whoa. This is where I'm lost. Your values were from a single static session, broken down into multiple fast static sessions. Each was well within acceptable error magnitudes. The worst was your last "day".
[pre]1Day197d Delta-N 0.0166 0.0166 0.0224 0.7
1 Delta-E 0.0231 0.0231 0.0261 0.9
Delta-U 0.1675 0.1675 0.1985 0.8
Length 0.1699>[/pre]
I would have dismissed that one out of the batch and held the rest, even though it was acceptable as shown by the standardized residuals.
How does the "decimeter" coordinate come into play? OPUS has estimated errors next to its output to gauge the quality.
e.g.:
[pre]NAV FILE: brdc0880.14n                  OBS USED:  3411 / 3510 : 97%
ANT NAME: TRM5800       NONE            QUALITY IND.  9.88/ 19.92
ARP HEIGHT: 2.000                       NORMALIZED RMS: 0.373

REF FRAME: NAD_83(2011)(EPOCH:2010.0000)      IGS08 (EPOCH:2014.24015)

     X:   -2493399.796(m)   0.012(m)      -2493400.682(m)   0.012(m)
     Y:   -4681256.738(m)   0.021(m)      -4681255.306(m)   0.021(m)
     Z:    3530596.944(m)   0.019(m)       3530596.931(m)   0.019(m)[/pre]
Now the part where I'm lost:
With the X: and Y: errors shown in meters, are those figures deliberately misleading me as to the accuracy of the results, or have I simply misunderstood what they imply?
Any clarification would be most helpful. Thanks.
> Your values were from a single static session, broken down into multiple fast static sessions. Each was well within acceptable error magnitudes. The worst was your last "day".
>
[pre]1Day197d Delta-N 0.0166 0.0166 0.0224 0.7
1 Delta-E 0.0231 0.0231 0.0261 0.9
Delta-U 0.1675 0.1675 0.1985 0.8
Length 0.1699>[/pre]
>
> I would have dismissed that one out of the batch and held the rest, even though it was acceptable as shown by the standardized residuals.
Well, the largest residual was in the Up component. As you can see from the standard error that was used in weighting the Up component (+/-0.1985 ft.), that residual was not unusually large. There was no reason to reject that OPUS-RS solution since the N and E components had much smaller uncertainties and were useful.
The large residual error in the Up component would have been a problem if one were just taking an average of several OPUS positions on a point instead of making a least squares estimate using weights for each position derived from the uncertainties (as one can do in Star*Net).
> How does the "decimeter" coordinate come into play? OPUS has estimated errors next to it's output to gauge the quality.
Well, in my view, the only real way to demonstrate the reliability of an OPUS solution is to have an independent check. If surveyors are using OPUS solutions from short occupations in suboptimal settings, I would tend to place a low estimate on the reliability of the results, which is what prompted my assessment that those results are more GIS-grade than survey-grade. The obvious best check on reliability is to just get two or more OPUS solutions on the same point. Another independent check would be to locate a station positioned via OPUS by connecting it to another OPUS-positioned station with one or more GPS vectors of more than adequate quality.
If I include OPUS or OPUS-RS results in an adjustment, I do it by including the vectors rather than the positions.
> If I include OPUS or OPUS-RS results in an adjustment, I do it by including the vectors rather than the positions.
This method, i.e. using the position with its uncertainty, is a neat workaround for Star*Net. The overhead is the generation of multiple points with different names (10Day231, 10Day215, etc.) surrounding the actual control point, but as long as those other points are named so they can't be confused with the actual adjusted position of the point, it seems harmless.
I'm sorry, but how do I locate the original post that describes the process used? I'm not able to follow the link provided by the OP.
Thanks
Much obliged
Does anyone know if the g-file produced by OPUS Projects is usable in Star*Net?
> Does anyone know if the g-file produced by OPUS Projects is usable in Star*Net?
It's been a while since I've done that; you *may* have to edit out the I (session models) records first. You may need to edit either the C record in the g-file, or the G1 record in the converted Star*Net file, to get your point identifiers right.
Note that character position in a g-file record is important, so when you edit a g-file you need to make sure you have the information in the right columns. I recommend having a copy of the NGS Annex N (Global Positioning System Data Transfer Format) document handy so you can decipher the g-file records properly.
@danemince@yahoocom
I write my own translators for Star*Net (total station, leveling, and GPS), so I am not sure whether the g-file can be used directly, but the information in it can certainly be used once reformatted if it isn't directly importable.
For those unaware of the NGS GPS Data Transfer Format, aka the “GFILE”: here is a vector record from an OPUS-extended solution and the relevant portions of Annex N: https://www.ngs.noaa.gov/FGCS/BlueBook/pdf/Annex_N.pdf
Because I do a lot of bluebook projects (and don't use OPUS Projects), I wrote software to create B- and G-files from Trimble data (B-file) and TBC processing (g-file). One thing to note if anyone is trying to read or write the g-file format (and the B-file as well) is that everything is extremely column-sensitive, and there are no decimal points. To write vector components to the g-file you multiply by 10,000 and treat the result as an integer, and to write correlation terms you multiply by 1,000,000 and also treat the result as an integer (i.e., no decimal).
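For anyone rolling their own translator, here is a rough sketch of that integer scaling for a single fixed-width g-file field; the field widths below are placeholders, so check Annex N for the actual column positions of each record type:
[pre]def gfile_field(value, scale, width):
    """Format one g-file numeric field: scale to an implied-decimal
    integer (no decimal point) and right-justify it in a fixed-width
    column, since the format is strictly column-sensitive."""
    scaled = int(round(value * scale))
    text = str(scaled).rjust(width)
    if len(text) > width:
        raise ValueError(f"{value} does not fit in {width} columns")
    return text

# Vector components carry four implied decimals (scale by 10,000);
# correlation terms carry six (scale by 1,000,000).
dx  = gfile_field(-1234.5679, 10_000, 11)    # '  -12345679'
rho = gfile_field(0.123456, 1_000_000, 8)    # '  123456'
print(repr(dx), repr(rho))[/pre]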