Lifelong ground-pounder-compass-rule guy, being dragged into the 21st century here, please be gentle.
So, I have a random traverse composed of multiple sets of angles and distances observed with a total station. On four of the twelve stations, I obtained 20-40 minute GPS observations with borrowed dual-frequency gear. I uploaded the resulting data files to OPUS-RS and they came back OK, and the data seems OK based on my reading of the OPUS website.
So my question: Carlson SurvNet allows the use of "control points," with either a common N & E standard error for all control points or a separate N & E standard error for each individual point. I assume by "Standard Error", Carlson means 1-sigma. (Yes, I know about assumptions.)
Anyway, if I enter the SPC values as control points in SurvNet, how do I assign a standard error to each point from the info provided in the OPUS solution report?
Thanks,
SS
The OPUS-RS results are NOT going to help your traverse adjustment; in fact, they will likely introduce error.
The relative accuracy of OPUS-RS can be 0.1-0.2' horizontal and 0.2-0.4' vertical on each point. If you ran the static occupations simultaneously on all four points, then you will have data that will greatly help your traverse adjustment. Having CORS nearby, or running a traverse with the GPS by leapfrogging, will also help. But a single OPUS-RS solution is just a radial point in your traverse, not tied down. The CORS used to compute the OPUS-RS solution can be 20-30 miles away or more.
You also need to look at how far away the CORS stations are that can be used in post-processing along with your static data.
Feel free to give me a call at (315)831-8175.
LeeGreen.com
As with everything in Carlson there are a few ways to do this:
If you have a 2D traverse, then you can create a control point file (CSV ASCII NEZ) with standard errors. The Carlson manual states, "Standard errors are the expected measurement errors based on the type of equipment and field procedures being used." I would say a good start would be the standard error of the point plus any setup error. Run the adjustment and check your results. Then, if necessary, revise your standard error estimates.
If you have a 3D traverse, then you can create a control point file (CSV ASCII P,Lat,Long,Ortho,D). You then have to set the standard errors in the Standard Errors tab under Coordinate Standard Errors. The downside to this method is that you cannot set each point's standard error individually. If the standard errors for all of your OPUS points are very similar, then this should not be a problem. You can then compare Carlson's SPC with OPUS's SPC as a check.
If you have a newer version of Carlson you can import the NGS G-File from the OPUS extended report. I personally have not tried this yet.
Kent McMillan posted a really great method for entering multiple OPUS solutions into Star*Net. I have used this method with Carlson, as Carlson will read a Star*Net .GPS file.
I have found that OPUS-RS gives very good results in my area.
> The OPUS-RS results are NOT going to help your traverse adjustment, in fact it will likely introduce error.
If you held them fixed they certainly would. But if you gave the OPUS-RS positions a really large standard error (say 1 foot) they wouldn't hold any weight. It would just be a way to get the traverse data close to a state plane basis while blunder detecting an open traverse.
But that's a rough way to break into LS.
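To see why a loosely weighted position "wouldn't hold any weight," recall that in least squares an observation's weight goes as the inverse of its variance. A minimal sketch (the sigma values here are illustrative, not from any particular adjustment):

```python
# In least squares, an observation's weight is proportional to 1/sigma^2,
# so a control point given a 1-foot standard error barely constrains the fit.
def weight(sigma):
    return 1.0 / sigma ** 2

tight = weight(0.01)  # well-determined point, sigma = 0.01 ft
loose = weight(1.0)   # OPUS-RS position held loosely, sigma = 1 ft
print(tight / loose)  # the tight point carries 10,000x the weight
```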
> But if you gave the OPUS-RS positions a really large standard error (say 1 foot) they wouldn't hold any weight. It would just be a way to get the traverse data close to a state plane basis while blunder detecting an open traverse.
>
> But that's a rough way to break into LS.
Well, if you want to use just the OPUS-RS standard error estimates, why wouldn't you go to the ENU Covariance matrix in the extended output OPUS report?
Here's an example of one:
[pre]
Covariance Matrix for the enu OPUS Position (meters^2).
0.0000005134 -0.0000000832 0.0000012033
-0.0000000832 0.0000007129 0.0000003103
0.0000012033 0.0000003103 0.0000770337
Horizontal network accuracy = 0.00193 meters.
Vertical network accuracy = 0.01721 meters.
[/pre]
The format of the matrix is:
[pre]
EE EN EU
NE NN NU
UE UN UU
[/pre]
with the diagonal elements, EE, NN, and UU being the squares of the standard errors of the E, N, & U components of the solution.
So, in the example, the standard errors would be:
[pre]
SQRT(0.0000005134) = 0.00072m = s.e.(E)
SQRT(0.0000007129) = 0.00084m = s.e.(N)
SQRT(0.0000770337) = 0.0088m = s.e.(U)
[/pre]
In Central Texas, I'd expect the standard errors would need to be multiplied by a factor of about 3.0, so that's what I'd do to relax them a bit.
That would give:
[pre]
0.00072m x 3 = 0.0022m = s.e.(E)
0.00084m x 3 = 0.0025m = s.e.(N)
0.0088m x 3 = 0.026m = s.e.(U)
[/pre]
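The arithmetic above can be sketched in a few lines of Python (the 3.0 scale factor is the local judgment suggested above, not a universal constant; small differences in the last digit versus the hand-rounded values above come from rounding before versus after scaling):

```python
import math

# Diagonal elements (EE, NN, UU) of the enu covariance matrix
# from the OPUS extended report example, in meters^2.
cov_diag = {"E": 0.0000005134, "N": 0.0000007129, "U": 0.0000770337}

SCALE = 3.0  # relaxation factor; adjust to local experience

for comp, variance in cov_diag.items():
    se = math.sqrt(variance)  # standard error = sqrt(variance)
    print(f"s.e.({comp}) = {se:.5f} m, relaxed = {SCALE * se:.4f} m")
```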
If the method by which the GPS antenna was centered over the ground mark had a significant standard error, then that would have to be taken into account also by forming the root sum of squares of the components.
That is, suppose the antenna has a random centering error over the ground mark of +/-0.001m in the E and N components. Then:
s.e.(E)composite = SQRT[ (0.0022m)^2 + (0.001m)^2)] = 0.0024m
and similarly for s.e.(N) and s.e.(U).
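The root-sum-of-squares combination can be written as a small helper (values taken from the worked example above):

```python
import math

def rss(*components):
    """Combine independent error sources by root sum of squares."""
    return math.sqrt(sum(c * c for c in components))

# Relaxed OPUS s.e.(E) of 0.0022 m combined with an assumed
# +/-0.001 m antenna centering error:
se_e_composite = rss(0.0022, 0.001)
print(f"s.e.(E) composite = {se_e_composite:.4f} m")  # ~0.0024 m
```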
Although if Carlson SurvNet can import a Star*Net-format GPS file, it's probably much easier just to create a GPS vector of length 0 with the x,y,z covariances from the OPUS report (which appear just before the enu covariance matrix).