I keep seeing this comment/caveat...
"I only use OPUS as a check"
What the hell does that mean?
I certainly understand scenarios where "you" are constrained to legacy data (estimates of the geodetic positions of physical monuments that "control" the project at hand), but I have to question the validity of comparing current positional estimates to legacy estimates relative to a particular "monument."
Considering the sophistication of an OPUS-Static solution, what makes you think that a "roll-yer-own" solution is "better" (in the greater scheme of things) than an OPUS solution?
Just wondering...
Loyal
Loyal, I use OPUS exclusively for my consulting work. Since I use "legacy" equipment, it provides the best solutions without my having to become a software master at geodetics. I know what errors to look for and use the standard techniques for data collection. Compare results to record data and blaze on. I do not understand why anyone would use it just as a "check". It is much, much better than that. The fact that it is a free service is just icing on the cake.
In my case it means that I have coordinates that some "expert" tells me are state plane coordinates (SPC) with a surface adjustment factor. I check them with OPUS, and about half the time they are correct.
I have a set of plans in my truck where the control points listed in the survey data ahead of the plan/profile sheets are state plane, but the control listed on the plan/profile sheets, along with the alignment data, is adjusted to surface.
Another set of plans said the coordinates were adjusted to surface, but they are not.
On another set, which we finished about 3 years ago, I had two different job files. The road was built using adjusted SPC. Anything tied to the R/W was staked with a different calibration that had a skew. A big-city engineering firm took perfectly good data provided by a reputable company in Wichita Falls and screwed it all up. There was a rotation in the R/W data causing about 1.5' of error at the start of the job and 3.0' of error at the end.
The only common denominator is that the problem can be traced back to a big consulting firm in a big city somewhere, and you can never get the guy who actually did the work on the phone.
James
I think I know which post you are referring to, and I had the same questions. Perhaps he could chime in on this; I would be very interested to know what software he uses as his standard gospel solution instead of OPUS. I have had really good luck with OPUS and love that it is free and that we get such good solutions.
Anxiously awaiting......
HEY! I ONLY USE OPUS AS A CHECK.
Of course it is a check for OPUS-RS.
Paul in PA
FWIW, one of the motivating factors for the development of OPUS was NGS's experience with users submitting processed GPS data. It was often the case that those submissions reflected less-than-optimal procedures. Also, many of the commercial packages did not include elements needed for the correct solution of long baselines.
The goal was to have a uniform repeatable procedure for the solution of baselines. Solutions would incorporate the latest developments in modeling and statistical techniques.
Those of us who worked in the early days of GPS processing saw variations in the way the different commercial packages dealt with issues like integer ambiguity resolution and cycle-slip fixing. In fact, I recollect early versions of the Leica software not allowing a fixed-integer solution for baselines beyond short distances. I have also seen folks manipulate some software controls in ways they did not understand.
Does OPUS always yield the best solution? No. In what ways can a user obtain a better solution? One way is to look at the extended output and see whether there is a problem SV or a problem time interval.
Spending hours attempting to get a solution with good looking statistics, in my opinion, is often less worthwhile than reobserving (if the baselines are essential).
I write this acknowledging that we lose something when we rely on black boxes without understanding what they are doing.
Cheers,
DMM
When I use OPUS, I usually choose the CORS manually, and I get the extended solution, which I import into StarNet. Often I get a second solution using different CORS. StarNet then incorporates the error of the OPUS position in its adjustment. My final positions will differ slightly from those reported by OPUS.
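The core idea behind weighting multiple OPUS runs of the same point can be sketched simply: StarNet performs a rigorous least-squares adjustment, but at bottom two independent estimates combine by inverse-variance weighting. The function and numbers below are a hypothetical illustration of that principle, not StarNet's actual internals:

```python
import math

def combine_solutions(values, sigmas):
    """Inverse-variance weighted mean of independent position estimates.

    values: coordinate estimates (e.g., ellipsoid heights in meters)
    sigmas: corresponding one-sigma uncertainties in meters
    Returns (weighted mean, combined one-sigma uncertainty).
    """
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    sigma = math.sqrt(1.0 / sum(weights))
    return mean, sigma

# Hypothetical example: two OPUS runs of the same occupation using
# different CORS sets; ellipsoid heights in meters.
h, s = combine_solutions([123.456, 123.470], [0.010, 0.020])
```

The better-constrained run (smaller sigma) pulls the combined value toward itself, and the combined uncertainty is smaller than either input, which is why the final adjusted positions differ slightly from any single OPUS report.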
We establish control with OPUS all the time. I also use the solutions to verify that the crew set up on the correct station at the height shown in the notes. Why not? It's too easy not to. At the same time, I'm not going to run every occupation through OPUS rather than hold my network values. Sometimes using it only as a check is the better option...
Loyal, I have cleaned up so many errant customer jobs over the years (that they processed in GNSSSol and SPSO) that I have a hard time trusting anyone else to manually process observations. Of course, no one ever calls me with a successful job, so I have gotten a little one-sided in my view of this.
On the other hand, I have checked OPUS solutions manually and have high confidence in my ability to process and match. So for the occasional occupation that won't work well in OPUS, I do have dependable backup methods. For the past 6 months I have been pushing every important OPUS solution to RTX as a check, too. And the important stuff I send to AUSPOS for a third opinion. I can't remember a single time that I have not been impressed by how closely everything matches.
I have only been 'doing' SPSO for the past 6 months, and I have been processing most manual jobs in both GNSSSol and SPSO just to get a good feel for SPSO.
So my short answer is I am checking OPUS with manual processing and filling in holes (noisy solutions) in OPUS with manual tools.
That said, I think that OPUS handles the details of dynamic positions for CORS sites and solid-earth tide modeling for long baselines better than the other tools that I have at my disposal. The lack of multi-constellation support is a huge negative, though, and usually accounts for why jobs that process great in manual tools fail in OPUS.
M
I process all of our static sessions in TBC using CORS data, and I always run them through OPUS as well (and often RTX-PP). Which one I end up holding depends on the occupation time, the statistics, whether I have redundancy, and other factors. In general OPUS seems to be a little better with long (5 - 6 hour) occupations and TBC seems to do a little better with 2 - 3 hour occupations, but they're always within 0.10' of each other, and usually better than that. Here in south Louisiana the CORS stations don't agree with each other all that well in the vertical, and OPUS tends to yield more repeatable solutions.
On very short observations I can at least get a fixed integer solution to the nearest CORS from TBC - I tell my crews that even ten minutes of static data is better than nothing at all.
I rarely do static work so I'm always a little worried about my post processing skills. I run my static data and compare it to a couple of OPUS solutions as just a check.
I start all new projects with OPUS.
Montana DOT is of the opinion that they can do way better with TBC. So yeah, sometimes I do use OPUS for a check on what they want me to hold as gospel. What's 5cm horizontal displacement on a passive NGS mark??? Joking, I would honestly prefer that governmental agencies train themselves on, and require their consultants to use OPUS Projects.
Are you guys talking about OPUS Projects, where the points are processed as a network, or just plain OPUS? My answer would be that I still process everything as a network. I also send to OPUS and then do a comparison as a sanity check. With the advent of OPUS Projects, a person could basically do the network there without any special software.
Loyal: I guess you are referring to me. First of all, I don't mean to disparage OPUS or imply that it is not accurate or reliable, nor do I mean to belittle anyone who does use OPUS. It is definitely an excellent resource, and for users who don't post-process anything, it is indispensable.
There are several reasons why I prefer to do my own processing. I have my own workflow, set up over many years.
1) I certainly would use OPUS to position a point. But many times I have a network of points, many of which are intervisible and/or close together, and observed together. We can field up to seven dual-frequency units at a time. By processing this data myself, I can preserve the connections between them. I have seen it happen many times that someone sets intervisible pairs (simultaneous or not), processes both with OPUS, and then publishes the azimuth between the OPUS-derived solutions. Yes, OPUS Projects can take care of this, but it requires longer observation times.
2) There are quite a few CORS around that were set up by VRS operators that are NOT in the national network, and many (most?) of the PBO stations are not used by OPUS.
3) I often do combined adjustments with GPS, leveling, and conventional observations over small areas. Putting multiple OPUS solutions into that mix would warp the network (related to #1 above). Also, many of these projects must use legacy coordinates for continuity. I do have a workflow set up to use OPUS results in adjustments, but I prefer to have all of the data from the same processor when possible.
4) NGS uses coordinates for OPUS processing that are not necessarily the most up to date available, due to their reluctance to continuously modify coordinates for small differences (which I agree with). For critical networks I will look at the 60-day and long-term plots and revise the coordinates if necessary.
For deformation surveys I will submit all long sessions to OPUS (often on multiple points), but I do not want to include those as constraints. I do, however, include the OPUS solution(s) in the report as an external check on the stability and orientation of the network.
We also do a lot of photo control and lidar control surveys. Typically in these the vertical is more critical than the horizontal. I have found OPUS-RS not reliable enough for the short (15-minute) occupations we do in this type of survey.
I just did an elevation certificate using OPUS as a check. With 2 hours of data it missed the local RTN by 0.16' vertical. I sent 12 hours of static from 2 of the fixed RTN bases up and nailed them flat in 3D.
I'll hold the local 5k baselines all day long over the OPUS solution. If I had sufficient data for an OPUS Projects run, it might be a different story. The simple fact is that I have enough passive control and access to local RTN base data that OPUS fits the check category of my workflow better. In this case, to the tune of several hundred a month in lower premiums for my client...
Tim Reed, post: 337443, member: 420 wrote: I start all new projects with OPUS.
Montana DOT is of the opinion that they can do way better with TBC. So yeah, sometimes I do use OPUS for a check on what they want me to hold as gospel. What's 5cm horizontal displacement on a passive NGS mark??? Joking, I would honestly prefer that governmental agencies train themselves on, and require their consultants to use OPUS Projects.
Mr. Reed
Change is in the wind at MDT HQ from what I hear. You should try and get to know the new guy.
BTW, nice attachment. Thanks for the new screen saver......dig, out!
The last couple of big networks I did were processed in both OPUS Projects and TBC; I ultimately held the numbers from OPUS Projects. I feel their software does a better job of modeling the sessions, and their error estimate numbers were unreal. But there was very little difference in the coordinates that I compared, typically 2 - 3 hundredths. All of those points had at least two fully independent observations on them.
Lee D, post: 337463, member: 7971 wrote: and their error estimate numbers were unreal.
Unreal, indeed. I've been using OPUS Projects for about a year and a half now to control static networks, incorporating the Projects positions and error estimates in an adjustment that includes lots of 1-hour observations. Although I believe the Projects positions are high quality, the error estimates -- often only a mm or so -- typically have to be inflated to around the cm level in order to work with the other observations.
Jim Frame, post: 337470, member: 10 wrote: the error estimates -- often only a mm or so -- typically have to be inflated to around the cm level in order to work with the other observations.
I seem to remember reading somewhere that those numbers should be multiplied by a factor of ten to provide realistic positional accuracies. Honestly in 165 miles of static control - 36 new points in all - there wasn't enough difference between OPUS Projects and TBC to be able to definitively say that one was any better than the other.
I find the error estimates to be reasonable if you add in the uncertainties common to your area and dataset. The work I did today came in with just under a 2 cm vertical estimate. I know that 1.5 hours of data in the area should get another 4 cm added. My 0.16' difference fits that fairly closely.
I will use a multiplier on some data, but I haven't found it to work so well with OPUS error estimates. Anyone else isolate the constant?
Tom
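Tom's arithmetic can be sketched as follows, on the assumption that the two uncertainty terms combine in quadrature (root-sum-square); if he instead means a direct sum, the result would be about 0.20' rather than 0.15'. The function name and unit conversion are mine, not his:

```python
import math

FT_PER_M = 3.28084  # meters to feet; survey vs. international foot is negligible here

def combined_sigma_ft(opus_sigma_m, local_sigma_m):
    """Root-sum-square of the OPUS-reported vertical uncertainty and a
    local/occupation-length term, returned in feet."""
    return math.sqrt(opus_sigma_m ** 2 + local_sigma_m ** 2) * FT_PER_M

# Tom's numbers: just under 2 cm from OPUS, plus ~4 cm typical for a
# 1.5-hour occupation in his area.
expected = combined_sigma_ft(0.02, 0.04)  # ~0.147 ft
# The observed 0.16 ft difference from the local RTN sits right at this level,
# which is why he calls the OPUS estimate "reasonable" once the local term is added.
```

Under this reading, a multiplier on the OPUS estimate alone would not reproduce the observed differences, but adding a roughly constant area/occupation term does, which matches the question about isolating the constant.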