Saturday, I was doing some preliminary work on a project at the old, locally historic Reo Palm Isle, a now-defunct ballroom (read: honky-tonk) in a nearby town.
I had a Javad Triumph1 receiver set up as a base at the office, broadcasting over my home internet router via TCP. The site is approximately 8.5 miles north of the office. I was using a Javad Triumph LS as the rover, connected to the internet via my smartphone.
I tied in several points. Two in particular I observed with the rover, then set up a Topcon Hiper on one and a Triumph2 on the other for post-processing. The Hiper file was submitted to OPUS, and both files were processed against the Triumph1 base station at the office. The coordinates observed were:
NAD83 2011 Adjustment, Texas North Central Zone, US ft, NAVD88
Post Processed (Ashtech Solutions) 95% error
Site 1 N 798.195/0.022 E 064.179/0.037 U 291.875/0.050
Site 2 N 740.223/0.022 E 400.848/0.037 U 291.468/0.050
OPUS (peak to peak)
Site 1 N 798.248/0.006m E 064.231/0.004m U 291.988/0.014m
RTK (RMS per RTK software)
Site 1a N 798.206 E 064.168 U 291.918 (HRMS 0.006 m, VRMS 0.007 m)
Site 1b N 798.188 E 064.222 U 291.805 (HRMS 0.004 m, VRMS 0.005 m)
Site 1 average N 798.197 E 064.195 U 291.862
Site 2a N 740.229 E 400.799 U 291.366 (HRMS 0.005 m, VRMS 0.009 m)
Site 2b N 740.164 E 400.835 U 291.451 (HRMS 0.005 m, VRMS 0.006 m)
Site 2 average N 740.197 E 400.817 U 291.409
I have very limited experience with longer single-baseline RTK solutions such as this; most of my experience has been with baselines of about a mile. This was really impressive to me. I can remember when we first started using post-processed single-frequency receivers: these sorts of baselines were about the maximum we could expect to get good results from, and it always seemed that at this range we'd do well to achieve repeatability of about 0.10 ft with an hour or more of observation. These RTK measurements were 60 seconds in duration and repeated well below a tenth of a foot.
Shawn,
What was the purpose of a 60-second observation? Averaging the shots?
just curious, JT
Yes. Generally I use 3-minute observations for control points. I decided to use only 60 seconds for these observations, as 3 minutes seemed like it would offer limited or no benefit. It would appear that returning after a two-hour wait and observing again really did the trick.
> I have very limited experience with longer single baseline RTK solutions such as this.
The missing pieces that should tell the story a bit better would be:
- the uncertainties of the position of your receiver at the office,
- the covariance matrix for the OPUS solution at Site 1,
- the scalar that Ashtech Solutions applied to the static vectors, if any, in computing the 95% confidence uncertainties,
- why the HRMS values for the RTK were so unrealistic (based upon your data).
The obvious next test would be to figure out what number you'll have to scale the Javad RTK processor uncertainties by to make them resemble reality.
Note that all of these matters could be efficiently investigated if you had the vectors (with their covariances or standard errors and correlations), instead of just coordinates.
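To make the first item concrete, here is a rough Python sketch of how the uncertainty of the base position and the covariance of one RTK vector would combine into a realistic rover uncertainty. The matrices are made-up placeholders, not your data, and independence of the two error sources is assumed:

import numpy as np

# Hypothetical 3x3 ECEF covariances in m^2; placeholder values, not the actual data.
cov_base = np.diag([0.010, 0.010, 0.020]) ** 2        # uncertainty of the base position at the office
cov_vector = np.array([[ 3.0e-5, -2.2e-5,  1.9e-5],   # covariance of one RTK vector
                       [-2.2e-5,  8.2e-5, -5.9e-5],
                       [ 1.9e-5, -5.9e-5,  1.0e-4]])

# If the base error and the vector error are independent, the rover position's
# covariance is simply the sum of the two; its sigmas come from the diagonal.
cov_rover = cov_base + cov_vector
print("rover sigmas X/Y/Z (m):", np.round(np.sqrt(np.diag(cov_rover)), 3))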
Serious question: Would a Trimble QC2 record satisfy what you are looking for? From a Trimble JobXML file, extracted from a RTK vector record...
12
0.00774596678093
0.0000296224498
-0.00002244315328
0.00001864001206
0.00008155550313
-0.00005862947728
0.00010032914724
0.60000002384186
I want to apologize for not quite being up to speed on the current discussion. It has been tough to keep up with things lately!
> Serious question: Would a Trimble QC2 record satisfy what you are looking for? From a Trimble JobXML file, extracted from a RTK vector record...
>
>
> 12
> 0.00774596678093
> 0.0000296224498
> -0.00002244315328
> 0.00001864001206
> 0.00008155550313
> -0.00005862947728
> 0.00010032914724
> 0.60000002384186
>
>
> I want to apologize for not quite being up to speed on the current discussion. It has been tough to keep up with things lately!
No apology necessary. The missing bits are the DX, DY, and DZ components of the vector, but the VCV__ lines look like the covariances that represent the uncertainties in the vector. I'm not clear on what "UnitVariance" represents in the QC2 record, but I would suppose it is some ratio of the actual variance to an a priori value.
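Assuming those VCV-looking values really are the six unique terms of the vector's XYZ covariance matrix in m^2, converting them to standard errors and correlation coefficients is one short step. A Python sketch; the mapping of values to components is my guess, not verified against the JobXML schema:

import math

# Assumed mapping of the posted VCV values to XYZ covariance terms (m^2);
# the ordering (xx, xy, xz, yy, yz, zz) is a guess, not verified against the JobXML documentation.
vcv_xx, vcv_xy, vcv_xz = 2.96224498e-05, -2.244315328e-05, 1.864001206e-05
vcv_yy, vcv_yz, vcv_zz = 8.155550313e-05, -5.862947728e-05, 1.0032914724e-04

sx, sy, sz = math.sqrt(vcv_xx), math.sqrt(vcv_yy), math.sqrt(vcv_zz)
rxy = vcv_xy / (sx * sy)    # correlation coefficients
rxz = vcv_xz / (sx * sz)
ryz = vcv_yz / (sy * sz)

print(f"standard errors (m): {sx:.4f} {sy:.4f} {sz:.4f}")
print(f"correlations       : {rxy:+.3f} {rxz:+.3f} {ryz:+.3f}")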
> Note that all of these matters could be efficiently investigated if you had the vectors (with their covariances or standard errors and correlations), instead of just coordinates.
Shawn has the gfile, so he does have the required data.
> Shawn has the gfile, so he does have the required data.
That would be the obvious next step, then: do the exercise that Leon is thinking about doing sometime, collecting RTK positions at some regular interval throughout the day (I suggested 30 minutes) and adjusting them all with the processor estimates of their uncertainties.
Taking it to the next level would include first adjusting two or more static vectors of excellent quality to the same control point from the base and using that condition in the adjustment of the RTK vectors to test the RTK residuals against the processor/RTK controller estimates.
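For what it's worth, the single-point version of that adjustment is just an inverse-covariance weighted mean, with each session's residual then tested against the covariance the processor claimed for it. A minimal Python sketch with placeholder numbers (positions written as small offsets from a nominal point, in meters):

import numpy as np

# Placeholder sessions: (XYZ position offset in m, 3x3 processor covariance in m^2). Not real data.
sessions = [
    (np.array([0.012, -0.004, 0.031]), np.diag([0.006, 0.006, 0.012]) ** 2),
    (np.array([0.006,  0.003, 0.008]), np.diag([0.005, 0.005, 0.010]) ** 2),
    (np.array([0.015, -0.009, 0.022]), np.diag([0.007, 0.007, 0.014]) ** 2),
]

# Inverse-covariance (weighted least squares) estimate of the point.
W = sum(np.linalg.inv(C) for _, C in sessions)
b = sum(np.linalg.inv(C) @ x for x, C in sessions)
x_hat = np.linalg.solve(W, b)

# The quadratic form of each residual should look roughly chi-square (3 d.o.f.)
# if the processor's covariances are realistic.
for i, (x, C) in enumerate(sessions, start=1):
    v = x - x_hat
    q = float(v @ np.linalg.inv(C) @ v)
    print(f"session {i}: residual (m) {np.round(v, 3)}, quadratic form {q:.1f}")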
"The missing bits are the DX, DY, and DZ components of the vector..."
-13600.520856387
-20924.434297198
-19848.236511487
0000003b
WIL1
0000003e
101397.26555896
727531.70221219
178.19176109761
0.00769693101481
0.01236665788252
12
true
1.8805006742477
2.4820068463051
0.9936695098877
1.5965286493301
17.663954119933
215
This is the front part of that record. I believe you are correct in your assumption about the UnitVariance record, but I would have to dig into that a little more to confirm.
You know, with TDS Survey Pro one can log at a given elapsed interval with RTK. Perhaps setting the RTK rover on a fixed-height tripod, logging and storing positions every 30 minutes, is the way to go. That way nobody has to babysit the equipment too closely. Running the same experiment with network RT positioning would be interesting.
> You know, with TDS Survey Pro one can log at a given elapsed interval with RTK. Perhaps setting the RTK rover on a fixed-height tripod, logging and storing positions every 30 minutes, is the way to go. That way nobody has to babysit the equipment too closely. Running the same experiment with network RT positioning would be interesting.
It seems to me that the key elements to the test are just:
- positions from same number of epochs as usually observed in the field,
- sessions evenly distributed throughout working hours (same number before and after noon),
- base and rover separated by at least 2,000 ft. (preferably more),
- a setting that is realistic as far as sources of multipath (i.e., with some normal sources present),
- positions from about 17 or more sessions,
- uncertainty estimates for the individual positions as produced by the RTK processor (see the sketch below for how those would be used).
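Once those sessions are in hand, the scale (or optimism) factor comes from comparing the scatter actually observed across sessions with the scatter the processor's sigmas predicted. A rough Python sketch with made-up session values, one factor per component:

import math

# Placeholder sessions: (N, E, U, sigmaN, sigmaE, sigmaU) as reported by the processor.
sessions = [
    (100.003, 200.008, 50.012, 0.004, 0.004, 0.008),
    (100.011, 199.996, 50.031, 0.005, 0.004, 0.009),
    (99.996, 200.015, 49.989, 0.004, 0.005, 0.007),
    # ...the remaining sessions from the day would go here
]

def scale_factor(values, sigmas):
    # Ratio of observed scatter to the scatter the reported sigmas predict.
    n = len(values)
    mean = sum(values) / n
    observed = sum((v - mean) ** 2 for v in values) / (n - 1)
    predicted = sum(s ** 2 for s in sigmas) / n
    return math.sqrt(observed / predicted)

for name, i in (("N", 0), ("E", 1), ("U", 2)):
    vals = [s[i] for s in sessions]
    sigs = [s[i + 3] for s in sessions]
    print(name, "scale factor ~", round(scale_factor(vals, sigs), 1))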
You should get RTK and then you can test it all you want in a myriad of scenarios.
> You should get RTK and then you can test it all you want in a myriad of scenarios.
Why would anyone want to buy something first and then figure out what exactly they bought? The tests I've suggested are fairly fundamental stuff. The only reason I can think of that the GPS manufacturers wouldn't have already fixed (no pun intended) the problem of optimistic uncertainty estimates is that they think it would reduce sales to do so.
> The tests I've suggested are fairly fundamental stuff.
I would agree, except for "realistic sources of multipath", which is fairly subjective and highly variable depending on satellite constellation.
I'm returning to this site today and plan to get additional data on these points. I'll try to get for you the data you've requested:
RTK coordinate output with RMS values
RTK vector output (NGS g file and StarNet ASCII)
Post Processed vector report from Ashtech Solutions
OPUS extended output report(s)
How do you propose I get this data to you?
Regarding overly optimistic error estimates: RTK receivers (depending on brand) generally output some form of instantaneous, epoch-by-epoch solution error estimate and also the RMS of the average of the collected epochs. The solution error estimate is often fairly large and, from what I understand, not a very reliable error estimate, but it suffices as an indicator of the reliability of the fixed ambiguities. RMS values are based on the average of the individual epochs in a session and so depend on the precision of that average (short data sets, which are highly correlated, may give a very low RMS). Thus the real test of an RTK position is redundancy. Statistics from a single short observation are simply not going to be terribly reliable (not unlike shooting a distance with an EDM 3 times vs. 10 times from the same instrument setup to the same reflector setup), so some experience, whether qualitative or quantitative, is in order. I've done hours of testing with rovers collecting single epochs every minute and compared the single-epoch positions to the overall average to see what practical precision I can expect from various rovers; that's quantitative, as I can assign real values to those tests. I've also used RTK in the field and have a "feel" for what to expect based on experience, such as returning to a point and seeing repeatability or measuring between two RTK points with a total station and seeing agreement; I'd categorize that as qualitative, as I have no real values I can assign to those observations.
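A quick way to put a number on that correlation point is a toy simulation: generate strongly correlated epoch errors, then compare the session's RMS-of-the-average figure (here assumed to be computed as the standard error of the mean, which is my assumption, not a manufacturer's documented algorithm) with the actual error of the averaged position:

import numpy as np

rng = np.random.default_rng(0)
sigma, rho = 0.010, 0.95            # 1 cm epoch noise, strongly correlated epoch to epoch
n_epochs, n_sessions = 60, 2000     # 60 one-second epochs per simulated session

reported, actual = [], []
for _ in range(n_sessions):
    e = np.empty(n_epochs)
    e[0] = rng.normal(0.0, sigma)
    for k in range(1, n_epochs):    # AR(1) series: each epoch largely repeats the previous error
        e[k] = rho * e[k - 1] + rng.normal(0.0, sigma * (1 - rho**2) ** 0.5)
    reported.append(e.std(ddof=1) / n_epochs ** 0.5)  # "RMS of the average" style figure
    actual.append(abs(e.mean()))                      # actual error of the averaged position

print("median reported RMS of mean (m):", round(float(np.median(reported)), 4))
print("median actual error of mean (m):", round(float(np.median(actual)), 4))

In this toy setup the reported figure comes out several times smaller than the actual error of the mean, which is the understatement described above.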
I've only just begun to experiment with vectors from RTK. Although the capability has been there with several brands for a long time, I just haven't felt the need to exploit it, as redundancy has always struck me as the best determinant for my positions (just as NGS suggests with OPUS). Because of this, the idea of RTK vectors is not as high a priority to me as it seems to be for you, which is why I suggested you obtain an RTK system and conduct the experiments yourself; you will be able to run the tests exactly to your specifications. You don't have to buy the equipment: rent it, or find a colleague in your area with a relatively modern system who is willing to let you borrow it for a weekend.
> > The tests I've suggested are fairly fundamental stuff.
>
> I would agree, except for "realistic sources of multipath", which is fairly subjective and highly variable depending on satellite constellation.
Well, that's the point. Normal sources of multipath such as vegetation or structures should have variable effects that are most easily seen by comparing vectors solutions from sessions under much different constellations.
> I'm returning to this site today and plan to get additional data on these points. I'll try to get for you the data you've requested:
>
> RTK coordinate output with RMS values
> RTK vector output (NGS g file and StarNet ASCII)
> Post Processed vector report from Ashtech Solutions
> OPUS extended output report(s)
>
> How do you propose I get this data to you?
Why not just post it here? There are others interested in this problem who also have the means to use the information.
> RMS values are based on the average of the individual epochs in a session and so depend on the precision of that average (short data sets, which are highly correlated, may give a very low RMS).
Which is exactly the reason why a surveyor should be interested in knowing whether an optimism factor will bring those estimates more in line with reality, as can be done with static and PPK positioning.
> I've only just begun to experiment with vectors from RTK. Although the capability has been there with several brands for a long time, I just haven't felt the need to exploit it, as redundancy has always struck me as the best determinant for my positions (just as NGS suggests with OPUS).
So, it sounds as if you are dealing with uncertainties in RTK vectors using the (c) method I previously asked about: assuming that, given two or more sets of coordinates on the same point obtained via RTK, the mean cannot be in error by more than the difference in the coordinates, but without actually using the uncertainty in an adjustment.
> Because of this, the idea of RTK vectors is not as high a priority to me as it seems to be for you, which is why I suggested you obtain an RTK system and conduct the experiments yourself; you will be able to run the tests exactly to your specifications.
This is why I've raised the topic to begin with. The sophisticated users of RTK who have described importing the RTK vectors with their covariances into least squares adjustments, or logging observables and post-processing PPK, are really getting much more useful answers than the folks who are just settling for numbers that they *believe* to be good.
> Why not just post it here? There are others interested in this problem who also have the means to use the information.
I'm not aware of a convenient way to upload files to this site. I suppose I can post a link to a cloud storage service.
> So, it sounds as if you are dealing with uncertainties in RTK vectors using the (c) method I previously asked about: assuming that, given two or more sets of coordinates on the same point obtained via RTK, the mean cannot be in error by more than the difference in the coordinates, but without actually using the uncertainty in an adjustment.
No. Not really. Based on observation, I can show that RTK positions have a repeatability of about 1cm horizontally and about 2 centimeters vertically (2 sigma) with baselines <1 mile. The repeat observation on critical measurements is for verification of this. I would not say that two RTK points that fall closer than this "cannot be in error by more than the difference". I would strongly disagree with this assertion.
> This is why I've raised the topic to begin with. The sophisticated users of RTK who have described importing the RTK vectors with their covariances into least squares adjustments, or logging observables and post-processing PPK, are really getting much more useful answers than the folks who are just settling for numbers that they *believe* to be good.
Much more useful?
> "The missing bits are the DX, DY, and DZ components of the vector..."
StarNet Pro will import that data as vectors with covariances and you are in the LS business. If you would like to email me one of your data files I'll convert it for you so you can see what it looks like.
> > Why not just post it here? There are others interested in this problem who also have the means to use the information.
>
> I'm not aware of a convenient way to upload files to this site. I suppose I can post a link to a cloud storage service.
A g-file will be short, maybe six or seven lines of ASCII text. This site can easily handle posting ASCII text of that length. Fewer than 20 g-files won't kill the bandwidth, and everyone will be able to download the data.
> > So, it sounds as if you are dealing with uncertainties in RTK vectors using the (c) method I previously asked about: assuming that, given two or more sets of coordinates on the same point obtained via RTK, the mean cannot be in error by more than the difference in the coordinates, but without actually using the uncertainty in an adjustment.
>
> No. Not really. Based on observation, I can show that RTK positions have a repeatability of about 1cm horizontally and about 2 centimeters vertically (2 sigma) with baselines <1 mile. The repeat observation on critical measurements is for verification of this. I would not say that two RTK points that fall closer than this "cannot be in error by more than the difference". I would strongly disagree with this assertion.
Okay, so your opinion is that the repeatability of RTK positions is not a measure of their uncertainties? Haven't heard that one before.
> > This is why I've raised the topic to begin with. The sophisticated users of RTK who have described importing the RTK vectors with their covariances into least squares adjustments, or logging observables and post-processing PPK, are really getting much more useful answers than the folks who are just settling for numbers that they *believe* to be good.
>
> Much more useful?
Yes. Much more useful.
>Okay, so your opinion is that the repeatability of RTK positions is not a measure of their uncertainties? Haven't heard that one before.
It's no different than if my EDM repeats a measurement to the thousandth of a foot or my 3 second instrument repeats an angle to 1 second. I would not say "the error of this measurement cannot exceed the difference between the two measurements". I would say that the repeat measurement suggests that the measurement errors for those measurements are within the error estimate for those measurements.
> >Okay, so your opinion is that the repeatability of RTK positions is not a measure of their uncertainties? Haven't heard that one before.
>
> It's no different than if my EDM repeats a measurement to the thousandth of a foot or my 3 second instrument repeats an angle to 1 second. I would not say "the error of this measurement cannot exceed the difference between the two measurements". I would say that the repeat measurement suggests that the measurement errors for those measurements are within the error estimate for those measurements.
Except that the standard errors of angles and distances measured with a total station in fact can be well estimated from repeat measurements. The only qualification is that the repetitions are made under conditions tending to cancel certain types of errors known to be present in the electro-optical system, such as measuring directions on different parts of the circle or choosing multiple intervals for distances with some geometric constraints, such as requiring that all of the intervals lie on the same line.
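As a simple numeric illustration with made-up distance repeats from one setup (Python):

import statistics as st

# Hypothetical repeat distance measurements (m) to one reflector from one instrument setup.
d = [152.3041, 152.3037, 152.3044, 152.3040, 152.3038, 152.3042]

s = st.stdev(d)                # standard deviation of a single measurement
se = s / len(d) ** 0.5         # standard error of the mean of the set
print(f"single-shot std dev: {s * 1000:.2f} mm, std error of mean: {se * 1000:.2f} mm")
# Repeats from one setup only expose the random part; constant (systematic) errors
# need the varied geometry described above to show up.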
Wanting to consider RTK a sort of GNSS "total station" is not a very good analogy, even if the GNSS salesfolk may still be using it.
BTW, when you convert the RTK vectors to g-file format, be sure to select the "D" line option, i.e., standard errors and correlations, not the "E" line.