
GNSS RTK Accuracy


I'm trying to get a solid handle on exactly what manufacturer GNSS RTK accuracy specs actually mean, and was hoping we might have an expert here who can help clear things up.

Most manufacturers quote their "accuracy" using RMS, usually with a few asterisks about being free of multipath, poor satellite geometry, atmospheric conditions, etc., etc.

I reached out to a few of the big guys (Leica, Trimble, etc.) and the ones that responded said they're using ISO 17123-8. Fair enough, but in that set of standards they clearly state that this testing procedure is a means of measuring precision, not accuracy. Which makes much more sense to me, but let's move on from that for now.

The ISO basically sets out a methodology: measure two nearby points (within 2-20 m of each other), inverse between the measured points to derive a delta Hz and delta Z, and compare those values with much more precise conventionally measured values (to within 3 mm, to be specific). And of course do this many times, with ample time between observations to allow for changes in satellite configuration and variations in ionospheric and tropospheric conditions. As I understand it, the observations are averaged and an RMS is calculated off of that mean (correct me if I'm interpreting the ISO wrong, please).

So let's say we have a value of 1 cm + 1 ppm (horizontally), and let's ignore the ppm for now. That 1 cm should denote that 68% of single-epoch measurements land within 1 cm of the mean in either direction, i.e. a span of 2 cm. Bump that up to 95% (which is much more appropriate for surveyors to use) and we're looking at a span of 4 cm horizontally. This is just the span at the receiver head, indicative of precision. For now we're ignoring all other sources of error, such as how well the HT was measured, how well the rod was centered and levelled, the error at the base position, etc.
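To make sure I'm reading that math right, here's a toy numeric sketch of the logic as I understand it (invented numbers, not the official ISO procedure):

```python
import numpy as np

# Toy sketch of the spec math as I read it (NOT the official ISO 17123-8
# procedure): 200 repeated horizontal deltas with invented 1 cm noise.
rng = np.random.default_rng(1)
measured_dhz = 10.000 + rng.normal(0.0, 0.010, size=200)   # metres

residuals = measured_dhz - measured_dhz.mean()
rms = np.sqrt(np.mean(residuals**2))   # 1-sigma, the ~68% level if Gaussian

# Scaling 1-sigma up to 95% (~2-sigma) is only valid if the noise is normal
print(f"1-sigma: {rms*100:.2f} cm -> ~68% span {2*rms*100:.1f} cm, "
      f"~95% span {4*rms*100:.1f} cm")
```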

I believe the issue with using RMS is that it assumes deviations in RTK observations are Gaussian, which, as I understand it, they are not. Therefore it is not a meaningful way to express precision. A better way may be to look at the span of single-epoch measurements over an extended period of time.
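If I get time I might test that assumption directly on logged epochs; something as simple as this sketch would do (the file name and layout are hypothetical):

```python
import numpy as np
from scipy import stats

# Hypothetical log of single-epoch eastings from a static rover
# (file name and column layout invented for illustration)
east = np.loadtxt("epochs.csv", delimiter=",", usecols=0)

# D'Agostino-Pearson test: a small p-value says the deviations are
# probably not Gaussian, and sigma-multiple claims get shaky
stat, p = stats.normaltest(east - east.mean())
print(f"normality p-value: {p:.3f}")
```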

To get an idea of absolute accuracy from that ISO standard, we would need to run it on a control network oriented to grid north and tight to 3 mm, and take a holistic view of all other sources of error; for relative accuracy we would at least compare the vectors between the base and rover points to known (as best as we can) values.

This is mostly just me thinking out loud, and I could very well be wrong in several places, so I'm looking for any opinions on the matter. If at all possible, could you please link literature that supports your argument? I would be extremely thankful, as it's tough to get a verified answer on this stuff. The manufacturers seem not to want anyone digging into this too much; when I started asking questions I got radio silence.

One other question I had: if I take an RTK observation of, let's say, 30 seconds, is the data collector doing anything more than just averaging those measurements? For example, is it weighting them based on an approximation of their quality? And if so, how is that approximation calculated? I would assume it uses something along the lines of satellite geometry and the number of satellites, but there have to be other measurables too. Does it know the effects of ionospheric and tropospheric conditions at every moment?
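To be clear about what I mean by "weighting", here's the textbook inverse-variance weighted mean I have in mind (a sketch with invented numbers, not any vendor's actual algorithm):

```python
import numpy as np

# Textbook inverse-variance weighting of RTK epochs -- a generic technique,
# not any vendor's actual algorithm. Each epoch carries the engine's own
# precision estimate (driven by satellite count, geometry, residuals, etc.).
epochs = np.array([100.002, 100.005, 99.998, 100.010])   # northings, m (invented)
sigmas = np.array([0.008, 0.010, 0.007, 0.030])          # per-epoch 1-sigma, m

w = 1.0 / sigmas**2                       # weight = 1 / variance
weighted_mean = np.sum(w * epochs) / np.sum(w)
sigma_mean = np.sqrt(1.0 / np.sum(w))     # formal precision of the result

print(f"plain mean {epochs.mean():.4f} m, weighted {weighted_mean:.4f} m "
      f"+/- {sigma_mean:.4f} m")
```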

BC,

Your thoughts are spot on. The routines used to express 'quality' of RTK observations are generally abuses of statistics tainted by business risk ideas and a touch of philosophy.

I have found nearly every major manufacturer is overly optimistic by a factor of about 2.5. Leica even used to admit that in their user manuals.

Enjoy your quest and keep sharing, Tom

Fair enough, but in that set of standards they clearly state that this testing procedure is a means of measuring precision, not accuracy.

That's how it works with all measurement systems - repeatability (precision) is the only standard by which they can really be judged or certified. Manufacturers don't know what datum or control users will be measuring against, nor can they assume anything about processing and adjustment methodology. The manufacturers don't know whether you're using a fixed-height tripod with sandbags or a janky aluminum tripod in high wind with an unadjusted tribrach. All they can do is measure how well that receiver can replicate the same position at the APC.

One other question I had: if I take an RTK observation of, let's say, 30 seconds, is the data collector doing anything more than just averaging those measurements? For example, is it weighting them based on an approximation of their quality? And if so, how is that approximation calculated?

I can only speak to Trimble myself, but my guess is that others use similar methods.

When a measurement is initiated, there are two conditions that must be met - time and number of measurements. The first is pretty simple - the receiver must measure for the amount of time designated in the survey style.

The second condition depends on whether the user has specified "auto tolerances" or has manually set the tolerances in the survey style. If auto tolerance is selected and the operator is using a Trimble receiver, Access knows its precision specifications and will enforce those limits on the data points being collected epoch-to-epoch. So if you are set to collect 30 epochs for an observation with an R12i, and after 10 epochs the data points begin to fall outside the default tolerances, it will throw that "poor precisions" or "position compromised" warning, and prompt you to either store (using the "good" in-tolerance positions) or remeasure.

If the user chooses to define tolerances manually, Access will use those rather than the receiver specs.
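Conceptually, I picture the epoch-to-epoch check working something like this sketch (my own guess at the logic, not Trimble's actual code; the tolerance numbers are invented):

```python
import numpy as np

def collect_observation(epochs, h_tol=0.015, v_tol=0.030):
    """Sketch of an epoch-to-epoch tolerance check (my guess at the logic,
    not vendor code). `epochs` is an iterable of (east, north, up) in m."""
    good, bad = [], []
    for e, n, u in epochs:
        if not good:                      # first epoch seeds the running mean
            good.append((e, n, u))
            continue
        mean = np.mean(good, axis=0)
        dh = np.hypot(e - mean[0], n - mean[1])  # horizontal offset from mean
        dv = abs(u - mean[2])
        (good if dh <= h_tol and dv <= v_tol else bad).append((e, n, u))
    if bad:
        print(f"poor precisions: {len(bad)} of {len(good) + len(bad)} epochs "
              "out of tolerance -- store the good ones or remeasure")
    return tuple(np.mean(good, axis=0))   # stored position: in-tolerance mean
```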

The position quality you see at the top of your screen while walking around with the rover is an estimate only - the proof comes when you initiate that measurement and it starts comparing epoch-to-epoch. That's why the Rapid (instantaneous) measurement method can be so dicey - the quality of that observation is an estimate only, because it's a single data point without anything else to compare it to.

Does it know the effects of ionospheric and tropospheric conditions at every moment?

From what I understand, the top manufacturers do incorporate ionospheric modelling into RTK engines, but usually all they can do is use a predefined model. Some budget receivers have no modelling at all.

Tropospheric effects are weather dependent, so practically speaking impossible to model...

(Edit to add: when I choose to display position quality at 2-sigma level, I rarely find it to be overly optimistic if I am working in decent GNSS conditions. My experience is that newer receivers are less optimistic than in the past. Better RTK engines with better and faster real-time testing. The biggest problem with RTK observations is that they are rarely separated by enough time to get a different constellation, and that by far is the largest source of correlation and subsequent over-optimistic estimates of positional error.)

Rather than comparing what the manufacturers say, why don't you ask for a demonstration of each at a calibrated baseline? I have taken my Leica to the Tucson baseline and the results were impressive (using a bipod and repeating each shot five times so the unit uses the average, which is my standard procedure for a control point).

In my opinion repeating with a different constellation has not been relevant for years.

It's hard enough to get a reply, let alone a site visit, but that would be ideal.

What has been stated by you and others is very well articulated. I'm not so sure my mind can get the words out as appropriately, but I'll give a few bits. Absolute accuracy for me with GNSS is about absolute positioning: anywhere on planet Earth, how accurately can I position a point? I guess our accuracy on the land surveying side would be to the datum itself, or to whatever standard has been set with its actual distances defined, like a calibration baseline that has been certified. If the great almighty leader declared a new unit of measurement, defining something like the US survey foot or international foot (there have actually been a few different ones of those as well), then that would determine the standard we shall call truth, against which we should measure. Unfortunately, I know a few states' regs that say for a total station EDM it's fine to just take it in for a calibration, with no need to go to a baseline, which goes back to the statements above: we have precision, not accuracy, unless we test against the truth. I can remember having an invar tape and rods that were certified, and as a USMC geodetic surveyor taking our steel tapes in and getting them checked, which meant that at different readings on that tape we had an extra correction to make besides temperature, sag, tension, etc.

This is from someone who has surveyed and also spent time doing GPS orbits, having literally watched and monitored the GPS constellation 24/360. If you want accuracy, you must take into account the constellation change period. Yes, we have multiple constellations now, not just one, and our predictions of the GPS constellation have become superb. I also have a decent handle on the other systems, GLONASS etc. Accuracy comes from that constellation change.

I get tickled looking at RTK observations and then traverses. Someone goes out, VRSes a point twice back to back with 30-second or 5-second or 180-second observations, and says "man, my X brand is within x." Then the traverse doesn't close. I go out to the same exact points with C brand, observe 180 epochs, wait 4 hours, observe again. My spread is more. Take and adjust those, apply the same traverse data, and bam, it closes.

Let's say you set one pair on one end of a traverse. By the time you get to the other end and position those, it's been 4 hours; it could be 2 or 8. So the first pair repeats and looks good, the second pair looks good, but do they relate to each other? Only through the stations operating in the RTK network, and if you were in different triangles you have the station errors themselves. NGS used to only update a CORS position once it got about 2 cm outside the published value; they may have tightened that up, I haven't asked in several years.

Time and redundancy are your friends with RTK, whether base and rover, or VRS (Leica SmartNet, TopNET, etc.). Leica pushes the 30-second thing really hard vs. Trimble having the 180 epochs. I'm not saying you can't get good results at 30 seconds, but a lot can go wrong in 30 seconds that cannot be caught. I do VRS and base/rover a lot, and I use the robot to test not just between two points but to third or even fourth points by various methods. For Trimble, which is what we use (I have used Leica in the past): all my personal tests for control and boundary corners get a minimum of 180 epochs and a second observation from a moved base location a minimum of 1 hour later. That's not really reality, though, because I'm usually pushing 4 hours doing mapping or something else on that job, so it's usually not done in an hour. I truly like 3 observations, but job scope and such doesn't always allow for that.

Troposphere modeling is possible to an extent, but one must have a large enough network coverage area, or have it fed into the network RTK system by other means. Trimble did this quite well. With Geo++ I saw not-so-good results as the weather changed, e.g. a storm moving across the state. Now, Geo++ did some other amazing things that Trimble couldn't do as well or at all. But that was in the early 2006-2008 time frame, when we operated both systems simultaneously from the same stations, and a third, but it was not ready for prime time.

Now, I say all of that, and this past week I was looking for control around some sediment ponds in a mountainous area. I simply took the R12 out with no base and no VRS, just raw measurements, and was doing navigate-to-point. I hit 12 points at less than 0.3 ft horizontal and about the same vertical on a job. I had only wanted to get within 10 ft so I could grab the mag locator, but when the tip of the rod hits an iron pin with cap that was a few tenths below the surface, no mag locator required, it was blowing me away. And we don't have the subscription for that CenterPoint RTX or whatever, though I wish we did; it's not even checked in our survey style. I think we are closer than we can imagine to not needing corrections, period. But there's a lot of money to be made with subscriptions to networks and having a base and rover. Lol

So I went out into the field today to start doing a bit of testing. I observed two points simultaneously using a nearby rover and logged positions at 1-second epochs for just over 3 hours.

I'm actually pretty surprised by what I'm seeing so far.

I haven't run conventional ties between the two points yet, so bear with me on that; I will update when I get back out there very soon.

For those 10,000+ observations at each point I'm seeing a standard deviation of about 0.006' in either horizontal axis and 0.015'-0.016' vertically, with my largest outliers at about a tenth horizontally and 0.15' vertically. Not bad for 10,000 measurements. When I start grouping observations into 5-, 30-, and 60-second observations, those numbers tighten up a lot. And when I plot the coordinate observations, they sure do look like they're following a normal distribution curve. I did a bit of reading, and it looks like my initial assumption that GNSS observations are not Gaussian wasn't quite accurate (source: https://www.gpsworld.com/gpsgnss-accuracy-lies-damn-lies-and-statistics-1134/)
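For anyone who wants to replicate the grouping step, it was essentially this (the random array here is just a stand-in for my logged 1 Hz data):

```python
import numpy as np

def grouped_std(series, window):
    """Chop a 1 Hz series into non-overlapping windows of `window`
    seconds, average each window, and return the std dev of those means."""
    n = len(series) // window
    means = series[:n * window].reshape(n, window).mean(axis=1)
    return means.std(ddof=1)

# Stand-in for ~3 hours of logged 1 Hz eastings (ft)
east = np.random.default_rng(0).normal(0.0, 0.006, size=10800)
for w in (1, 5, 30, 60):
    print(f"{w:3d}-second means: std = {grouped_std(east, w):.4f} ft")
```

If the epochs were truly independent, the spread of the window means should shrink roughly with the square root of the window length; correlated epochs tighten more slowly, which is one way to spot the correlation problem mentioned above.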

I was basing my initial thoughts on a video about RTK observations where the surveyor ran a similar test but got very different results (source: https://www.youtube.com/watch?v=L41mxm2U_Y8&t=31s)

When I graph any of my three axes, they don't look anything like what he's graphing. My graph is centered on the mean and doesn't trail off in any direction, stay there for a period of time, then make its way back. It's very well distributed around the mean with some noise, and my normal curve verifies this.

Any ideas why we are getting such different results?

Thanks for the reply!

Everything you said has been echoed to me before by experienced surveyors such as yourself so I am not doubting it. Just relaying what I'm seeing so far from this small data set (more data to come)...

Based on these initial findings I'm seeing a very minimal change in accuracy between 30 second observations and 180 second observations.

And my position wasn't changing significantly in any repeatable direction over a span of 3 hours, even though, when I look at the skyplot for my location via gnssplanning.com, the geometry of the available satellites changes significantly.

Any thoughts on this?

I'm going to repeat this test on a different day with our other brand of receiver, and also repeat it on two rover points, one semi-obstructed from the sky and one fully obstructed. Then two more times when connected to a network, via single baseline and via VRS.

If anyone has any suggestions on additional tests they think I should run, please let me know.

Hey, take that same data set. Grab 1 epoch at each hour, mean it, and compare it to the mean of all 3 hours. Then take 10 epochs at each hour, mean it, and compare it to both the all-data mean and the 1-epoch value. Do 30, 60, and 180, then repeat at the half hour, etc. Set parameters of PDOP, RDOP, GDOP, and VDOP for the sample epochs. Do this again at the same spot but shifted in time, so at the end of the 3 hours do it again. Combine all epochs for the first 3 hours and the second 3 hours and compare all of the above. You could do this for a full 24 hours, and the truth will usually lie around the 2- to 4-hour time frames, on average.
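If you want to script that, something like this sketch would do the sampling (the placeholder array stands in for your logged data; I've left the DOP filtering out):

```python
import numpy as np

def sampled_mean_offsets(series, rate_hz, n_epochs, offsets_hr):
    """Mean `n_epochs` epochs starting at each hour offset and report the
    signed difference from the all-data mean ('truth' for this exercise)."""
    ref = series.mean()
    out = {}
    for hr in offsets_hr:
        i = int(hr * 3600 * rate_hz)
        out[hr] = round(float(series[i:i + n_epochs].mean() - ref), 4)
    return out

# Placeholder for a 3-hour, 1 Hz log; sample at hours 0, 1, 2
east = np.random.default_rng(2).normal(0.0, 0.006, size=10800)
for n in (1, 10, 30, 60, 180):
    print(f"{n:3d} epochs:", sampled_mean_offsets(east, 1, n, (0, 1, 2)))
```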

I would have to really dive into your data set, but here are some things that are not often discussed the way multipath is.

Delta-V: this is where the satellite has drifted and gets burned back to where it's supposed to be versus where it actually is. It drifts; unfortunately we are all taught there's no gravity in space, but there is just a small amount. At times unknown to us, each satellite at minimum is sent a message to relay to us. In simple terms it's: stop saying "you are here and this is what time it is" and start saying "this is where you are, what time it is, and how fast you are moving." All satellites are not given this message at the same exact time; some might get it more than once a day. All get the new ephemeris every day along with other information. This has proven to improve the user range error. So instead of measuring points on the ground with an uncertainty in their position, we measure where the satellites are.

Timing is another. There is clock drift in the satellites themselves and in the monitoring stations that relate them to us, and the clock in the receiver you and I use is not that accurate. Now, there are multiple ways to handle this issue to an extent, for us or for the manufacturers: it can be as simple as canceling it out algebraically in a base-rover setup, or modeling from both base and rover and establishing a constant, etc. There are a few more, but they all have pros and cons. If we could solve for time in real time in the field more accurately, we would be golden. Time seems to win in almost all cases throughout history. Rubidium masers and other atomic frequency standards all have unique strengths and weaknesses in timing, and of course cost and longevity as well. Satellites, monitoring stations, and the user end: we all have time to deal with.
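The "canceling out algebraically" bit is the classic differencing trick; here's a toy illustration (not a real RTK engine, all numbers invented):

```python
# Toy illustration of the "cancel algebraically" idea (not a real RTK
# engine; all numbers invented). The same satellite clock error shows up
# in both the base and rover pseudoranges, so differencing between the
# two receivers removes it exactly. (Receiver clock errors need the
# between-satellite difference too -- the classic double difference.)
C = 299_792_458.0                   # speed of light, m/s

range_base = 20_200_000.0           # true geometric ranges, m (invented)
range_rover = 20_200_015.0
dt_sat = 3e-8                       # satellite clock error, s (~9 m of range)

pr_base = range_base + C * dt_sat   # pseudoranges, both biased by dt_sat
pr_rover = range_rover + C * dt_sat

print(pr_rover - pr_base)           # 15.0 m: the clock term is gone
```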

"if I take an RTK observations of lets say 30 seconds, is the data collector doing anything more than just averaging those measurements?"

When I posed this question to the Javad engineers, the response was that the predicted error for each epoch is considered in the result.

Modern RTK results in perfect conditions are pretty reliable. It's when the conditions are less than perfect that you have to be careful.

I'm fond of the Javad Triumph-LS display, which shows where each epoch is landing. In good conditions they group pretty tightly; in challenging conditions the spreads get looser. With the vertical in particular, if the epochs trend up or down instead of varying around the average, you can tell that the result is not going to be very tight even after 240 epochs (which is what I normally do for control).

Probably most important of all is to be familiar with what your equipment does under what conditions, and that's just a matter of experience.
