One part per million
Posted by hpalmer on March 15, 2019 at 1:01 pm
Accuracy of GNSS sensors is generally expressed as a fixed amount plus 1 or 0.5 ppm (RMS). I recall these are the same values from 20 years ago.
Why do the static and RTK accuracy specs for sensors differ?
Where did the 1 ppm come from in RTK? And why is it different from the 0.5 ppm for static?
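For scale, a "fixed + ppm" spec is easy to turn into an expected error at a given baseline length, since 1 ppm of a baseline works out to 1 mm per km. A minimal sketch (the 8 mm and 3 mm fixed terms below are typical spec-sheet values assumed for illustration, not figures from any particular manual):

```python
# Sketch: how a "fixed + ppm" accuracy spec scales with baseline length.
# 1 ppm of the baseline equals 1 mm per km, so the arithmetic is simple.

def expected_error_mm(fixed_mm: float, ppm: float, baseline_km: float) -> float:
    """RMS error estimate: fixed part plus the distance-dependent ppm part."""
    return fixed_mm + ppm * baseline_km  # 1 ppm == 1 mm/km

# At a 10 km baseline, with assumed spec values:
rtk = expected_error_mm(8.0, 1.0, 10.0)     # RTK:    8 mm + 10 mm = 18 mm
static = expected_error_mm(3.0, 0.5, 10.0)  # static: 3 mm +  5 mm =  8 mm
print(rtk, static)
```

The fixed term dominates on short baselines; the ppm term is what separates the two modes as the base-rover distance grows.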
plumb-bill replied 5 years, 6 months ago · 9 Members · 11 Replies
-
After lots of testing, I will have to say that the accuracy statements for the R-10 are valid in my experience.
RTK will be slightly less accurate simply because there is less data to process.
-
Static involves a lot of repetitive observations. Start with an understanding of statistics before trying to understand GNSS.
Paul in PA
-
That was the ’60s, and statistics was dull but logic was fun.
I still want to know where the 1 part per million came from and why most manufacturers quote the same relative accuracy for their sensors.
I don’t think they are talking about observations as lots of other factors are involved (antenna, environmental) including other stuff way over my head.
-
I think it has to do with initialization.
The Static processor initializes every observation independently.
The Kinematic processor does one initialization for a series of observations. The initialization is valid as long as lock on the satellites is maintained.
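The distinction can be sketched as a toy state machine: one initialization stays valid while lock holds, and a loss of lock forces a new one. The epoch sequence is invented, and real processors resolve carrier-phase ambiguities, which this only caricatures:

```python
# Toy model: a kinematic processor keeps one initialization alive while
# satellite lock holds and must re-initialize after a loss of lock.

epochs = ["lock", "lock", "loss", "lock", "lock", "lock"]  # invented sequence

initializations = 0
fixed = False
for status in epochs:
    if status == "loss":
        fixed = False          # cycle slip: the initialization is no longer valid
    elif not fixed:
        initializations += 1   # new initialization (ambiguity resolution)
        fixed = True
    # while fixed, each epoch reuses the same initialization

print(initializations)  # 2 initializations for this sequence
```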
edit: fixing iOS stupid edits.
-
Posted by: hpalmer
Accuracy of GNSS sensors is generally expressed as a fixed amount plus 1 or 0.5 ppm (RMS). I recall these are the same values from 20 years ago.
Why do the static and RTK accuracy specs for sensors differ?
Where did the 1 ppm come from in RTK? And why is it different from the 0.5 ppm for static?
The drift in accuracy stated in ppm (as distance between base and rover) exists because there may be different atmospheric conditions at each location, which in turn result in different propagation of the signal waves through the atmosphere.
Static sees a lesser impact because a good static processing engine (TBC, for example) can analyze and control for different variables and outlier observations in a much more robust way than RTK can; it can shape the solution using statistical analysis. An RTK accuracy estimate on a data collector is usually only a simple averaging of a lot of little data sets, which can include some that aren’t as good as others.
I have definitely seen ppm have an impact when there was different weather at a VRS site than where I am surveying; usually the vertical will drift out first. Then it’s time to set up a local base, which usually works best because the major variables cancel out, seeing as base and rover are effectively at the same location.
PPP is a horse of a different color – and these days engines like OPUS and Trimble’s RTX are rather impressive with the accuracy and precision they can accomplish. Through quite a bit of testing I’ve found RTX to be more precise than OPUS – but not more accurate (in any way I could evaluate with standard surveying equipment).
-
Agreed. The RMS values on the data collector can be a different story sometimes, but usually only if it starts creeping over a tenth. Sub-tenth they’re pretty reliable.
-
Those numbers must apply to observations of some lengths that need to be specified, and undoubtedly very different ones.
-
An RTK accuracy estimate on a data collector is usually only a simple averaging of a lot of little data sets
I’m told that Javad averages the weighted value of each epoch. I wouldn’t be surprised if that’s true for others as well.
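I can't confirm what any particular receiver actually does, but inverse-variance weighting of epochs would look something like this (all values invented for illustration):

```python
# Sketch of inverse-variance weighted averaging of per-epoch positions:
# each epoch is weighted by 1/sigma^2, so noisy epochs count for less.

def weighted_mean(values, sigmas):
    """Weighted average with weights 1/sigma^2 (inverse-variance)."""
    weights = [1.0 / s**2 for s in sigmas]
    total = sum(w * v for w, v in zip(weights, values))
    return total / sum(weights)

heights = [100.010, 100.030, 100.020]  # per-epoch ellipsoid heights, m (invented)
sigmas  = [0.01, 0.05, 0.01]           # per-epoch precision estimates, m (invented)

plain = sum(heights) / len(heights)    # simple averaging
weighted = weighted_mean(heights, sigmas)
# The weighted mean is pulled toward the two tight 0.01 m epochs,
# while the simple average treats the sloppy 0.05 m epoch as an equal.
```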
-
GPS is actually a very crude tool. It does not measure between stations. The measurement is a very poor absolute and independent position at each location. The vector is simply an inverse between those positions, corrected using the assumption that at any given moment the signals in small areas are influenced in like manner. On larger areas the influences may be modeled and accounted for.
RTK uses the known error at one station to adjust the position of the roving station, and models out the influences (to a degree) in real time. Doing it in real time demands limited correction and simplified data storage.
Static has the advantage of applying more precise satellite locations. It also stores and uses more complete data over longer periods, generating a more precisely screwed-up position. It’s still an inverse between independently measured positions.
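The "influenced in like manner" point can be shown with a toy one-dimensional example: a large shared atmospheric bias cancels entirely in the inverse between the two positions, leaving only the small independent errors. All numbers are invented for the sketch:

```python
# Toy illustration: each absolute GNSS position is poor, but a shared
# (common-mode) error cancels when you inverse between the two stations.

true_base = 1000.000   # true easting of the base, m (invented)
true_rover = 1500.000  # true easting of the rover, m (invented)

common_bias = 0.80     # atmospheric error shared by both stations, m
base_noise = 0.012     # small independent per-station errors, m
rover_noise = -0.009

obs_base = true_base + common_bias + base_noise
obs_rover = true_rover + common_bias + rover_noise

vector = obs_rover - obs_base            # the 0.80 m common bias drops out
error = vector - (true_rover - true_base)
# error == rover_noise - base_noise ~ -0.021 m, despite 0.8 m absolute bias
```

This is why a nearby base works so well: the closer the stations, the more alike the atmosphere they see, and the more of the error is common-mode.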
-
The distance is twice as long with RTK as it is with two static positions. Hence, twice the ppm error. Pretty sure the RTN services do list differing ppm for differing distance ranges.
-
While attending a seminar on Trimble VRS, I heard one of the Trimble engineers who works on their GNSS describe how the performance can be better or worse than 1 ppm depending on the relative atmospheric conditions at the base and rover. This leads me to believe that these numbers are a statement of expected accuracy based on a lot of testing. For example, I’d expect you’d sometimes see better than 1 ppm in Arizona, and worse in the Pacific NW, depending on weather conditions.
This is also why they tout the virtual solution as better in the VRS scheme, but I usually see just fine performance connecting to a single mountpoint and can store the vectors that way.