
# Unrealistic Manufacturer’s specs

Posted by john-hamilton on August 28, 2019 at 11:18 pm

The specs for a Trimble R10 are listed as follows:

High precision static: 3 mm + 0.1 ppm H, 3.5 mm + 0.4 ppm V

Static and fast static: 3 mm + 0.5 ppm H, 5 mm + 0.5 ppm V

RTK single base: 8 mm + 1 ppm H, 15 mm + 1 ppm V

VRS: 8 mm + 0.5 ppm H, 15 mm + 0.5 ppm V

I don’t believe any of these are realistic; I have done extensive testing and have many years of experience that tell me otherwise. And they are saying VRS is better than single-base RTK; I believe the opposite up to about 10 km.

bill93 replied 4 years, 5 months ago · 9 Members · 10 Replies

Interesting observation. I am not sold on VRS or other networked services for short-range RTK, but I have not used VRS. I would like to see some objective test results on this issue.

On another note, I have found that I am able to obtain good results with longer RTK baselines with the new SVs.

VRS is created from multiple base stations and is theoretically better than RTK, which works from a single base station. There is nothing to prevent you from getting an OPUS (3 CORS) or OPUS-RS (up to 9 CORS) solution on your RTK base, which makes your RTK better. Many VRS networks use base stations that are not CORS, and that may degrade the precision. What they say is true based on their methodology, but it is not the whole truth. VRS is also degraded if you are not within your VRS base polygon, per the OPUS-RS methodology.

Paul in PA

John, what has been your experience with R10 static precisions? I routinely see around 6 mm reported (via RTX), and it seems to be reliable and repeatable (if we leave the base up over four hours) – but I usually don’t have the luxury of revisiting the same control point over years as you do.

Here is a good example…

I recently did a bluebook project with a base set up on a point I previously established with long sessions on multiple days (8 days), located 8 km from a CORS. The station is open to the sky, with one utility pole about 30 meters away, so very little to no multipath. We used a standard tripod with a precision rotating optical plummet, so centering errors are <1 mm.

I used the NGS utility compvecs to check the repeat baseline differences:

Using the specs for high precision static, I would expect 3.8 mm H and 6.7 mm V. For static I would expect 7 mm H and 9 mm V.
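Those expected values follow directly from the spec formulas quoted at the top of the thread (constant term plus a ppm term, where 1 ppm means 1 mm of error per km of baseline). A minimal sketch of the arithmetic for this 8 km baseline, with function and variable names of my own:

```python
def spec_precision_mm(constant_mm, ppm, baseline_km):
    """Expected precision for a 'constant + ppm' spec; 1 ppm = 1 mm per km."""
    return constant_mm + ppm * baseline_km

baseline_km = 8.0  # the base is 8 km from the CORS

# High precision static: 3 mm + 0.1 ppm H, 3.5 mm + 0.4 ppm V
h_hp = spec_precision_mm(3.0, 0.1, baseline_km)  # ~3.8 mm horizontal
v_hp = spec_precision_mm(3.5, 0.4, baseline_km)  # ~6.7 mm vertical

# Static / fast static: 3 mm + 0.5 ppm H, 5 mm + 0.5 ppm V
h_st = spec_precision_mm(3.0, 0.5, baseline_km)  # ~7 mm horizontal
v_st = spec_precision_mm(5.0, 0.5, baseline_km)  # ~9 mm vertical

print(h_hp, v_hp, h_st, v_st)
```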

I also used Trimble CenterPoint RTX and OPUS to compute the coordinates of the base and compared them. Interestingly, there appears to be a 2 to 3 cm horizontal bias between OPUS and RTX, but only a 6 mm bias in height. All values in the table are in meters.

That is the general consensus of most of the surveyors in my area also. VRS is better than RTK, and RTK is better than static.

I’m reviewing some checks for a boundary through 6 sections, and I seldom see any horizontal check over 0.05 ft; taking the Trimble RTK numbers of 8 mm + 1 ppm, that meets the standard. My base-to-point distances range from 1/2 mile to over 6 miles. We are not setting up tripods or even bipods over these points; we are holding the rod, keeping the bubbles in the circle as best we can, and still meeting the Trimble specs.
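For anyone who wants to see how the metric spec lines up with checks quoted in feet, here is a quick sketch converting the 8 mm + 1 ppm horizontal spec across that 1/2-mile to 6-mile baseline range (function name and constants are my own; the foot here is the international foot at exactly 304.8 mm):

```python
KM_PER_MILE = 1.609344
MM_PER_FT = 304.8  # international foot, by definition

def rtk_spec_ft(baseline_miles):
    """Horizontal single-base RTK spec (8 mm + 1 ppm) expressed in feet."""
    mm = 8.0 + 1.0 * baseline_miles * KM_PER_MILE  # 1 ppm = 1 mm per km
    return mm / MM_PER_FT

print(round(rtk_spec_ft(0.5), 3))  # ~0.029 ft at the shortest baseline
print(round(rtk_spec_ft(6.0), 3))  # ~0.058 ft at the longest
```

So a check under 0.05 ft sits inside the spec at the longer baselines and well inside it at the short ones.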

The last check:

It was important to check this point since the original coordinate was copied from a survey done in 2011. The base point it was originally located from is not the base used for the new job; the new base is 4 miles north of this point, and the original base is 1.5 miles east. This gives me an error budget of 17 mm horizontal using the Trimble standard, which the check meets.

Clearly my locations aren’t scientific, nor should they be used to validate Trimble’s accuracy statements, but I am seeing results that are similar to their specs.

I honestly think doubling those numbers will get you a more real-world result, and that’s in ideal conditions too. If I’m not mistaken, those numbers are at the 68% confidence level as well. Gotta take it with a grain of salt; the manufacturers have to post the best possible numbers they can.

The 68% confidence level, or one standard deviation, is a good choice for manufacturers’ published specs. When reporting the results of measurements, there are two errors you can make: first, you can accept a value that is actually substantially in error; second, you can reject a value that is not substantially in error.

Using the 68% confidence level reduces the chance of accepting a value that is substantially in error. Expanding the confidence interval to 95% or 99% increases this chance, but it reduces the chance that you will reject a good value.

Depending on the situation, either a narrow or a wide confidence interval is appropriate. Assuming that surveyors would rather remeasure when it may not be necessary than report substantially erroneous data, a one-standard-deviation standard is a good choice.
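The tradeoff described above is easy to see with a quick Monte Carlo sketch (my own illustration, not from the thread): draw residuals of perfectly good measurements from a standard normal and count how many fall outside a 1-sigma versus a 2-sigma tolerance.

```python
import random

def rejection_rate(k_sigma, n=100_000, seed=1):
    """Fraction of error-free measurements whose residual, drawn from a
    standard normal, lands outside +/- k_sigma (good values rejected)."""
    rng = random.Random(seed)
    outside = sum(1 for _ in range(n) if abs(rng.gauss(0.0, 1.0)) > k_sigma)
    return outside / n

print(rejection_rate(1.0))  # ~0.32: a 68% tolerance rejects about 1 in 3 good values
print(rejection_rate(2.0))  # ~0.05: widening to ~95% rejects far fewer good values
```

The tight 1-sigma tolerance sends you back to remeasure roughly a third of the time even when nothing is wrong, which is exactly the conservative behavior argued for above.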

Doesn’t the pattern of differences in the first table raise some concern of systematic measurement errors? Save for one dY value and two dZ values, they’re all negative. Also, the dX values get successively more negative through session 135 and then successively less negative for the final two sessions. Shouldn’t we expect a mixture of positives and negatives?

The second table, at first blush anyway, indicates that RTX and OPUS use different algorithms. It might be interesting to test whether the two sets of values could reasonably represent random samples drawn from the same distribution. Statistically speaking, they could be considered equal.

One thing is certain, though. It’s hard to use statistics to prove anything.

Statistics will confess to anything if you torture them enough.

