That is why software manufacturers tend to use that method: it makes the results look better.
> it is a misconception, but it's not bizarre; in fact, it is common.
Well, when you consider that Relative Positional Precision is explicitly defined in the ALTA/ACSM spec to which it pertains as being the semi-major axis of the relative error ellipse between the two points, I'm afraid that wanting to believe RPP means anything other than that IS bizarre.
Example of Relative Positional Precision - RTK
>the Relative Positional Precision between L001 and L002 is going to be SQRT[0.050^2 + 0.062^2] = about 0.08 ft., not 0.054 ft. as displayed.
Once again, that is not how relative positional accuracy is computed.
From your Star*Net observation above: if it is truly SQRT(.047^2 + .052^2), then the relative positional accuracy is .053'. Why did Star*Net give you .07'?
You weren't just button pushing were you?
Also, Javad's output is at one sigma and you are calculating a quasi-relative positional accuracy at two sigma. This is not an accurate comparison by any means, and definitely not a proper basis for disputing this data.
> You have good selective hearing. How about this: you do a static survey using CORS station to tie to NAD83.
I love the way you want the scenario Shawn described to be something it obviously is not. The two points Shawn surveyed were a few hundred feet from his RTK base (obviously not a CORS station, as you can easily verify from the position he posted), and the uncertainties of those points were the relative uncertainties with respect to the base.
The freakishly obvious conclusion by inspection (the one you've wasted bandwidth disputing) was that the relative uncertainty of the two rover points could not have been as small as the 0.03 ft. that Shawn's screenshot showed. Why? Because the smaller of the semi-major axes of the relative error ellipses of the rover points with respect to the base was 0.047 ft., and since both ellipses were similarly oriented, with long axes approximately parallel, the semi-major axis of the relative error ellipse between the two points was the root sum of squares of the semi-major axes of both, i.e. SQRT[0.047^2 + 0.050^2] = 0.07 ft.
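That root-sum-of-squares step can be sketched in a couple of lines (values taken from the post; the approximation assumes, as stated, that the two ellipses have roughly parallel long axes):

```python
import math

# Semi-major axes (ft) of the relative error ellipses of the two rover
# points with respect to the base, from the posted adjustment.
a1 = 0.047
a2 = 0.050

# With both ellipses similarly oriented, the semi-major axis of the
# relative error ellipse BETWEEN the two rover points is approximately
# the root sum of squares of the two.
rpp = math.sqrt(a1**2 + a2**2)
print(round(rpp, 2))  # 0.07 ft
```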
So, practically by inspection, the Relative Positional Precision of the two rover points was 0.07 ft. In other words, the Javad software failed to point out that the survey was within a razor's edge of failing the ALTA/ACSM spec as it was, and would definitely fail the spec if the processor-generated uncertainty estimates actually needed to be scaled for realism, as is typically the case.
Your digression about Network Accuracy is irrelevant to the example Shawn posted, but if his RTK base by some magic had a Network Accuracy value of zero, the Local Accuracy of the two rover points positioned from it would still be 0.07 ft.
:good:
Example of Relative Positional Precision - RTK
> I am not so sure about scaling the variances, The covariance matrix is what results after scaling the statistics from the LSA by the a posteriori reference variance. correct?
No, typically GPS vectors are exported from the processor with weights in the form of a covariance matrix. That expresses the uncertainties in the differences in X, Y, and Z contained in the vector. Some manufacturers produce vectors in which the uncertainties are expressed as standard errors and correlations rather than as variances and covariances, but the two forms contain the same information.
What one typically finds when GPS vectors are adjusted in redundant combinations, even in just repeated occupations of the same point from the same base, is that if the processor estimated variances and covariances are used to weight the vectors, the residuals are too large to be consistent with the processor estimates. The scalar that must be applied to the processor estimates varies little from project to project when the methods used are similar. So a valid approach is just to apply an average value of the scalar and verify that the GPS residuals aren't excessive, i.e. pass the Chi Square test.
The advantage of this approach is that the uncertainties of non-redundant vectors can be estimated by applying the same scalar and you will tend to get a more realistic result on projects without high degrees of redundancy.
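A minimal sketch of what applying such an average scalar looks like (all numbers invented for illustration; real covariance matrices would include the off-diagonal covariances, which scale by the same factor):

```python
import math

# Hypothetical variances (m^2) for the dX, dY, dZ components of one GPS
# vector as exported from the processor (illustrative values only).
var_processor = [4.0e-6, 5.0e-6, 9.0e-6]

# Average scalar found from earlier, redundant projects using similar
# methods (an assumed value for this sketch, not from the source).
reality_factor = 8.0

# Scaling every variance (and covariance) by the same factor leaves the
# correlations unchanged and inflates each standard error by
# sqrt(reality_factor).
var_realistic = [reality_factor * v for v in var_processor]
sigmas_mm = [1000 * math.sqrt(v) for v in var_realistic]
print([round(s, 1) for s in sigmas_mm])  # scaled standard errors, mm
```

After scaling, the adjustment is rerun and the Chi Square test confirms the residuals are consistent with the inflated weights.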
>Having a short observation period will certainly impact the statistics but the redundancy of the adjustment, (ie short or longer observations) is already accounted for in the covariance matrix if it is properly built
The covariance matrix is simply a statistical measure of the scatter of positions of the rover during the number of epochs that it occupied a station. The statistics of the variations in the external factors that contribute to that scatter tend to show variations over longer periods than the relatively short times that RTK occupations take up. For example, in the two examples Shawn posted, the occupation times were only 1 and 2 minutes.
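As a toy illustration of that point (the epoch positions below are made up), the occupation covariance is just the sample covariance of the per-epoch coordinates, so it can only reflect error sources that actually vary within the one- or two-minute window:

```python
import math

# Made-up per-epoch N,E coordinates (metres) of a short RTK occupation.
epochs = [(0.012, -0.008), (0.015, -0.004), (0.009, -0.010),
          (0.014, -0.006), (0.011, -0.007)]

n = len(epochs)
mean_n = sum(p[0] for p in epochs) / n
mean_e = sum(p[1] for p in epochs) / n

# Sample variances and covariance of the epoch scatter; slowly varying
# atmospheric and multipath biases barely show up in so short a window.
var_n = sum((p[0] - mean_n) ** 2 for p in epochs) / (n - 1)
var_e = sum((p[1] - mean_e) ** 2 for p in epochs) / (n - 1)
cov_ne = sum((p[0] - mean_n) * (p[1] - mean_e) for p in epochs) / (n - 1)
print(round(1000 * math.sqrt(var_n), 1))  # sigma_N in mm
```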
Example of Relative Positional Precision - RTK
> Once again, that is not how relative positional accuracy is computed.
> From your Star*Net observation above: if it is truly SQRT(.047^2 + .052^2), then the relative positional accuracy is .053'.
Okay, so you're posting that you aren't able to compute the sum of the square of 0.047 and the square of 0.052 and take the square root of that quantity? Perhaps some other poster can explain to you how to do that. The answer will be 0.07.
I've got to ask: are you one of the experts Javad has relied upon in developing his latest line of equipment, by any chance?
Rambleon - scalars
> > I am not so sure about scaling the variances, The covariance matrix is what results after scaling the statistics from the LSA by the a posteriori reference variance. correct? Having a short observation period will certainly impact the statistics but the redundancy of the adjustment, (ie short or longer observations) is already accounted for in the covariance matrix if it is properly built...please explain...
>
> Good point. Bit of a detour here, but it is going to come up eventually, especially if some exported data is examined.
LOL! Actually the scaling of variances was the POINT OF DEPARTURE for the discussion that has run across at least three threads so far. The original question had to do with testing RTK gear to determine whether or not the processor-estimated variances were optimistic or realistic.
"Reality Factor" is actually a pretty good description of the scalars that processor estimated variances require to make the realisitc. It highlights the nature of the exercise and the end of adjustments: to get estimates of quantities such as directions and distances that are as close to reality as possible and to express the uncertainties in those quantities in ways that also resemble reality.
> Shawn, I think this thread and topic will be invaluable if you are able to help the Javad software folks implement some of the suggestions Kent has made. Despite his attitude, you have managed to squeak some useful suggestions out of him. Bravo! I don't think I would have had the patience for that.
Already put together some recommendations. Criticism is helpful. I'm not afraid of it.
Rambleon - scalars
> Funny you should focus on my comment with the word detour in it and nothing about how such factors are derived.
The uncertainties in measurement processes are best estimated by comparison with reality, which in practice means comparison with measurement processes of superior accuracy. So, comparing RTK vectors to RTK vectors is not an exceptionally efficient or reliable means of doing this.
Since it is possible to survey an array of control points over a relatively small area with accuracy much better than RTK vectors typically have in ordinary land surveying use, the obvious reality test is to survey an array of control points that have relative positional uncertainties well below 5mm 2-sigma and connect the array to some base points at separations that represent the normal range of use. If you customarily use single-base RTK at distances of 10km, then make one of the base points 10km distant from the test array.
The array should be sited to duplicate the sort of obstructions and multipath sources normally encountered. If the user expects to use his or her equipment on rural sites without any obstructions at all, then that won't be an issue.
Use multiple static GPS vectors to determine the relative position of the base points to the test array, taking all precautions to get the best results that can be had and to get reliable estimates of the uncertainties of the base points with respect to the array.
In one combined adjustment by least squares, adjust both the static GPS vectors and conventional measurements that determined the geometry of the test array and base points.
Then, survey the test array points via RTK from the bases, taking care to use some uniform process similar to what is expected to be used in everyday work. Extract the RTK vectors with covariances and add them to the GPS vectors and conventional observations that control the test array. If the residuals of the RTK vectors pass the Chi Square test without any scaling, then the processor estimates of the RTK vectors are realistic. If they do not, then the next step is to increase the scalar to be applied to just the RTK vector covariances until they are realistic.
If the test array has been set up in a way that resembles normal use as far as obstructions and multipath sources, then any scalar derived for the RTK vector covariances ought not to be terribly far from what would be expected in normal work.
Because the factors that degrade the accuracy of RTK are complex, repeating the test over several days under different constellations and space weather would be worth doing.
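The scalar-search step above can be sketched as follows (the residuals and standard errors are invented for illustration, and the degrees of freedom are simplified to one per residual; a real adjustment would take the redundancy number from the network):

```python
# v: residuals of the RTK vector components from the combined adjustment
# s: the processor-estimated standard errors of those components
# (all numbers invented for illustration)
v = [0.012, -0.015, 0.008, 0.021, -0.011, 0.017]   # metres
s = [0.005, 0.005, 0.007, 0.006, 0.005, 0.007]     # metres
dof = len(v)  # simplification: one redundancy per residual

# A posteriori variance factor: near 1 when the processor estimates are
# realistic, larger when they are optimistic.
vf = sum((vi / si) ** 2 for vi, si in zip(v, s)) / dof

# If vf falls outside the Chi Square acceptance bounds, applying vf as
# the scalar to the RTK covariances drives the variance factor back to 1.
scalar = vf
print(round(vf, 1))  # 6.5 -> processor estimates roughly 2.5x optimistic
```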
Rambleon - scalars
> Back to the scalars. Excellent test proposal, but say the data was points rigorously established by (more than one method); long static, closed traverse, digital level run w/invar rods et al. once the data is gathered...
The idea is simply to make a survey of the test network by means that are known to have well characterized random errors and that can produce relative uncertainties much smaller than those claimed for the RTK to be tested. The processor estimated uncertainties are the obvious starting point. So, if one is testing RTK vectors with processor-estimated relative positional uncertainties (semi-major axis of 95% confidence relative error ellipse) of about 2cm as was the case with Shawn's example, then a test network with relative positional uncertainties of 5mm will be sufficiently better to make a reliable test of the RTK vectors.
>the question is the exact mechanics of how a scalar is chosen... how does one remove all "hunch"?
There is no subjective factor in surveying the test array and arriving at the formal estimates of the uncertainties of the array points. All of the measurement processes by which the array is surveyed can be individually tested and verified. This assumes, of course, that the surveyor will have already tested his or her conventional equipment and so will already have reliable values of the standard errors of angles, distances, and centering to use in the adjustment of the test array and the evaluation of relative positional uncertainty. The uncertainties of repeated static GPS vectors processed using precise ephemerides should hardly be unknown territory. They are also going to be much smaller than the approximately 2 cm uncertainties of the RTK vectors.
Rambleon - scalars
I can envision what you are describing and I think I might set up a few test areas at various ranges similar to what you have described. I'm thinking 100' x 100' squares, nominally: at short range (<1000') from the base, at about 1 mile, at about 3 miles, and at about 7 miles. Mix total station and static GPS to determine the squares. Then locate the squares with RTK, perhaps using various observation times to see if the error estimates require some minimum amount of time to reliably estimate the accuracy.
Until then...
From yesterday's data, this job was a little as-built of a lot in a subdivision. I loaded the RTK to quickly find monuments (or prove that the monuments we were looking for had been removed by utility construction). I collected measurements on 5 points at the lot (besides the two main control points mentioned above). The five points were also measured with total station. Today, I performed a best-fit transformation of the total station points to the RTK measurements. This doesn't settle the question of the overall network accuracy of the points relative to station POST, but it does shine some light on the (arguably) more important issue of local network accuracy. In this case the local network is the lot survey.
This is the report from Carlson Survey. The rotation and translation in this case are not very important. The initial total station survey was oriented to a 175' baseline (between RTK points L009>L008). The dX and dY values are based on a 41" rotation about 0,0 (rather than the centroid), so the 2-3 deltas look worse than they are. The things to look at here are the residuals - the differences between the RTK coordinates and the total station coordinates. All but one of the residuals are below 0.04'; the exception was under a fairly thick 15-foot-tall juniper tree in a barbed wire fence. I was only able to record 50 epochs (without waiting for the receiver to regain fix and finish). I measured this point precisely because it was ugly, to better equip myself to judge difficult shots and the likelihood of success or failure.
From here, I also compared some inverses. Recall this is all from a base station that is approximately 3 miles away. (Positional Accuracy is computed from 95% confidence semi major axis of both points, L points are RTK, T points are total station):
L006>L005 N 78°43'12" W HD: 553.612 VD: +4.314 Positional Accuracy 0.062
T003>T004 N 78°43'15" W HD: 553.637 VD: +4.341
L009>L008 N 81°09'56" W HD: 175.673 VD: -0.790 Positional Accuracy 0.063
T001>T002 N 81°09'15" W HD: 175.685 VD: -0.719
L008>L007 S 06°50'05" W HD: 240.611 VD: +1.331 Positional Accuracy 0.081
T002>T083 S 06°50'31" W HD: 240.687 VD: +1.396
US Survey Feet
The pairs for L006>L005 and L009>L008 both show fairly large positional accuracy estimates compared to the observed residuals. Point 7 was under the juniper tree. It showed a semi-major axis of 0.067 (the others were about 0.04), which would seem to indicate that the error ellipse reasonably predicted that this point was less accurate. The positional accuracy of 0.081 is in close agreement with the residual of 0.082.
It would seem that the error ellipses generated by the receiver are perhaps pessimistic in open terrain and reasonably accurate at points with obstructed sky view. This is anecdotal. What Gavin describes about determining weighting based on data from thousands of observations would tell the story. But what I've seen so far doesn't suggest to me that there is some grossly deflated error estimate being generated by the Javad receiver. In fact, quite the contrary: it seems to be bordering on pessimistic.
Rambleon - scalars
> I can envision what you are describing and I think I might set up a few test areas at various ranges similar to what you have described. I'm thinking 100' x 100' squares, nominally: at short range (<1000') from the base, at about 1 mile, at about 3 miles, and at about 7 miles. Mix total station and static GPS to determine the squares. Then locate the squares with RTK, perhaps using various observation times to see if the error estimates require some minimum amount of time to reliably estimate the accuracy.
Well, for starters, you want at least twenty points in the test array (this is for statistical purposes) and you want some multipath sources and partial obstructions. The way that first occurs to me to accomplish this is to arrange the array around a tree in an area that is otherwise open, locating points along rough radials from the center of the canopy of the tree, the nearest as close to the tree as a prudent user would use RTK and the most distant far enough from the tree that the sky is open above a 15 degree elevation mask. Probably five points along each cardinal direction from the tree would work. This could probably be done inside a square about 300 ft. on a side.
The conventional survey of the test array could just consist of occupying each of the four midpoints on the radials in turn and taking distances and directions from them to the other 14 points visible and two more distant points for azimuth orientation. This is where pre-planning error analysis in a least squares survey adjustment program like Star*Net would be invaluable to determine how to most efficiently meet the uncertainty target for the test array.
At any rate, you'd have at least three directions and distances to each control point on the array, which would be more than adequate for an adjustment by least squares that would trap any blunders and give a very good set of relative coordinates for the array.
> What Gavin describes about determining weighting based data from thousands of observations would tell the story.
Except thousands would hardly be necessary. If you had twenty well surveyed points in the test array, those twenty would be more than sufficient to test the processor estimates of the RTK. I'd think that repeating the RTK vectors on the same day under a different constellation would be more than adequate to test the internal consistency (inner precision) of the processor estimates. Naturally the control values of the test array would measure the outer precision of the RTK vectors, their correspondence with reality.
Rambleon - scalars
> Base-Rover RTK still kicks booty over very short baselines over network though.
Yes, if a surveyor is only using RTK over baselines less than 100m, the correlation of troposphere and ionosphere effects at base and rover will give results that are a very poor prediction of RTK accuracy over much wider separations. This is why the test should be tailored to how a surveyor actually expects to use RTK, i.e. over similar distances from base to rover and in similar settings. Otherwise, the results won't resemble everyday reality nearly as much as they could.
http://www.eng.usf.edu/~tdavis/resume/confidence%20regions.pdf
It appears that documenting the 95% confidence error estimate as 2σ is not always appropriate, since 95% confidence is not always twice the standard deviation (1σ); that multiplier is actually only correct for the 1D case.
The attached PDF shows the required scalars on page 13 for 95% confidence determined from the standard deviation. Also in this paper, there are several instances in which σ is used for a 95% confidence (see pages 1-3).
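The 2D multiplier can even be derived in closed form, since the chi-square CDF with 2 degrees of freedom is 1 - exp(-x/2); the familiar 1.960 applies only to the 1D (normal) case:

```python
import math

p = 0.95

# 2D: the squared Mahalanobis radius follows chi-square with 2 dof,
# whose CDF is 1 - exp(-x/2), so the 95% error ellipse lies at
# r = sqrt(-2 ln(1 - p)) standard deviations.
k_2d = math.sqrt(-2.0 * math.log(1.0 - p))
print(round(k_2d, 3))  # 2.448

# 1D: the usual normal-distribution multiplier (standard tabulated value).
k_1d = 1.960
```

So scaling a 1-sigma error ellipse to 95% confidence takes roughly 2.45, not 2.0, which is the distinction the paper's page 13 table makes.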