In a thread below, a poster quoted the coordinates that had been logged on the same 13 boundary markers on two different days. Each day's position was the average of 240 seconds of network RTK positions. Actually, more than 13 markers were reported, but several fell under tree canopy, so I've omitted them from this analysis.
This table represents the N and E coordinate differences in feet, Day1 - Day2.
The s(pooled) values are the pooled estimates of the standard error of a single N or E coordinate value based upon those differences. A value of +/-0.022 ft. would be the estimate if s.e.(N) and s.e.(E) are combined into a single, one-size-fits-all value.
So, what the above data suggests is that a coordinate derived from 240 seconds of network RTK observations would be expected to have a standard error of 0.022 ft. in N and 0.022 ft. in E if all of the other conditions are similar to the poster's survey.
Survey accuracy specifications typically deal with relative uncertainty of points positioned by a survey. So what would be the expected relative uncertainty of any pair of boundary markers positioned with 240 seconds of network RTK observations if each N and E coordinate value has an uncertainty of +/-0.022 ft. (standard error) and the errors of the two points are independent?
Well, for starters, consider the difference in the N and E components of the two points being analyzed for relative positional accuracy. Call those points (1) and (2), at N(1), E(1) and N(2), E(2) respectively. N(1) and N(2) each have uncertainties of +/-0.022 ft. (standard error). So N(2) - N(1) has an uncertainty of
SQRT[0.022^2 + 0.022^2] = SQRT(2) X 0.022 = 0.031 ft. (standard error)
Similarly for E(2) - E(1). That difference has a standard error of 0.031 ft.
So the relative error ellipse is a circle. The 95% confidence relative error ellipse would be a circle with radius 0.031 x 2.447 = 0.076 ft.
(A discussion of this is found on Page 33 of the Sixth Ed. of "Surveying Theory and Practice" by Davis, Foote, Anderson & Mikhail.)
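To make that propagation easy to reproduce, here is a minimal sketch in Python. The 2.447 factor is the square root of the 95th percentile of a chi-square distribution with 2 degrees of freedom, which is where the circular 95% scaling comes from:

```python
import math

se_coord = 0.022  # standard error of a single N or E coordinate (ft)

# Difference of two independent coordinates: the variances add.
se_rel = math.sqrt(se_coord**2 + se_coord**2)  # = SQRT(2) x 0.022

# 95% scale factor for a circular bivariate normal error:
# sqrt(chi-square(0.95, 2 d.o.f.)) = sqrt(5.991) = 2.447
k95 = math.sqrt(5.991)

print(f"relative standard error per axis: {se_rel:.3f} ft")       # 0.031
print(f"95% relative error circle radius: {k95 * se_rel:.3f} ft")  # 0.076
```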
The radius of the circular 95% confidence error ellipse would be equal to the semi-major axis (as well as the semi-minor axis, since it's a circle). The ALTA/ACSM specification states that the relative positional uncertainty must be less than 0.07 ft. + (50 ppm x D), where D is the distance between the points considered. So, unless the distance, D, between the points is more than 120 ft., the specification would not be met.
Any uncertainty larger than 0.022 ft. in the N and E components of coordinate positions would obviously fail to meet the ALTA spec where the points are closer together.
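Working the 120 ft. figure out explicitly (a back-of-envelope sketch using the numbers above):

```python
radius_95 = 0.076        # 95% relative error circle radius (ft)
base, ppm = 0.07, 50e-6  # ALTA/ACSM allowance: 0.07 ft + 50 ppm x D

# The spec is met only when base + ppm * D >= radius_95.
d_min = (radius_95 - base) / ppm
print(f"specification met only for D > {d_min:.0f} ft")  # 120 ft
```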
Kent,
Another consideration would be: what is the returned sigma value of the coordinates presented? 1? 2? or 3? Software from different manufacturers only produces measured coordinate values +/- 0.016' no matter what the distance is between points, i.e., don't try to measure anything within 150' with RTK in an ALTA survey.
Pablo
> Another consideration would be: what is the returned sigma value of the coordinates presented? 1? 2? or 3? Software from different manufacturers only produces measured coordinate values +/- 0.016' no matter what the distance is between points, i.e., don't try to measure anything within 150' with RTK in an ALTA survey.
Pablo:
In that particular example, I was just estimating the standard error (sigma) of the N and E components of the coordinates that the poster logged from 240 seconds' worth of RTK data. This was done just from the differences between the two occupations of the same points. There was some discussion of how to evaluate the uncertainties in network RTK results, but a fairly good test is surely just to examine the statistics of differences in remeasurements of the same point at different times.
There were all sorts of extraordinary claims being floated for network RTK, but this test wouldn't support them very well.
Those residuals should be halved, shouldn't they? Isn't your table the difference between day 1 and day 2? If so, the best estimate of the position (without any other weighting) would be the mean of those values, and the residuals would only be half of what you are reporting. So your standard error would be about half what you are reporting. I think you've overestimated the likely errors.
Depending on the RTK system (and they do vary) I've seen the good ones (Javad, Altus, and Champion, which is powered by Trimble) give epoch-by-epoch (not 4-minute averages) precision at about 0.02' standard deviation horizontally. I've seen it in practice as well.
I was a hold out on RTK, too, however observable results kept upsetting my skepticism.
> Those residuals should be halved, shouldn't they? Isn't your table the difference between day 1 and day 2? If so, the best estimate of the position (without any other weighting) would be the mean of those values, and the residuals would only be half of what you are reporting.
Yes, the variance for each point's N or E value was calculated as 2 x [(Day1-Day2)/2]^2 with degrees of freedom = 1. The values of s(pooled) were derived from the sum of those variances. I left out some of the intermediate arithmetic.
> So your standard error would be about half what you are reporting. I think you've overestimated the likely errors.
No, if you work the arithmetic, you should find that those values of s(pooled) are unbiased estimates.
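To spell out the arithmetic I elided, here is a minimal sketch in Python; the difference values are hypothetical placeholders standing in for the tabulated N (or E) column, not the poster's actual data:

```python
import math

# Hypothetical Day1 - Day2 differences (ft) -- substitute the real table.
diffs = [0.02, -0.03, 0.01, 0.04, -0.02, 0.03]

# Each difference d implies residuals of +/- d/2 about the two-day mean,
# so each point contributes an unbiased variance estimate of
# 2 x (d/2)^2 = d^2/2 with one degree of freedom.
variances = [d**2 / 2.0 for d in diffs]

# Pooling: every estimate carries 1 d.o.f., so the pooled variance
# is simply the mean of the per-point variances.
s_pooled = math.sqrt(sum(variances) / len(variances))
print(f"s(pooled) = {s_pooled:.3f} ft")
```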
> Depending on the RTK system (and they do vary) I've seen the good ones (Javad, Altus, and Champion, which is powered by Trimble) give epoch-by-epoch (not 4-minute averages) precision at about 0.02' standard deviation horizontally. I've seen it in practice as well.
Well, the obvious test is this one: reoccupy a variety of control points on successive days and compute the standard errors of the N and E components of the coordinates logged, or test the standard errors generated by the network RTK gear against the residuals from an adjustment of the various coordinate values taken at different times.
The single most relevant question is whether the results are actually meeting the minimum specifications or not, and those specs are typically for relative positional accuracy between various pairs of marks positioned by a survey, not point uncertainty.
What I don't get....
At the end of the day the found monuments control, so why the need for all these accuracy standards? 0.01' or 0.02', okay, who cares?
You Would Need 4 Observations To Whittle It Down To Half
And in reality you need at least 3 to get a legitimate start.
The best you get out of 2 is "gross errors canceling out" per my old survey professor.
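The arithmetic behind that: the standard error of the mean of n independent observations is s.e.(single) / SQRT(n), so cutting the error in half takes SQRT(n) = 2, i.e. n = 4 observations.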
Paul in PA
This is a very good discussion and reminds us that we have to consider what accuracies we are trying to achieve and what are the best tools to use.
I can see Kent's points regarding ALTA Surveys. One might consider using GPS to directly measure boundary monuments on a more rural survey that has more ideal conditions for observations and longer distances between these monuments. But in a city or suburban setting on a smaller lot it would seem more reasonable to use conventional surveying techniques or some combination of Total Station/GPS.
I believe in practice this is what most would do.
I'm with Pablo on this one.
With RTK and RTN there is a time (many times) to leave them in the truck or at the office.
> With RTK and RTN there is a time (many times) to leave them in the truck or at the office.
If you have realistic estimates of the uncertainties of points positioned via RTK and RTN techniques (the keyword being "if") and can adjust those coordinates, with their uncertainties, together with conventional measurements in a least squares adjustment, the obvious improvement is to connect marks that are close together (say, under 150 ft. in the above example, as Pablo suggested) with conventional angle and distance measurements, tying each of the close marks to at least two others.
The 95%-confidence error ellipses generated by the adjustment can be inspected to verify that they are within the acceptable limits of relative positional uncertainty.
This is a technique I've used for years to improve the accuracies of positions of markers close together obtained via post-processed kinematic (stop-and-go) techniques with accuracies similar to those deduced above for the network RTK positions.
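For anyone wanting to automate that inspection, here is a minimal sketch (Python with numpy), assuming the adjustment yields a 2x2 N/E covariance matrix for each point and that the two points' errors are independent:

```python
import numpy as np

K95 = np.sqrt(5.991)  # 95% scale factor, chi-square with 2 d.o.f.

def relative_ellipse_ok(cov1, cov2, distance_ft, base=0.07, ppm=50e-6):
    """Check one pair of points against an ALTA-style allowance.

    cov1, cov2 -- 2x2 N/E covariance matrices (ft^2) for the two points;
    with independent errors, the relative covariance is their sum.
    """
    rel_cov = np.asarray(cov1) + np.asarray(cov2)
    # Semi-axes of the standard relative error ellipse are the square
    # roots of the eigenvalues of the relative covariance matrix.
    semi_major = np.sqrt(np.max(np.linalg.eigvalsh(rel_cov)))
    return K95 * semi_major <= base + ppm * distance_ft

# Example: two points with circular 0.022 ft uncertainty, 200 ft apart.
cov = np.diag([0.022**2, 0.022**2])
print(relative_ellipse_ok(cov, cov, 200.0))  # True: 0.076 <= 0.080 ft
```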
Isn't this about the distance between 2 points and not the position of the point on the face of the planet?
> the relative positional uncertainty must be less than 0.07 ft. + (50 ppm x D), where D is the distance between the points considered.
If I set 2 points on a hard, flat surface a few feet apart, collect data on them with my RTN set-up, inverse between the calculated coordinates, and check the result with my pocket tape, the comparison is almost perfect.
Doug Casement
What I don't get....
Pin Cushion, original monuments in their original positions do hold. The issue is that certain types of surveys require measurements to meet certain orders of accuracy. This does not mean that you would move or 'pin cushion' an acceptable found mark; it only means that the distances you report between existing marks must meet certain precision standards.
Kent is using ALTA published standards and standards for Texas in his example. The other issue you don't consider is the measurement you use to set a monument where none was found. You need to measure from existing monuments, and you should set them with due care and precision.
> Isn't this about the distance between 2 points and not the position of the point on the face of the planet?
Actually, the relative positional uncertainty is a function of the uncertainties in both distance and bearing. So checking the distance is checking one of two components.
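A rough sketch of that split, using the circular relative error derived earlier: the along-line component is what a tape comparison tests, while the cross-line component corresponds to the bearing uncertainty (the 100 ft. distance is just an illustrative value):

```python
import math

se_rel = 0.031  # relative standard error per axis (ft), circular case
D = 100.0       # distance between the two points (ft), illustrative

# With a circular relative error the along-line and cross-line
# components are equal; a tape only checks the along-line part.
se_distance = se_rel
se_bearing_sec = math.degrees(se_rel / D) * 3600  # cross-line, arc-seconds

print(f"distance component: {se_distance:.3f} ft")
print(f"bearing component:  {se_bearing_sec:.0f} arc-seconds at {D:.0f} ft")
```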
> If I set 2 points on a hard, flat surface a few feet apart, collect data on them with my RTN set-up, inverse between the calculated coordinates, and check the result with my pocket tape, the comparison is almost perfect.
Probably a more realistic test would be to position one point in the morning on one day and the other in the afternoon of another day. Points right next to each other may also have nearly identical multi-path effects present, so that wouldn't be a very realistic test condition. If your entire survey can be done in the space of an hour or so, you wouldn't have to worry about the changes in the state of the RTN that would show up on different days at different times, but those would figure into surveys that stretch out over days.
You've probably discovered that even autonomous positioning gives unrealistically good-looking results if you occupy two points right next to each other in rapid succession. :>
I agree, Zed. Not unlike the early days of EDM when distances less than 50 feet were supposed to be chained because the precision was better. Those days are past. The same may be said for RTK at some point... for now there remains a line where terrestrial measurements are superior to GNSS measurements, but that line is getting faint.
Survey Errors and ALTA Accuracy Standards
> In short, this isn't the 1990's...
So, are we thinking that the methods of error analysis laid out above somehow aren't valid any more? :>
>... for now there remains a line where terrestrial measurements are superior to GNSS measurements, but that line is getting faint.
Surely you aren't seriously suggesting that the relative accuracy of conventional measurements over distances under 150 ft. won't be far superior to any commonly used GPS positioning techniques - and particularly RTK and RTN methods - into the foreseeable future.
What is happening as RTK accuracy improves is that it is able to barely pass certain accuracy tests (such as the ALTA spec between markers 150 ft. or less apart) that it had previously obviously failed.
Survey Errors and ALTA Accuracy Standards
Well, I'm not of your caliber of knowledge, nor Kent's, but RTK/RTN has always bothered me a bit. I can recognize that not just you but a lot of super-intellectuals have come to their own comfort level with RTK, and they do know a lot more than me.
But comparing this technology to older technologies: even though good, careful leveling has been proven over the years, as has bar-code leveling, I have a way of always checking my level runs in the field. Same with traverses run with EDMs or chains. I always double-check things enough to have a feel for whether I have any busts, whether any climate influences have affected my work, and what kind of precision I have achieved.
Same with GPS work. Philosophically speaking, it makes a great deal of sense to check a measurement to a boundary mark on two different days under different constellation circumstances. No different than if I located a property pin from one mark with an angle and distance and then checked it from another.

Measuring two points at the same time with RTK gives me a confidence level in that one vector, perhaps, but not in relation to all the other vectors or points in my boundary work. With RTK/RTN, all of your work is based on a global positioning system, and the sort of checks you need is different (in my mind) than for conventional surveying. It also makes sense to me to have enough measurements in there to know whether I have met the tolerances required for that particular job.
I, again, respect your higher level of knowledge on this matter, but I am concerned that many "ray-and-spray" surveyors out there will use similar arguments to do sloppy work, and not exercise due care for the task at hand. I have seen it with total station work and I see it coming with RTK.
I, and many others I think, need to know what they're getting in their own mind, and we need to have comprehensive ways to know what kind of precision we have on any particular job.
Okay...just some thoughts on the subject.
Tom
Survey Errors and ALTA Accuracy Standards
Or possibly more directly to the point, if you're testing the accuracy of a system like network RTK that is essentially a black box, you don't make optimistic assumptions, but test as many parameters as possible.
To constitute an actual test, the statistics from repeat occupations of the same control points should come from positions obtained under conditions as dissimilar as possible, including different times of day (different states of the ionosphere) and different satellite constellations, since those will actually vary over the course of a survey.