Here's the answer to the problem that dealt with the repeatability with which a survey marker can be replaced via some positioning process like network RTK that doesn't involve any tie to any other local reference point. I think you'll get a kick out of the most likely error.
The problem was put in this way.
QUESTION :
Suppose that :
1. On one date you determine the coordinates of some survey marker by network RTK or any other process that doesn't involve a tie to any other local reference point, as conventional surveying typically does. That process has uncertainties of, say, +/-0.02 ft. (standard error) in both the N and the E values that you determine for the position of the marker in, say, your state plane coordinate system.
2. The marker is destroyed so completely that all that remains is the position that you earlier determined for it when it existed. You did a good job of determining the coordinates, and those coordinates (+/-0.02 N, +/-0.02 E) are now all that's left as evidence of where the marker at the corner was.
3. You are called back, say, a year later to remark the corner and use the same technology that can measure the N and E values of coordinates in nominally the same system with the same uncertainties of +/- 0.02 ft. (standard error) each. Say it's network RTK.
What is the most probable error that you will make in remarking the corner? That is, what is the most likely distance that the new marker you set with, say, network RTK that can measure coordinates with standard errors of, say, +/-0.02 ft. will be from the original position of the marker (of which no trace remains other than the previously determined coordinates)?
ANSWER :
The most likely error in the replaced marker is ... 0.04 ft! In 1000 trials, fully 25% of the replacements were in error by a distance in the range of 0.036 ft. to 0.044 ft.
Here are the results of the 1000 trials from a statistical simulation, made by modeling the random errors in the N and E coordinates of the "as found" and "as replaced" markers in the problem using random numbers generated with a Gaussian distribution (mean = 0, s = 0.020). "D" is the distance between the as-found and as-replaced markers: positions that the positioning device indicated had identical coordinates, each carrying unknown random errors.
D(ft) Frequency/1000
0.004------1
0.007------3
0.010------10
0.013------14
0.016------29
0.019------40
0.022------59
0.025------76
0.028------74
0.031------81
0.034------97
0.037------73
0.040------105
0.043------71
0.046------60
0.049------45
0.052------47
0.055------30
0.058------19
0.061------18
0.064------18
0.067------11
0.070------9
0.073------3
0.076------3
0.079------1
0.082------2
0.085------0
0.088------0
0.091------0
0.094------1
Darn, we chained better than that and accepted what we found!
Ya know?
Keith
> Darn, we chained better than that and accepted what we found!
Yes, what that demonstrates is that nearly any method that uses local reference points ought to give a more accurate result than network RTK with that level of random noise in it.
I sure wish I knew what he said.
I know lots are gleefully embracing RTN but I just can't wrap my head around something where I have to rely on an Equipment Sales Dealer to provide critical measurements for my Survey.
Keith-imagine a system where the Chain salesman holds the zero end on a hidden point behind a tree and you only see the front end not knowing what is going on behind the tree.
Versus a system where you and your chainman actually directly measure between the two points you need to get the measurement between.
That is what he said.
> Keith-imagine a system where the Chain salesman holds the zero end on a hidden point behind a tree and you only see the front end not knowing what is going on behind the tree.
Fortunately, it's not as bad as all that, because you can get the salesman to hold the chain while you measure to points for which you already know the positions, as well as to points you already measured.
After a while you become familiar with what the salesman is doing by inference. You learn that when he mutters, "Ooh, that's a good one," the measurement is reliable, and that when he farts you want to recheck that point at a later time.
I just finished the field work for a survey using RTN. The job involves locating 6 groundwater monitoring wells located within an area roughly 25 km by 20 km. The area is nominally covered by a regional vendor's RTN, and is located inside the limits of a 2008 height modernization survey. The accuracy specifications are fairly loose (3 feet horizontal, 0.5 foot vertical), so I decided to rent a network rover (Trimble R8 GNSS) to see what the system can do. It was my first foray into network GPS.
I selected 10 height mod stations surrounding or within the project area, and ran 5-minute sessions on all the wells and control marks. (Everyone tells me that 5 minutes is overkill, but given the drive and setup/teardown times involved, it hardly mattered in this instance.) I observed 7 of the control marks on each of two days; for the other 3 and for all of the wells I observed on a third day also. Observation times were staggered to provide a significant change in constellation between observations.
Repeatability was impressive, with most residuals from the observed means at 5 mm or less in the horizontal components and 15 mm or less vertical. A few of the horizontal residuals were at the cm level, and some of the verticals were at the 3 cm level.
The RTN appears to be referenced to NAD83(2007). I say "appears to be" because I was unable to find anything on the vendor's website about network metadata, and the technical staff who handled the rental looked at me like I was from Mars when I asked about it. There's a "2007" in the network connection dialog on the TSC3, which is all the metadata I could find. (It makes me wonder what their regular customers think they're getting when they download the day's work.)
I still want to chase down the metadata for my report, but for the positions it doesn't matter -- I calculated mean north, east and up offsets from the observed positions to the NAD83(2011) control stations, and applied them to the observed values for the wells. The north and east offsets were all within a centimeter and a half of the mean, and the vertical offsets were all within 4 centimeters of the mean; plenty good enough for my purposes.
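For what it's worth, the offset step described above is just a per-component mean shift from the observed control positions to their published values, applied to the unknowns. A minimal sketch in Python with NumPy (all station coordinates and counts here are made-up, hypothetical numbers, not the actual project data):

```python
import numpy as np

# Hypothetical example: observed RTN positions of control stations and
# their published NAD83(2011) values (N, E, up), same units throughout.
observed_ctrl = np.array([[1000.012, 2000.008, 50.021],
                          [1100.009, 2100.011, 51.018]])
published_ctrl = np.array([[1000.000, 2000.000, 50.000],
                           [1100.000, 2100.000, 51.000]])

# Mean offset from observed to published, one value per component.
offset = (published_ctrl - observed_ctrl).mean(axis=0)

# Apply the same shift to the observed (hypothetical) well positions.
observed_wells = np.array([[1050.020, 2050.015, 50.540]])
adjusted_wells = observed_wells + offset
print(offset, adjusted_wells)
```

The spread of the individual control offsets about the mean then gives a check on how well a single constant shift fits the project area.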
> And that is the difference between theory and practice.
>
> Made a few iffy assumptions that you might only know about if you were a user.
So, did you have anything to add to this thread other than griping about how I wasn't holding a network RTK receiver 2,000 times to do the statistical test that I did in an evening by other perfectly valid methods?
> Repeatability was impressive, with most residuals from the observed means at 5 mm or less in the horizontal components and 15 mm or less vertical. A few of the horizontal residuals were at the cm level, and some of the verticals were at the 3 cm level.
Just out of curiosity, what sort of uncertainty estimates did the controller report for the positions after five minutes of observations?
Statistical simulation
By the way, if any other surveyors are interested in doing similar analysis using Monte Carlo methods that require a source of random values of a variable with a Gaussian distribution and a specified standard deviation, here's a neat site:
Gaussian distributions at random.org
The way that the analysis proceeded in the case above was just to download 4,000 random numbers from a distribution with a standard deviation of 0.02, to model random errors in N and E values of coordinates with standard errors of 0.02 ft in the "as found" and "as set" positions.
You can get the random values formatted in four columns that can be slipped right into a spreadsheet, each row containing the two sets of two random errors in N and E, from which the actual position differences can be calculated.
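For anyone who'd rather generate the random values locally than download them, the same trial can be sketched in a few lines of Python (NumPy assumed; the 0.02 ft. per-coordinate standard error is the one from the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.02   # standard error of each N and E coordinate, ft
n = 1000       # number of simulated trials

# Random errors in (N, E) for the "as found" and "as replaced" positions.
found = rng.normal(0.0, sigma, size=(n, 2))
replaced = rng.normal(0.0, sigma, size=(n, 2))

# Distance between the true positions of the two markers that the
# positioning device said had identical coordinates.
d = np.hypot(found[:, 0] - replaced[:, 0], found[:, 1] - replaced[:, 1])

print(f"mean D = {d.mean():.3f} ft, median D = {np.median(d):.3f} ft")
```

Binning `d` with `np.histogram` reproduces a frequency table like the one above, with the counts varying a bit from run to run.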
> Just out of curiosity, what sort of uncertainty estimates did the controller report for the positions after five minutes of observations?
It varied with location, but for the sites that were totally clear -- and almost all were -- it showed around 1 cm H and 1.5 cm V. I had one control station that was about 50 feet south of a dense row of tall trees, and that one consistently showed about 2 cm H and 3 cm V. I only had to throw out one observation, and it was at a wide-open site; it had a PDOP of around 10, so it wasn't a surprise (the salesman farted loudly that time).
Although I didn't carefully track the precision estimates throughout the sessions, my casual observation is that they settled down to their final range pretty quickly after fixing the integers -- perhaps in a minute or less. I wish now that I'd paid more attention to this for future reference.
There was one control station at which I never was able to get a fix. Cell coverage was okay, but I was within 1/2 mile (maybe less) of an active military radar antenna, and after maybe 15 minutes of frustration I just gave up on that one.
Aside from that and some cell coverage problems -- most of these sites were out in the boonies, and there were a couple of them at which I soundly cursed the Sprint MiFi -- I was happy with the experience.
> Although I didn't carefully track the precision estimates throughout the sessions, my casual observation is that they settled down to their final range pretty quickly after fixing the integers -- perhaps in a minute or less. I wish now that I'd paid more attention to this for future reference.
When you noted above that:
> Repeatability was impressive, with most residuals from the observed means at 5 mm or less in the horizontal components and 15 mm or less vertical. A few of the horizontal residuals were at the cm level, and some of the verticals were at the 3 cm level.
Were the residuals the total horizontal vector from the position to the mean or the differences between the N and E components and the mean?
cool. i think that means your first answer (mine) was correct. sorry i didn't express it as a distance instead of dE & dN.
propagation of variances should be all you need for these types of questions. just be sure to write the correct models for the parameters you are solving for.
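As a sketch of that variance-propagation route, assuming the 0.02 ft. per-coordinate standard errors from the problem and independent Gaussian errors: each coordinate difference has variance equal to the sum of the two coordinate variances, and the distance between the two positions then follows a Rayleigh distribution.

```python
import math

sigma = 0.02                           # standard error in N and in E, each fix (ft)
sigma_d = math.hypot(sigma, sigma)     # sd of each coordinate difference, ~0.028 ft

# For independent Gaussian errors in dN and dE, the distance D between the
# two positions is Rayleigh-distributed with parameter sigma_d.
mean_d = sigma_d * math.sqrt(math.pi / 2)       # expected distance, ~0.035 ft
p95 = sigma_d * math.sqrt(-2 * math.log(0.05))  # 95th percentile, ~0.069 ft

print(f"sd of dN, dE: {sigma_d:.4f} ft; mean D: {mean_d:.4f} ft; 95%: {p95:.4f} ft")
```

These closed-form values agree in order of magnitude with the Monte Carlo table earlier in the thread.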
> I know lots are gleefully embracing RTN but I just can't wrap my head around something where I have to rely on an Equipment Sales Dealer to provide critical measurements for my Survey.
FWIW - Although he's taken another job now, in the past the Trimble "Equipment Sales Dealer" you would have to rely on to "provide critical measurements for your survey" was also the former President of the American Association for Geodetic Surveying. Most salesmen didn't start their careers as salesmen.
those numbers seem plausible. Do they meet the standard? I would say in Texas they would (.10+100ppm), even for points two feet apart. For an ALTA survey... most of the time. Certainly extra care would have to be used to make sure points in close proximity were positioned inside the .07 error budget.
This was fun.
Cool. The 0.04 has been located. Smiling here. I have this mental image of this surveyor chasing his tail around trying to stake a point using RTN. Dang, it's over there 0.06. Oops, now it's north 0.05, shucks. Dang, I can't get there from here.
I will add that I have seen some RTK systems do quite a bit better than .02N/.02E. So it is entirely possible for some RTK systems to easily have results that even do better than the 0.07 ALTA standard at close range. Off the top of my head, Javad, Altus, and Champion (using the Trimble receiver) all had standard deviations in testing that would suggest adequate precision for the task.
Could a total station measuring directly between two points 50' apart give a better accuracy than RTK? Sure. Is the difference in accuracy enough to disqualify the lesser? That's a matter of professional judgement, but I would say there are certainly circumstances which would render the difference trivial.
I'm still not entirely sold on RTN, but I'm open to it. There is a great deal of integrity monitoring that has to take place. Some seem well operated with sufficient metadata to know what datum and epoch to expect to be on. Others less so. My own experience has been so brief and limited to one single provider, I hesitate to offer any opinion.
> Were the residuals the total horizontal vector from the position to the mean or the differences between the N and E components and the mean?
The latter. I computed them in a spreadsheet from the coordinates returned by TBC.
It's probably worth noting that my unfamiliarity with the system leaves me wondering what kind of GPS survey I ended up with. When I pulled the job file into TBC, the points showed up with discrete vectors from individual reference stations, and I was asked to specify the desired geoid model. This was unexpected, because my (admittedly limited) understanding of RTN is that the network software creates a virtual base station at each rover position in order to model out the differences in iono delay between the rover and the actual reference stations. It looks kind of like I got single-base RTK solutions, but if that's the case I wouldn't expect the kind of vertical precision I got, since my vectors run between 9 km and 24 km, with most in the 15-20 km range. Another factor may be my choice of "observed control point" instead of a more typical RTK-length shot. Perhaps someone more fluent in Trimblese can weigh in on these matters.
> those numbers seem plausible. Do they meet the standard? I would say in Texas they would (.10+100ppm), even for points two feet apart.
Note that in that particular example, i.e. with standard errors of 0.020 ft. in the N and E values of the coordinates of the "as found" and "as replaced" markers, 5% of the errors were greater than about 0.062 ft. One of the assumptions of the problem was that the positioning process was uniform, i.e. that all N and E values had the same uncertainties. In the real world, when surveyors and their employees push GPS into marginal locations, the uncertainties would most likely be significantly larger.
The conclusion that a professional surveyor ought to draw from this example, I think, is that any time you're positioning things about 150 ft. apart or less, it's time to crack out the total station to drastically reduce the random noise in the network RTK positions.
From reading some of the remarks by users of network RTK, it's pretty obvious that many are making the fundamental mistake of considering the deviations from the mean of two repeats to be a measure of positioning quality, forgetting that the mean of two uncertain quantities itself has a significant uncertainty.
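That last point is easy to demonstrate numerically: with only two repeats, each shot's deviation from the pair mean runs about sigma over root two, and the mean itself is still uncertain by sigma over root two about the true position, so agreement between repeats understates the true per-shot error. A quick sketch (Python, NumPy assumed, simulating one coordinate):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.02          # per-shot standard error in one coordinate (ft, assumed)
n = 100_000           # number of simulated observation pairs

shots = rng.normal(0.0, sigma, size=(n, 2))   # two repeats of one coordinate
pair_mean = shots.mean(axis=1)

spread = (shots[:, 0] - pair_mean).std()  # apparent scatter of a shot about the pair mean
mean_err = pair_mean.std()                # actual error of the pair mean about truth (0)

print(f"apparent spread {spread:.4f} ft, true error of mean {mean_err:.4f} ft")
```

Both numbers come out near 0.014 ft here, while the true per-shot error is 0.02 ft: the repeats look better than they really are, and the mean is far from error-free.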
> Dang, it's over there 0.06. Oops, now it's north 0.05, shucks. Dang, I can't get there from here. 😀
Yeah, that describes RTK staking as well as anything.
I've done extensive testing of RTK and the way I do it is different from what I've seen talked about here. Maybe everyone does the same thing, but I haven't heard that they do.
What we do is set out points randomly without any locations (these points don't need to be very far apart; after all, I already know RTK works great over a mile). Then we locate them with the robot, total station, tape, and RTK. I think that is the best comparison available.
All the repeating and coming back the next day for multiple locations are nice, but what matters is the real distance between two points compared with the distance inversed between the two RTK locations.
From doing that I would say Kent's 0.04' is optimistic, but possible. Better than that, not so much.
Everyone should take the time to test their systems and feel like they understand just what limitations that system has.