
Repeatability of a Survey with RTK

47 Posts
18 Users
0 Reactions
2 Views
(@kris-morgan)
Posts: 3876
 

Gavin

No, Matthew is correct. It's a slough near the Sabine River. That stuff that is in there, I can't identify, other than to say, it burns your skin.

Horrible job, great memories though. 🙂

 
Posted : 23/08/2012 6:47 am
(@kris-morgan)
Posts: 3876
 

Kris

:good:

 
Posted : 23/08/2012 6:47 am
(@kent-mcmillan)
Posts: 11419
Topic starter
 

Just to clarify the problem, the situation is that on one day, using some positioning technology that gives coordinates with standard errors of 0.02 ft. (say) in N and in E values, the system used displays the coordinates for a boundary marker that you find (or originally set). Then, on some later day, after the marker and all traces of it have been destroyed, you set a new marker at a position that the same positioning system says has the identical coordinates, values that also have standard errors of 0.02 ft. in N and in E.

So, just using the coordinates displayed by the positioning system, you would think that the replaced marker is exactly where the marker originally was, right? Or you recognize that there is some random error in the replacement, but still the most probable position of the replaced marker is exactly where the marker originally was.

The mathematical situation is this:

1. Call the original marker (1) and the replaced marker (2). The coordinates displayed for (1) are N(1), E(1), but since both contain some unknown errors n(1) and e(1), the actual position of the original marker is N(1) + n(1), E(1) + e(1).

2. The coordinates displayed for (2) when it was set with the utmost care are exactly the same as those of (1): N(1), E(1). However, those coordinates also contain unknown errors n(2) and e(2). So the actual position of the replaced marker is N(1) + n(2), E(1) + e(2).

3. The North and East components of the difference in the actual positions of the original and replaced markers are n(1) - n(2) for Northing and e(1) - e(2) for Easting.

4. Since n(1) and n(2) both have standard errors of +/-0.02 ft., and assuming they are random and normally distributed, the quantity n(1) - n(2) is also normally distributed, but with a standard error of +/- SQRT(2) x 0.02 = +/-0.028 ft.

5. Likewise for the quantity e(1)- e(2). It is normally distributed with a standard error of +/-0.028 ft.

6. If D is the error in the actual replaced position of (2), that is, the distance from the actual position of the replaced marker to the actual position of the original marker, then D = SQRT{ [n(1) - n(2)]^2 + [e(1) - e(2)]^2 }

7. The actual values of the random errors are not known, but as derived above, the standard errors of the n and e differences are +/-0.028 ft.

8. However, D, the error in the actual replaced position of (2) propagated from the random errors in the n and e differences, is not normally distributed. That is where the answer to the question lies. Intuition that said the most likely error was zero has failed. Zero is actually a very, very unlikely error. Interestingly, there is a larger number that is the error most likely to occur.
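The eight steps above can be checked with a quick Monte Carlo sketch (Python is my choice here, not anything from the thread): draw the four component errors, form D over many trials, and see where the histogram of D peaks.

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 0.02  # standard error of each displayed coordinate, in feet

# Unknown errors in the original marker (1) and the replacement (2)
n1, e1, n2, e2 = rng.normal(0.0, sigma, size=(4, 200_000))

# D = distance between the true positions of the two markers (step 6)
D = np.hypot(n1 - n2, e1 - e2)

# The component differences have standard error sqrt(2)*sigma, ~0.028 ft
print(round(float(np.std(n1 - n2)), 3))

# The histogram of D peaks well away from zero, near 0.028 ft
counts, edges = np.histogram(D, bins=200)
print(round(float(edges[np.argmax(counts)]), 3))
```

With 200,000 trials the empirical standard error of n(1) - n(2) lands close to 0.028 ft, and the peak of the D histogram sits near that value rather than at zero, which is the point of step 8.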

 
Posted : 23/08/2012 7:04 am
(@kent-mcmillan)
Posts: 11419
Topic starter
 

> Our normal check to existing GPS points is 0.03'. However, you're looking for the stats, so if you're saying that the standard error is 0.02', then that's one sigma, so empirically, (0.02'x2)x2=0.08' for a 95% confidence.

Kris, the question is somewhat different. There actually is an error in this situation that is the most probable error, and it won't be zero. The implication is that a surveyor who proceeds as described is saying that he or she definitely expects to miss the original corner by a certain amount and considers it quite unlikely that the replaced marker will actually hit the corner. This would be an inherent characteristic of network RTK, for example, if used as described in this problem.

 
Posted : 23/08/2012 7:47 am
(@kris-morgan)
Posts: 3876
 

Kent

You should google the "Delphi Method" for analyzing data. It seems you're missing a critical component for this method.

🙂

 
Posted : 23/08/2012 7:53 am
(@kent-mcmillan)
Posts: 11419
Topic starter
 

Kent

> You should google the "Delphi Method" for analyzing data. It seems you're missing a critical component for this method.

Actually, this is a fairly clean statistical exercise that gives a result that is quite realistic, even if many surveyors don't expect it or want to believe otherwise. I'll give everyone a chance to work the problem. There's no hurry.

 
Posted : 23/08/2012 8:09 am
(@ashton)
Posts: 562
Honorable Member Registered
 

When the ability to remember the right words to look up in the index of a statistics book fails, there is always virtual experimentation.

Using Excel, define two random variables, u and v, where u is the distance in the x direction from the lost point to the reset point (negative if the reset point is west of the lost point). The definition of v is similar, except in the north direction. The standard deviation of u and v is about 0.028, as Kent derived. So I just generated 200 sets of u and v, found the distance from (0, 0) to (u, v) for each trial, and plotted a histogram of the results. The histogram was indeed non-normal. Empirically, the value for which 100 values were greater and 100 were less was 0.033 ft.

I believe a more realistic model of a process such as RTK might be a distance r, which has a standard deviation of, for example, 0.02 ft, and a uniformly distributed random direction. I believe this would give slightly different results.
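Ashton's Excel experiment translates directly into a short script. This is a sketch: 0.028 ft is the standard error of the coordinate differences from Kent's derivation, and the distance from the lost point to the reset point then follows a Rayleigh distribution whose theoretical median is sigma * sqrt(2 * ln 2), about 0.033 ft.

```python
import numpy as np

rng = np.random.default_rng(7)
sigma_diff = 0.028  # std. dev. of u and v in feet, per Kent's derivation

# 200 trials: (u, v) offset of the reset point from the lost point
u = rng.normal(0.0, sigma_diff, 200)
v = rng.normal(0.0, sigma_diff, 200)
dist = np.hypot(u, v)

# Empirical median vs. the Rayleigh median sigma*sqrt(2*ln 2) ~ 0.033 ft
print(round(float(np.median(dist)), 3))
print(round(float(sigma_diff * np.sqrt(2 * np.log(2))), 3))
```

With only 200 trials the empirical median wanders a bit from run to run, but it clusters around the 0.033 ft figure Ashton reported.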

 
Posted : 23/08/2012 9:54 am
(@kris-morgan)
Posts: 3876
 

Kent

So you're not going to look up alternative methods to quantify data? Seems odd for you, but okay brother! 🙂

 
Posted : 23/08/2012 9:58 am
(@kent-mcmillan)
Posts: 11419
Topic starter
 

> The standard deviation of u and v is about 0.028 as Kent derived. So I just generated 200 sets of u and v, found the distance from (0, 0) to (u, v) for each trial, and plotted a histogram of the results. The histogram was indeed non-normal. Empirically, the value for which 100 values were greater and 100 values were less was 0.033 ft.

If you generated the trials correctly, the answer to the problem is the bin of the histogram that contains the most members (assuming that all of the bins have the same interval). That is the maximum-probability error, the error most likely to occur.

> I believe a more realistic model of a process such as RTK might be a distance r, which has a standard deviation of, for example, 0.02 ft, and a uniformly distributed random direction.

The usual way of modeling this would be as standard deviations of the x and y components with their covariances. I assumed for the purposes of this that the N and E values that the positioning system returned were not highly correlated (a generous assumption).
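Kent's remark that the fullest histogram bin marks the most probable error has a closed form: if the N and E differences are independent zero-mean normals with standard error sigma, the distance D follows a Rayleigh distribution with density f(r) = (r/sigma^2) exp(-r^2 / (2 sigma^2)), which is zero at r = 0 and peaks at exactly r = sigma, about 0.028 ft here. A numerical check of that claim:

```python
import numpy as np

sigma = 0.028  # standard error of the N and E differences, in feet

# Rayleigh density of the radial error D, evaluated on a fine grid
r = np.linspace(0.0, 0.15, 150_001)
pdf = (r / sigma**2) * np.exp(-r**2 / (2 * sigma**2))

# Density is zero at r = 0 and maximal at r = sigma
print(pdf[0])                              # 0.0
print(round(float(r[np.argmax(pdf)]), 3))  # 0.028
```

So under these assumptions, an error of zero is the least likely small outcome, and the single most likely miss distance equals the 0.028 ft standard error of the coordinate differences.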

 
Posted : 23/08/2012 10:47 am
(@kent-mcmillan)
Posts: 11419
Topic starter
 

Kent

> So you're not going to look up alternative methods to quantify data?

The so-called "Delphi Method", applied to subjects like this, would be reverting to a faith-based approach that no professional land surveyor would want to be standing near when it blows up.

The problem I posed is a fairly straightforward statistics problem, so any "expert" who comes up with a different answer probably isn't an expert. :>

 
Posted : 23/08/2012 10:54 am
(@kris-morgan)
Posts: 3876
 

Kent

Actually, no. Did you look it up? Evidently not.

This method is widely used by folks with more brains than you and me, and much more money, for making huge decisions when evaluating data.

 
Posted : 23/08/2012 10:57 am
(@kent-mcmillan)
Posts: 11419
Topic starter
 

Kent

> This method is widely used by folks with more brains than you and me, and much more money, for making huge decisions when evaluating data.

Kris, of course I looked it up. It obviously has zero application to this problem.

 
Posted : 23/08/2012 11:07 am
(@kris-morgan)
Posts: 3876
 

Kent

I totally disagree. The premise behind the Delphi Method is an application of quantifiable data in conjunction with qualifiable data for estimating and decision making.

To bring it around to this subject, no one disagrees with your mathematical approach to the issue. However, it's quite difficult to argue with results. So one of two things has happened: either (A) you did the math wrong (which I highly doubt), or (B) there are more things at play here than your mathematical model can illustrate.

I've said it many times, our RTK gear gets 0.03' reliability most of the time. It's true we "find" (as you said) bad answers, but checking is necessary, as I'm sure you agree.

As always with us surveyors, the proof is in the dirt. So I'm quite capable of sitting here and telling you that your math is right and your conclusions are wrong. Why? I've witnessed it and tested it SO many times, it's really just not worth mentioning. I have ZERO problems setting pairs, traversing, and closing into pairs without having them least-squares adjusted into some model. They are good enough (most of the time) to stand on their own.

Am I advocating one-shot RTK? Of course not. Because I know your math is right, I'm advocating not throwing the gear away because it's not static and used in Star Net, but using it with care and checking. I've run between enough pairs and checked enough to say it's so close you won't be able to tell if it's the gun, the instrument operator, or the GPS points, when done correctly.

That's the qualitative side of it. It's not faith based at all, in fact, it's a quantifiable experience from using the gear.

I believe you said it best, and I may be paraphrasing here "You don't need a degree in metallurgy to use a hammer" (I have to admit, I'm quite fond of that saying and have used it many many times, so I hope you don't mind adding it to my vernacular). And as you've stated OVER AND OVER, you don't use RTK gear. I think you'll find, WHEN you bite off into it, that you've been missing a huge productivity multiplier and can really work it into your everyday work (most of the time).

It's been fun. I enjoy finally getting you to say something. 🙂 Trust me, if your math were the only thing at play, it wouldn't be so popular. No one would use it.

 
Posted : 23/08/2012 11:22 am
(@kent-mcmillan)
Posts: 11419
Topic starter
 

Kent

> However, it's quite difficult to argue with results. So, then either one of two things has happened. Either (A), you did the math wrong (which I highly doubt) or (B) there are more things at play here than your mathematical model can illustrate.

Kris, the problem is in fact a purely statistical one. Are you saying that you don't think that random RTK errors are normally distributed? What is the basis for that assertion, if so?

 
Posted : 23/08/2012 11:47 am
(@kris-morgan)
Posts: 3876
 

Kent

Now where did I say that? Nowhere. No, what I said was that the Delphi method incorporates both the quantifiable (numbers) and the qualifiable (experience) into the mix. Refusing to look at one, well, that's a little myopic.

You gotta give the 5700 a spin with the RTK turned on brother! You're gonna love it!

 
Posted : 23/08/2012 11:59 am