I am referring to the network accuracy of the position of a point on the ground obtained with a single observation, and I have the same opinion about a single static observation. My experience is that there are too many variables affecting the accuracy to truly determine the estimated accuracy from a single observation. The equipment does a much better job than it did even ten years ago, but I still find differences in positions with multiple observations. Our equipment indicates that we are getting good data, yet our results differ from what is indicated once multiple observations are compared. Of course, my experience is in an area with less than optimum conditions due to tree coverage; we have to be very selective to get good results. I am sure my experience would be somewhat different in wide open spaces.
Example of Relative Positional Precision - RTK
> Gives a mixed message. Thread says RTK, and now you say not RTK
It just means that I didn't do any button-pushing on Shawn's RTK rig to be able to do the analysis, and Shawn didn't post the full RTK vectors. I realize that you would like to have some button-pushing done for it to be real RTK, but the errors as modeled are equivalent to the errors that Shawn got from the button-pushing by which the screenshots he posted were obtained.
Your post raises an issue that most of us no doubt see every day. It is a simple concept that more steps mean more opportunities for error to be introduced. We should not, however, allow that to influence our decision to use the tool when appropriate.
As for ECEF, I'm not quite in agreement, in the same sense that I know my EDM doesn't really 'measure' a slope distance. These are simply expressions of other measurements. Understanding and evaluating their usefulness as a tool in a given situation and environment demands some understanding of the science behind them. This is true going back to the highway chains many of us learned to use a while back. The difference is the amount and complexity of mathematical violence imposed to get an answer our clients can use.
Along this path we need to develop procedures that test our expression of the background measurements. For GPS, one step in this process is evaluating your data and catching errors. A single-baseline RTK solution can be partially evaluated by validated error estimates. You can reduce the probable error by a factor of the square root of two by averaging a pair of repeat measurements (by the square root of n in general), and you can catch bad initializations by repeating under different conditions and initializations.
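As a back-of-the-envelope sketch of that square-root rule (the 0.015 m single-shot sigma is an assumed figure, not from any data in this thread):

```python
import math

# Assumed 1-sigma horizontal error of a single RTK observation (metres).
sigma_single = 0.015

# Standard error of the mean of n independent repeats: sigma / sqrt(n),
# so two repeats cut the probable error by the square root of two.
for n in (1, 2, 4):
    print(f"{n} obs: {sigma_single / math.sqrt(n):.4f} m")
```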
Dual base observations add a new twist to the mix. If done properly you end up with independent expressions of closed loops. The adjustment and validation processes increase dramatically. The limits of our equipment and its expressions of measurements also become clear.
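For what it's worth, here is a minimal sketch of that closed-loop idea; every vector below is invented for illustration, in ECEF metres:

```python
import numpy as np

# Hypothetical vectors (metres); v_xy reads "from x to y".
v_b1_p  = np.array([1520.412,  -884.907,  310.228])  # RTK vector, base 1 to point
v_b2_p  = np.array([-730.588,   415.098, -189.775])  # RTK vector, base 2 to point
v_b1_b2 = np.array([2251.005, -1300.010,  500.010])  # known vector, base 1 to base 2

# Traverse the loop base 1 -> point -> base 2 -> base 1; it should close near zero.
misclosure = v_b1_p - v_b2_p - v_b1_b2
print(misclosure, np.linalg.norm(misclosure))  # a large norm flags a bad initialization
```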
Back to floor prep before I fire the nailer up. I have 22 by 4 feet going down today. I figure I'll allow 3 hours for layout...;-)
Example of Relative Positional Precision - RTK
> No. I do not and never have advocated "button pushing," as you call it. You have to stop putting words into others' mouths.
> At some point the quibbling and mischaracterizations of others views might end and you'll analyze some actual exported RTK data and I'll be the first to appreciate that.
Apparently, you don't understand how the only obvious conclusion to be drawn from your second statement (and in the same post, no less) is that button pushing will represent reality better than error analysis does. I mean, the error analysis of Shawn's RTK data presented above is pretty much a perfect model of the results of his button pushing, and it clearly shows a serious flaw in the work in the form of an excessively large Relative Positional Precision. What could possibly have been improved about Shawn's work by my own hands being on the button instead of his?
Sometimes you guys crack me up.....chasing millimeters when you have your top heavy receiver 2 meters up in the air on a prism pole that has a 'k-mart' type bubble on it that you haven't checked in 4 weeks, said pole sits in the back of your truck bouncing around like a kid on a trampoline and then, you set-up your 2 meter GPS receiver over a #5 rebar that has no dimple and you are guessing where the 'center' of said rebar is...
I'm very limited in using RTK GPS for boundary work; there are simply too many obstructions for me to do so accurately. When I do, I follow my state's guidelines for its use.
You have good selective hearing. How about this: you do a static survey using CORS stations to tie to NAD83. How do you weight your control when you fix to it during post-processing? Zero, I hope. Would not the major axes of the error ellipses for your new control then be their network accuracy? Wouldn't the relative error between those points be local accuracy?

This is all common knowledge, but the point I am making is that the same math applies when calculating the error shown on the Javad screenshot, although it is at one sigma and appears over-optimistic. Your objection was that relative error could not be lower than station error, and I gave NGS data sheets as proof that it can be. The words are different (local accuracy/relative error, network accuracy/station error), but the processes used to get the results are the same. It was simply proof in the math. Sure, you could argue that NGS blah blah says that network error only applies to NAD83, and it does, but I guess I wasn't clear enough in explaining that I was only comparing the processes, with examples for proof.
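To put toy numbers on that shared math (all values invented): local accuracy follows from the two station errors and their covariance, and a strong positive correlation, which a common CORS tie produces, can pull the relative error below either station error.

```python
import math

# Assumed 1-sigma network accuracies of two points (metres) and the
# covariance between their positions (m^2), e.g. from a common CORS tie.
s1, s2 = 0.010, 0.012
cov12 = 0.00008  # correlation of about 0.67

rel_if_independent  = math.sqrt(s1**2 + s2**2)               # ~0.0156 m
rel_with_covariance = math.sqrt(s1**2 + s2**2 - 2 * cov12)   # ~0.0092 m, below both
```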
> The fact that you're wanting to dispute something that is obviously true just tells me that you are completely unaware of how random errors propagate. See my computation toward the bottom of this thread.
This is beginning to be a serious waste of time, because stubbornness cannot be argued with. Nice computation. The output for relative accuracy from Javad is at one sigma, as are nearly all error estimates given by today's RTK data collectors, so it really misses your 95% STAR*NET output by only 0.0104'. One could argue that the north-aligned ellipses in your calcs could contribute to this extra residual. There are also different interpretations of the proper way to calculate relative error... Just curious, but do you know which one STAR*NET uses? I agree that 1-sigma error statements on your DC are pretty useless, but it doesn't take a genius to multiply by 2.
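For the record, the multiply-by-2 shortcut is close but not exact; assuming normally distributed errors, the standard conversion factors from 1 sigma to 95% confidence are:

```python
import math
from scipy.stats import chi2, norm  # assumes SciPy is available

k_1d = norm.ppf(0.975)                   # 1-D quantity (e.g. a distance): ~1.960
k_2d = math.sqrt(chi2.ppf(0.95, df=2))   # 2-D error ellipse semi-axes:    ~2.4477
print(f"1-D: {k_1d:.4f}   2-D: {k_2d:.4f}")
```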
Example of Relative Positional Precision - RTK
A new example from today: longer base-rover vector, longer rover-to-rover inverse. In 2010 we performed a survey of about 50 acres for a small subdivision. We established control points with static GPS mixed with terrestrial (total station) observations and adjusted all of the data in Columbus least squares software. Today we did an as-built of a lot in the subdivision, and I re-observed two primary control points with 120-second RTK observations, with the base on the same control point, "POST", which was held fixed in 2010. The subdivision is about 3 miles north of POST. The inverse distance between the original control points is (all distances are US Survey Feet):
2-3
AZ(LDP Grid): 310°07'16" Horz (LDP Grid) Dist: 2463.4923 Vert (Ortho): -2.5669
from today's RTK observations:
L002-L001
AZ (LDP Grid): 310°07'21" Horz (LDP Grid) Dist: 2463.5409 Vert (Ortho): -2.626
The Javad software computed the relative accuracy of the distance to be 0.0542
NGS G file (Covariance) for the two points L001 and L002 using the same POST base station as the original control (meters by default):
AXX2015 1102015 110G15003
B2015 11017182015 1101718 1JField1.10.3.18IGS 228 1 2 26JAVAD 2015 110IFDDPF
CB001L001 12479610 62 24906921 111 40538492 84 J 105A0001J 105A0001
G 0 L001 L001 WGS84 -4530252425 -53708638932 33988839877
E 1 2 666 1 3 -549 2 3 -5638
B2015 11017 92015 11017 9 1JField1.10.3.18IGS 228 1 2 26JAVAD 2015 110IFDDPF
CB001L002 17982390 43 21833519 71 36457121 75 J 105A0001J 105A0002
G 0 L002 L002 WGS84 -4524749645 -53711712334 33984758506
E 1 2 262 1 3 45 2 3 600
Screen Captures from Javad RTK:
Original Adjustment results from 2010 (US Survey Feet):
POST
F Latitude N 32-21-57.50398 0.000000000
F Longitude W 94-49-56.60028 0.000000000 0.000000000
F Ortho Hgt 402.97900 0.000000000 0.000000000 0.000000000
3
Latitude N 32-24-33.61484 0.000006438
Longitude W 94-49-17.04142 0.000000127 0.000006359
Ortho Hgt 360.25631 -0.000000002 -0.000000002 0.000010955
2
Latitude N 32-24-17.90561 0.000006350
Longitude W 94-48-55.07014 0.000000174 0.000005899
Ortho Hgt 362.82287 -0.000000002 -0.000000004 0.000010445
> Sometimes you guys crack me up.....chasing millimeters when you have your top heavy receiver 2 meters up in the air on a prism pole that has a 'k-mart' type bubble on it that you haven't checked in 4 weeks, said pole sits in the back of your truck bouncing around like a kid on a trampoline and then, you set-up your 2 meter GPS receiver over a #5 rebar that has no dimple and you are guessing where the 'center' of said rebar is...
I hear arguments like this quite a bit. It's worth repeating that good procedure and maintaining equipment in good working order are absolutely critical to producing good results.
What I don't like about the argument is that for many it seems to be an endorsement of apathetic work: "We're all using substandard equipment in poor working order, so why chase hundredths?" I reject the hypothesis that "we're all using substandard equipment in poor working order," so I have to reject the conclusion that "pursuing hundredths is vanity."
Example of Relative Positional Precision - RTK
> A new example from today.
Yes, this is yet another example that shows how screwed up the relative accuracy figure on the screenshot is. By simple calculation, using just the processor-estimated numbers of unknown quality, the Relative Positional Precision between L001 and L002 is going to be SQRT[0.050^2 + 0.062^2] = about 0.08 ft., not 0.054 ft. as displayed.
The missing piece, however, is how much the 95%-confidence error ellipse needs to be scaled in order to make it realistic, which of course your data doesn't disclose. When the covariances of L001 and L002 are correctly scaled, the RPP between them could easily be 0.15 ft. or more.
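A quick check of that arithmetic, with a purely illustrative variance scalar tacked on at the end:

```python
import math

# Processor-estimated 95% semi-major axes from the screenshots (US survey feet).
a_L001, a_L002 = 0.050, 0.062

rpp = math.sqrt(a_L001**2 + a_L002**2)   # ~0.0796 ft, not the 0.054 ft displayed

# If controlled testing suggested a variance scalar of, say, 3.5 (an assumed
# value), each axis grows by sqrt(3.5) and the RPP lands near 0.15 ft.
rpp_scaled = math.sqrt(3.5) * rpp        # ~0.149 ft
```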
Example of Relative Pos. Prec. - RTK (not RTK, maybe)
> Actually, that isn't RTK data.
Well, how complicated is the idea that if you know what the uncertainties of a measuring process are, you can model it without actually taking any measurements yourself?
Shawn provided the model data for his two points. Those represent the *performance* of his RTK system, but are not actually RTK data as I understand the term.
An excellent analogy would be to describe the performance of an EDM in terms of the standard errors of ranges measured with it. Without actually measuring anything, one can derive very good models of what different combinations of measurements with those standard errors will have in the way of accuracy. This is, or should be, well short of Rocket Science.
In the case of Shawn's RTK results, the Relative Positional Precision of the two points he positioned would be quite similar to that of any pair of points positioned at similar distances from the base and under similar conditions. In other words, the poor results he got would be typical.
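In that EDM spirit, a Monte Carlo sketch (the per-point sigmas are assumed, not Shawn's): simulate two radially observed points and look at the scatter of the inverse between them, without measuring anything.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed 1-sigma north/east errors of two radially observed points (ft).
sigma1, sigma2 = 0.025, 0.031
n = 100_000

# Error in the inverse between the points = difference of the two point errors.
d = rng.normal(0, sigma1, (n, 2)) - rng.normal(0, sigma2, (n, 2))
print(d.std(axis=0))                     # ~0.0398 ft per component
print(np.sqrt(sigma1**2 + sigma2**2))    # propagation predicts the same 0.0398 ft
```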
Example of Relative Positional Precision - RTK
> Yes, this is yet another example that shows how screwed up the relative accuracy figure is. By simple calculation, the Relative Positional Precision between L001 and L002 is going to be SQRT[0.050^2 + 0.062^2] = about 0.08 ft., not 0.054 ft. as displayed.
>
Surprisingly, this "issue" hasn't been resolved since we discussed it last night. It is what it is, and the reported relative accuracy is not without benefit. I agree that a relative positional precision figure needs to be included as well.
> The missing piece, however, is how much the 95%-confidence error ellipse needs to be scaled in order to make it realistic, which of course this data doesn't disclose. When the covariances of L001 and L002 are correctly scaled, the RPP between them could easily be 0.15 ft. or more.
What leads you to think there needs to be any scaling of the 95% error ellipse?
The residuals of 3/L001 are:
N 0.052
E -0.048
U -0.051
The minor axis of the error ellipse was nearly North and reported to be 0.047 (vs. 0.052 residual). The major axis of the error ellipse was nearly East and reported to be 0.050 (vs. -0.048 residual) and the elevation axis was reportedly 0.081 (vs. -0.051 residual). These all seem very realistic to me.
The residuals of 2/L002 are:
N -0.022
E -0.047
U +0.009
The major axis of the error ellipse was nearly North and reported to be 0.062 (vs. 0.022 residual). The minor axis of the error ellipse was nearly East and reported to be 0.034 (vs. -0.047 residual) and the elevation axis was reportedly 0.044 (vs. +0.009 residual). Again, these all seem very realistic to me.
Example of Relative Positional Precision - RTK
> What leads you to think there needs to be any scaling of the 95% error ellipse?
Well, the processor-estimated variance-covariance values are the basis from which the 95%-confidence error ellipse was calculated. Based upon your information, those covariances were calculated from the sort of short series of observations that nearly invariably produces optimistic results, i.e., smaller variances than are realistic.
If a surveyor simply uses the processor estimates right out of the box, it makes the RTK vectors look more accurate than they really were. The practical way to deal with this problem is to apply a scalar or "reality factor" to the variances. That has the effect of enlarging the axes of the error ellipses calculated from the variance-covariance data.
This is the entire point of actually doing controlled testing on GPS gear, including RTK: to get a handle upon the appropriate range within which the scalar or reality factor should fall.
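As a sketch of what that scalar does (the covariance values are invented, not Shawn's): multiplying the variance-covariance matrix by a reality factor k stretches each error-ellipse semi-axis by the square root of k.

```python
import numpy as np

# Hypothetical 2x2 horizontal covariance from the processor (m^2, 1-sigma).
C = np.array([[2.5e-5, 0.6e-5],
              [0.6e-5, 1.6e-5]])

k = 4.0  # assumed "reality factor" derived from controlled testing

# Eigenvalues of the scaled matrix are the squared 1-sigma semi-axes.
semi_minor, semi_major = np.sqrt(np.linalg.eigvalsh(k * C))
print(semi_minor, semi_major)  # each exactly sqrt(k) times the unscaled axes
```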
Example of Relative Pos. Prec. - RTK (not RTK, maybe)
The same effect would occur with a total station radial survey. Close in, the total station would give much better results. At, say, a half mile, the total station's error ellipses would start to grow because the measured angles would start to blur. I'd say that at a mile out the total station would be significantly worse (the uncertainty would be much bigger) than RTK, because at a mile or more out the RTK uncertainty would be nearly the same as it is close in.
So it's just a matter of using the tools that give you what you need. If the project is small and needs to be really tight, RTK may not be the right tool. I'm perfectly fine with the real uncertainty of RTK for most of the work that I do.
I think this whole thread shows one of the reasons RTK may not be as good as the specs say. The specs are probably somewhat realistic for the base-to-rover vector, but when you combine the uncertainties of two radial RTK observations the variances add, so the combined uncertainty is roughly 1.4 times (the square root of two) the single-vector spec. No real surprise, but maybe something that isn't obvious without some thought.
Example of Relative Positional Precision - RTK
> > What leads you to think there needs to be any scaling of the 95% error ellipse?
>
> Well, the processor-estimated variance-covariance values are the basis from which the 95%-confidence error ellipse was calculated. Based upon your information, those covariances were calculated from the sort of short series of observations that nearly invariably produces optimistic results, i.e., smaller variances than are realistic.
>
> If a surveyor simply uses the processor estimates right out of the box, it makes the RTK vectors look more accurate than they really were. The practical way to deal with this problem is to apply a scalar or "reality factor" to the variances. That has the effect of enlarging the axes of the error ellipses calculated from the variance-covariance data.
>
> This is the entire point of actually doing controlled testing on GPS gear, including RTK: to get a handle upon the appropriate range within which the scalar or reality factor should fall.
I understand that. But what about this data suggests that it is optimistic? It seems to have fairly accurately predicted how well the coordinates would agree with the real points. (This also assumes that the original control was without error.)
" It's worth repeating that good procedure and maintaining equipment in good working order is absolutely critical to producing good results."
Hmmm... I noticed you said 'good' results. Of course we are always trying to achieve satisfactory results, but I think surveyors sometimes place so much emphasis on their 'good' results that they fail to take 'real world' conditions into account and actually believe their results are more accurate than they really are.
Besides I never said anything about substandard equipment in poor working order, or poor field procedures.
I definitely agree: we need to keep our equipment in good working order. But the equipment we have is not perfect; it has limitations. Let's not pretend it is. You are not surveying in a vacuum where environmental factors don't exist. Just because the LSA you performed tells you that you are within 0.004' doesn't mean you are. Do we do adjustments and survey properly? By all means! If you need to chase those few hundredths in order to accomplish the task, then do it, although I doubt that in most cases you are actually achieving it (especially over long distances).
As a wise man said...'don't strain out the gnat and swallow the camel.'
Example of Relative Positional Precision - RTK
> I understand that. But what about this data suggests that it is optimistic?
The data itself says nothing either way. You're comparing it to positions that don't have uncertainties stated and all you have are the processor estimates of your new survey.
So what is left is the fact that the processor estimate was based upon the very short data set that typically produces an optimistic result. That's something and something pretty much always beats nothing when it's time to make a judgment.
Example of Relative Pos. Prec. - RTK (not RTK, maybe)
> The same effect would occur with a total station radial survey. Close in, the total station would give much better results. At, say, a half mile, the total station's error ellipses would start to grow because the measured angles would start to blur. I'd say that at a mile out the total station would be significantly worse (the uncertainty would be much bigger) than RTK, because at a mile or more out the RTK uncertainty would be nearly the same as it is close in.
>
> So it's just a matter of using the tools that give you what you need. If the project is small and needs to be really tight, RTK may not be the right tool. I'm perfectly fine with the real uncertainty of RTK for most of the work that I do.
>
> I think this whole thread shows one of the reasons RTK may not be as good as the specs say. The specs are probably somewhat realistic for the base-to-rover vector, but when you combine the uncertainties of two radial RTK observations the variances add, so the combined uncertainty is roughly 1.4 times (the square root of two) the single-vector spec. No real surprise, but maybe something that isn't obvious without some thought.
Amen to the radial survey... and a correction on the base/rover idea of combined error. Remember that they function in unison, based on common satellites and corrections via radio; the rover's solution is based on the corrections from the base.
Shawn, I think this thread and topic will be invaluable if you are able to help the Javad software folks implement some of the suggestions Kent has made. Despite his attitude, you have managed to squeeze some useful suggestions out of him. Bravo! I don't think I would have had the patience for that.
Example of Relative Positional Precision - RTK
I am not so sure about scaling the variances. The covariance matrix is what results after scaling the statistics from the LSA by the a posteriori reference variance, correct? Having a short observation period will certainly impact the statistics, but the redundancy of the adjustment (i.e., shorter or longer observations) is already accounted for in the covariance matrix if it is properly built... please explain.
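To make the question concrete, this is the textbook computation in play (residuals, weights, and redundancy all invented for the sketch): the a posteriori reference variance is v'Pv / dof, and it scales the cofactor matrix of the unknowns.

```python
import numpy as np

v = np.array([0.004, -0.007, 0.003, 0.006])  # hypothetical residuals (metres)
P = np.diag([1 / 0.005**2] * 4)              # weights = 1 / (a priori sigma)^2
dof = 2                                      # redundancy of the adjustment (assumed)

sigma0_sq = (v @ P @ v) / dof                # a posteriori reference variance, ~2.2
# Covariance of the unknowns: Sigma_xx = sigma0_sq * Q_xx (the cofactor matrix).
# Note a short session can pass this test internally and still be optimistic,
# because its scatter never samples the slowly varying errors (multipath, atmosphere).
print(sigma0_sq)
```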
It is a misconception, but it's not bizarre; in fact, it is common.