Javad integrates magnetic locator into the rover pole

(@kent-mcmillan)
Posts: 11419
 

Jim Frame, post: 383342, member: 10 wrote: I believe this is true, and that static will produce a more accurate result (though often not by much) most of the time. But "fast" in this case can mean more than just observation time on a point; it can also mean getting results in real time that dramatically speed up the process of finding the points one wants to position. The latter -- being able to find boundary marks in real time, rather than having to post-process control point data and then return to the site to search for monuments -- is the most valuable aspect of RTK to me.

Yes, but isn't decimeter accuracy more than adequate to recover monuments? I've found that it nearly always is on sites where GNSS is feasible in the first place. And if all you really need to find stuff is decimeter accuracy, aren't there other options short of centimeter-level RTK?

On a site where you are necessarily going to have to mix conventional observations with GPS vectors anyway just to tie in what needs to be tied and to avoid the sort of ugly relative local accuracy that RTK produces, I'd think it makes much more sense to forget about RTK and use PPK and Static, particularly when you consider that you'd have to repeat one RTK vector more than six times just to get an uncertainty as low as one Fast Static vector would yield.

I realize that the folks who just use their RTK rig as a number box won't have any qualms about staking out from RTK-generated control and then fudging the results if they ever discover where the markers they set actually ended up, but for permanent boundary markers, the best practice has to continue to be setting out from adjusted control.

It's true that careful surveying isn't nearly as fast as what can be done with RTK, particularly now that Locate-a-Ping technology is on the market.

 
Posted : July 29, 2016 8:55 pm
(@jim-frame)
Posts: 7277
 

Kent McMillan, post: 383344, member: 3 wrote: Yes, but isn't decimeter accuracy more than adequate to recover monuments?

Decimeter would be better than handheld, certainly, but centimeter is really nice, especially with network corrections so I don't have to mess with setting up and securing a base station -- just get out of the truck and start navigating to my target. Once the target is found, I can record as much or as little data as I want, either for an RTK or post-processed solution.

 
Posted : July 29, 2016 9:46 pm
(@duane-frymire)
Posts: 1924
 

Kent McMillan, post: 383273, member: 3 wrote: That is exactly what you'd get out of adding conventional measurements between points located via RTK and adjusting them in combination. That is, the uncertainties of the conventional angle and distance measurements should be well characterized in any situation where they would be made, so large residuals in the RTK positions as adjusted identify unrealistic weights. For the purposes of rigorous treatment, you could divide the RTK vectors into variance groups, to separate those generated under unquestionably good conditions from those that weren't. If it turns out that the same scalar can be applied to the processor estimates for both groups, then you'd have a basis for concluding that the additional effort to separate them isn't necessary and all you're really interested in is trapping blunders.
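For what that variance-group check might look like in practice, here is a minimal sketch (mine, not from the thread; the numbers and the simplified degrees-of-freedom handling are assumptions for illustration only):

# Estimate a separate variance factor (a scalar on the processor sigmas) for each
# group of RTK vectors from adjustment residuals, then compare the groups.
import math

def group_variance_factor(residuals, apriori_sigmas):
    # Root of the mean squared standardized residual for one group; in this sketch
    # the degrees of freedom are approximated by the observation count.
    ssr = sum((r / s) ** 2 for r, s in zip(residuals, apriori_sigmas))
    return math.sqrt(ssr / len(residuals))

# Group A: vectors observed under unquestionably good conditions (illustrative values, metres).
open_sky = group_variance_factor([0.007, -0.009, 0.008], [0.008, 0.008, 0.008])
# Group B: vectors observed under marginal conditions.
marginal = group_variance_factor([0.015, -0.022, 0.018], [0.008, 0.008, 0.008])

print(f"open-sky scalar: {open_sky:.2f}")   # ~1.0: processor estimates look realistic
print(f"marginal scalar: {marginal:.2f}")   # ~2.3: sigmas should be inflated for this group
# If both scalars came out about the same, keeping the groups separate would add nothing,
# and the conventional ties would mainly serve to trap blunders.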

Yes, and under ALTA contracts 10-15 years ago you could do this once or twice and then use similar procedures on subsequent projects. Now it appears you would need to measure all points with conventional methods on all projects and perform this analysis. Which of course means using RTK would only add to the work, with no logical reason to use it at all on an ALTA project.

But I wouldn't be surprised to see this change. If the developers of RTK can provide testing showing that their error estimates are realistic and that their blunder detection works, they could get it approved under the next ALTA revision (maybe).

I'll take this opportunity to challenge JAVAD and his people to be the first to get approved use under the next ALTA revision. I know he likes a challenge.

 
Posted : July 30, 2016 3:17 am
(@kent-mcmillan)
Posts: 11419
 

Duane Frymire, post: 383353, member: 110 wrote: I'll take this opportunity to challenge JAVAD and his people to be the first to get approved use under the next ALTA revision. I know he likes a challenge.

Yes, and have the ALTA standards rewritten to require Locate-a-Ping methods, too. :>

 
Posted : July 30, 2016 4:40 am
(@kent-mcmillan)
Posts: 11419
 

Jim Frame, post: 383345, member: 10 wrote: Decimeter would be better than handheld, certainly, but centimeter is really nice, especially with network corrections so I don't have to mess with setting up and securing a base station -- just get out of the truck and start navigating to my target. Once the target is found, I can record as much or as little data as I want, either for an RTK or post-processed solution.

I'd consider you to be one of the few sophisticated RTK users who post here, so I'll ask. How realistic have you found the processor-estimated uncertainties of the NetworkRTK positions to be? On a related topic, how do you demonstrate that your surveys that rely upon RTK-derived positions comply with relative positional accuracy standards such as that of the ALTA/NSPS spec?

 
Posted : July 30, 2016 4:45 am
(@kent-mcmillan)
Posts: 11419
 

Duane Frymire, post: 383353, member: 110 wrote: Yes, and under ALTA contracts 10-15 years ago you could do this once or twice and then use similar procedures on subsequent projects. Now it appears you would need to measure all points with conventional methods on all projects and perform this analysis.

That simply isn't true, though. Error analysis can be done productively via least squares adjustment software such as Star*Net even on surveys without high degrees of redundancy, if the weights are realistic. Angles and distances measured with a total station tend to be highly consistent in their uncertainties, and their standard errors, once derived, can be used from project to project without surprises. GPS vectors are similar in that the processor estimates of uncertainty for vectors of a similar class tend to differ systematically from realistic values, and the scalar corrections tend not to vary much from project to project.

Since it is usually a given that conventional measurements will need to be made in the course of a survey, conventional measurements between points positioned via GPS vectors will be available at no additional effort to add to the adjustment and use to test the GPS vectors.
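As a concrete illustration of that test (a sketch with illustrative numbers only, not from any project mentioned here): compare a total-station distance between two GPS-positioned points with the distance implied by their coordinates, and standardize the misclosure by the combined uncertainty of the two measurements.

import math

# Two points as positioned by GPS vectors (metres), with an assumed relative
# horizontal uncertainty between them (1-sigma).
north_a, east_a = 1000.000, 5000.000
north_b, east_b = 1123.456, 5234.567
sigma_gps_rel = 0.010

# Distance measured with the total station and its assumed standard error.
measured_dist = 265.076
sigma_dist = 0.003

gps_dist = math.hypot(north_b - north_a, east_b - east_a)
misclosure = measured_dist - gps_dist
combined_sigma = math.hypot(sigma_gps_rel, sigma_dist)

print(f"GPS-implied distance: {gps_dist:.3f} m")
print(f"misclosure: {misclosure:+.3f} m ({misclosure / combined_sigma:+.1f} sigma)")
# Standardized misclosures consistently near zero suggest the GPS weights are realistic;
# values routinely beyond about 2 sigma suggest optimistic weights or a blunder.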

 
Posted : July 30, 2016 4:57 am
(@jim-frame)
Posts: 7277
 

Kent McMillan, post: 383356, member: 3 wrote: How realistic have you found the processor-estimated uncertainties of the NetworkRTK positions to be?

I haven't done any work that relies solely on network (or other) RTK for positioning boundary control. But I just looked at an ALTA I did that included network RTK ties to 8 such points, and the horizontal RTK vector component residuals are all 0.03' or less (mostly much less). (The vertical residuals were larger, more in the 0.05' range.)

On that particular survey, I was also able to make good use of RTK to locate some improvements that lay on the other side of a densely-vegetated fenceline in a materials storage yard that had very poor sight line conditions. Locating those features via total station would have required hours of traversing, but even with redundant observations I was able to get everything with RTK in about half an hour.

 
Posted : July 30, 2016 6:06 am
(@kent-mcmillan)
Posts: 11419
 

Jim Frame, post: 383364, member: 10 wrote: On that particular survey, I was also able to make good use of RTK to locate some improvements that lay on the other side of a densely-vegetated fenceline in a materials storage yard that had very poor sight line conditions. Locating those features via total station would have required hours of traversing, but even with redundant observations I was able to get everything with RTK in about half an hour.

Yes, I certainly don't doubt that GPS is a good mapping tool, but was wondering specifically about how realistic you found processor-estimated uncertainties in RTK vectors to be. It sounds as if you were mostly using Network RTK solutions. Were the observations that you used in the form of vectors from some point, or were they coordinates with uncertainties? Can you tell from the adjustment statistics whether the error factors generated for the vectors and/or coordinates indicated realism, or is the number of vectors/coordinates in that project simply too small to evaluate meaningfully?

 
Posted : July 30, 2016 6:16 am
(@jim-frame)
Posts: 7277
 

Were the observations that you used in the form of vectors from some point

Yes, vectors from the network base, which was about 3 km distant.

Can you tell from the adjustment statistics whether the error factors generated for the vectors and/or coordinates indicated realism

I applied a scalar of 1.5 to the vector error estimates in my adjustment, which is consistent with similar work done on other projects with this system. The in-the-field error estimates were all about 0.03' horizontal.

I certainly don't doubt that GPS is a good mapping tool

FWIW, this application was more demanding than just mapping. I used the RTK-only positions to dimension the distances from the offsite improvements to the property line, and was comfortable doing so to the nearest 0.1', which is what I normally do for ALTA surveys.

 
Posted : July 30, 2016 7:12 am
(@mark-mayer)
Posts: 3363
Registered
 

Kent McMillan, post: 383365, member: 3 wrote: ...wondering specifically about how realistic you found processor-estimated uncertainties in RTK vectors to be...

Lately I'm using RTK vectors in my StarNet adjustments exclusively and it is rarely necessary to scale the errors by anything other than a factor of 1. Occasionally I double the vertical error component.

 
Posted : July 30, 2016 7:31 am
(@kent-mcmillan)
Posts: 11419
 

Mark Mayer, post: 383372, member: 424 wrote: Lately I'm using RTK vectors in my StarNet adjustments exclusively and it is rarely necessary to scale the errors by anything other than a factor of 1. Occasionally I double the vertical error component.

What are you adjusting the RTK vectors in combination with? Just other RTK vectors or some higher-accuracy measurements?

 
Posted : July 30, 2016 7:58 am
(@kent-mcmillan)
Posts: 11419
 

Jim Frame, post: 383364, member: 10 wrote: I haven't done any work that relies solely on network (or other) RTK for positioning boundary control. But I just looked at an ALTA I did that included network RTK ties to 8 such points, and the horizontal RTK vector component residuals are all 0.03' or less (mostly much less). (The vertical residuals were larger, more in the 0.05' range.)

On that particular survey, I was also able to make good use of RTK to locate some improvements that lay on the other side of a densely-vegetated fenceline in a materials storage yard that had very poor sight line conditions. Locating those features via total station would have required hours of traversing, but even with redundant observations I was able to get everything with RTK in about half an hour.

To be clear, when you wrote that you "haven't done any work that relies solely on network (or other) RTK for positioning boundary control", does that mean that all RTK-derived positions from which the boundary was determined, either to primary or secondary control, were also connected by conventional measurements?

 
Posted : July 30, 2016 8:02 am
(@norman-oklahoma)
Posts: 7610
Registered
 

Kent McMillan, post: 383378, member: 3 wrote: What are you adjusting the RTK vectors in combination with? Just other RTK vectors or some higher-accuracy measurements?

Total Station measurements.

 
Posted : July 30, 2016 8:47 am
(@kent-mcmillan)
Posts: 11419
 

Norman Oklahoma, post: 383386, member: 9981 wrote: Total Station measurements.

That certainly would be the best way of validating RTK-derived positions over relatively small areas in urban settings, particularly if the standard errors of target and instrument centering and of the angles and distances are well determined.

Even in rural settings, conventional measurements between sets of GPS-positioned points can be a very useful and efficient test.

 
Posted : July 30, 2016 8:53 am
(@norman-oklahoma)
Posts: 7610
Registered
 

Kent McMillan, post: 383392, member: 3 wrote: That certainly would be the best way of validating RTK-derived positions over relatively small areas

It would be stretching the truth to claim that every single position determined by an RTK vector gets redundantly measured with the TS. Sometimes a remote position gets located by RTK alone. When it does, a redundant vector, time separated, is appropriate. Usually in the PNW I am using GPS for the purpose of tying the control to State Plane and/or to provide a measure of closure to a linear traverse. The bulk of the work is TS. In Oklahoma, of course, GPS vectors were the rule and the TS measurements the exception. But the principle remains.

With modern RTK, when used properly, the quality of the vectors is very close to what can be achieved with static.

 
Posted : July 30, 2016 9:17 am
(@jim-frame)
Posts: 7277
 

Kent McMillan, post: 383379, member: 3 wrote: To be clear, when you wrote that you "haven't done any work that relies solely on network (or other) RTK for positioning boundary control", does that mean that all RTK-derived positions from which the boundary was determined, either to primary or secondary control, were also connected by conventional measurements?

Yes. Not that I won't at some point rely only on (redundant) RTK for boundary control, just that I'm still getting comfortable with the technology.

 
Posted : July 30, 2016 9:38 am
(@kent-mcmillan)
Posts: 11419
 

Norman Oklahoma, post: 383398, member: 9981 wrote: With modern RTK, when used properly, the quality of the vectors is very close to what can be achieved with static.

I suppose it depends upon how one defines "very close". The working definition I use is whether or not the results meet some common positional uncertainty specification and are highly reliable.

This Trimble spec sheet for their GNSS R8, for example, quotes the following uncertainties for that receiver:

Static GNSS Surveying
3mm + 0.1ppm RMS Horizontal
3.5mm + 0.4ppm RMS Vertical

Static and FastStatic
3mm + 0.5ppm RMS Horizontal
5mm + 0.5ppm RMS Vertical

PPK and RTK (Single baseline <30km)
8mm + 1ppm RMS Horizontal
15mm + 1ppm RMS Vertical

In other words, on a 5km line, it would take about six repeats of an RTK vector to get a result with an uncertainty as low as can be had with one Static or FastStatic vector.

http://trl.trimble.com/docushare/dsweb/Get/Document-140079/022543-079N_TrimbleR8GNSS_DS_1014_LR.pdf
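A quick back-of-the-envelope check of that repeat count, assuming independent RTK repeats average down by 1/sqrt(n) (my assumption, not something the spec sheet states):

import math

baseline_km = 5.0
static_h = 3.0 + 0.5 * baseline_km   # 3 mm + 0.5 ppm -> 5.5 mm on 5 km
rtk_h = 8.0 + 1.0 * baseline_km      # 8 mm + 1.0 ppm -> 13.0 mm on 5 km

# Smallest n such that rtk_h / sqrt(n) <= static_h.
repeats = math.ceil((rtk_h / static_h) ** 2)
print(f"static: {static_h:.1f} mm, RTK: {rtk_h:.1f} mm, repeats needed: {repeats}")  # -> 6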

 
Posted : July 30, 2016 9:58 am
(@mark-mayer)
Posts: 3363
Registered
 

Kent McMillan, post: 383337, member: 3 wrote: ..in best Perry Mason "got'cha" voice..

I'm a big fan of Perry Mason. I've got all nine seasons on DVD.

Like all courtroom lawyers, Perry loved it when a witness would make an absolute statement like "my measurements are correct within the stated limits because it passed the 95% confidence" because it is so easy to discredit the statement, and with it the witness's whole testimony. Better to say that the 95% confidence further implies that 99% are within half again the error stated, 99.9% are within double, and there may possibly be an outlier somewhere in the world.
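A rough check of that rule of thumb, assuming one-dimensional normally distributed errors (my assumption; the exact figures differ a little for two-dimensional horizontal errors):

import math

def prob_within(k_sigma):
    # Probability that a normal error falls within +/- k_sigma.
    return math.erf(k_sigma / math.sqrt(2.0))

k95 = 1.96  # a 95% bound corresponds to about 1.96 sigma
for scale in (1.0, 1.5, 2.0):
    print(f"{scale:.1f} x the 95% bound -> {100 * prob_within(scale * k95):.2f}% inside")
# -> 95.00%, 99.67%, 99.99%: close to the quoted "99% within half again, 99.9% within double"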

 
Posted : July 30, 2016 10:29 am
(@kent-mcmillan)
Posts: 11419
 

Mark Mayer, post: 383408, member: 424 wrote: I'm a big fan of Perry Mason. I've got all nine seasons on DVD.

Like all courtroom lawyers, Perry loved it when a witness would make an absolute statement like "my measurements are correct within the stated limits because it passed the 95% confidence" because it is so easy to discredit the statement, and with it the witness's whole testimony. Better to say that the 95% confidence further implies that 99% are within half again the error stated, 99.9% are within double, and there may possibly be an outlier somewhere in the world.

In the real world, though, when you're testifying to a Texas jury, you explain it much more simply. If pressed, you'd say something like "I tested my results using standard methods used by professional surveyors and found nothing about my survey that would lead me to think it was other than correct and reliable for practical purposes."

In the example I gave, we had just finished measuring pavement widths on a georeferenced pdf made from an old orthophoto and opposing counsel couldn't believe that the average width I'd derived was, unknown to me, pretty much exactly what another surveyor had previously determined by a survey on the ground. When I told the jury what the average width of the pavement feature that appeared in the old aerials was, it was "a little over 23 feet", or something along those lines. The opposing side was claiming 12 feet.

 
Posted : July 30, 2016 11:45 am
(@mark-mayer)
Posts: 3363
Registered
 

Kent McMillan, post: 383405, member: 3 wrote: This Trimble spec sheet for their GNSS R8, for example, quotes the following uncertainties for that receiver:

PPK and RTK (Single baseline <30km)
8mm + 1ppm RMS Horizontal
15mm + 1ppm RMS Vertical

I expect that the quoted spec for RTK is for a single epoch. My experience with average results for 15, 30, 60, 90, and 180 second occupations says that:
a) the error in an RTK vector is highly dependent on PDOP.
b) the error in an RTK vector drops significantly as the occupation time increases up to about 90 seconds (assuming a PDOP of around 2, or better). After that the "yield curve" flattens out.
c) with good PDOP and 90 seconds of occupation, the target centering becomes the more significant source of error in the system.
I'm routinely seeing residuals from RTK vectors, generated by a 12-year-old Hiper Lite+ system and Survey Pro, on the order of a hundredth.

These days, with GLONASS and moderately good sky, etc., it isn't uncommon to get PDOP of 2 or better. People make the mistake of trying to compensate for higher PDOP with longer occupations, and it just doesn't work that way. That's where people get into trouble with RTK. In cases where the PDOP is much above 2.5 or so, there will be significant advantages in using static methods. Up to a point. We all know there are places where GNSS just shouldn't be used.

 
Posted : July 30, 2016 12:48 pm