Under Kent McMillan's mega-thread titled "Simple Question for RTK Users/Enthusers", begun on 9/19/2014 at 12:34, several posters made the point that adjusting non-redundant measurements (i.e. spur or radial observations) in an LSQ program is meaningless. Kent replied that it is a good error estimation tool, or in other words a way to present the uncertainties in the computed position based on the a priori statistics from the processing software. I agree with Kent.
I have another reason to do it: computational confidence. I use Geolab, which, I believe, does all of the computations in ECEF. So, if I put in a bunch of non-redundant measurements and fix a point (or more than one), then I know that the resulting coordinates have been correctly computed, with no need to worry about grid factors, etc. I can then convert the results to lat/long, SPC, or UTM and be assured that everything was done correctly. Also, it interpolates and applies the geoid separation where needed, and it calculates deflections of the vertical (from the geoid model) and applies those if required.
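For anyone curious what that ECEF-to-geodetic step looks like under the hood, here is a minimal sketch of the standard iterative conversion on WGS-84 parameters (my own illustration, not Geolab's code; the function name is mine):

```python
import math

# WGS-84 ellipsoid parameters
A  = 6378137.0                # semi-major axis (m)
F  = 1.0 / 298.257223563      # flattening
E2 = F * (2.0 - F)            # first eccentricity squared

def ecef_to_geodetic(x, y, z):
    """Convert ECEF X, Y, Z (m) to latitude, longitude (deg) and ellipsoid height (m)."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)                      # distance from the spin axis
    lat = math.atan2(z, p * (1.0 - E2))       # first guess at latitude
    for _ in range(10):                       # fixed-point iteration; converges quickly
        n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
        lat = math.atan2(z + E2 * n * math.sin(lat), p)
    h = p / math.cos(lat) - n                 # breaks down only right at the poles
    return math.degrees(lat), math.degrees(lon), h
```

Grid coordinates (SPC, UTM) and the geoid separation are further steps on top of this, but the point is that nothing in it depends on a scale factor the user has to supply.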
All of that applies to both conventional and GNSS observations. It is also a great tool for mixing observation types (which are then no longer non-redundant). So everything I do, conventional and GNSS (except topo and hydro shots), gets cranked through Geolab whether there is any redundancy on a particular observation or not. If I weight it properly, I can get a good idea of the accuracy of my results. Of course, it is always great to have redundancy, whether multiple shots or different methods (i.e. GPS static, RTK, total station, and leveling).
When doing a dam deformation survey, we try to shoot every point (usually around 100 or more) from at least two setups. However, there are usually one or two points that only got shot once, most likely due to location or objects blocking the line of sight. But I have good confidence that the error ellipse for these points is just as reliable as for the rest of the stations that were shot twice, because similar methods were used. Of course, the point shot once usually has a slightly larger error ellipse, but that is to be expected without the redundant measurement(s).
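As an aside, the error ellipse itself falls straight out of the 2x2 horizontal covariance that the adjustment carries for each point. A bare-bones sketch of that computation (my own code, assuming numpy and an east/north covariance; axes are 1-sigma, not scaled to 95%):

```python
import numpy as np

def error_ellipse(cov_en):
    """Semi-major axis, semi-minor axis, and azimuth of the 1-sigma error
    ellipse from a 2x2 east/north covariance matrix (units follow the input)."""
    cov = np.asarray(cov_en, dtype=float)
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    semi_minor, semi_major = np.sqrt(eigvals)
    e, n = eigvecs[:, 1]                            # eigenvector of the larger eigenvalue
    azimuth = np.degrees(np.arctan2(e, n)) % 180.0  # major-axis azimuth from north
    return semi_major, semi_minor, azimuth
```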
John and Kent are both spot-on with regard to most points made, but we need to agree that least squares adjustment and error propagation are two different things. While both may be realized from the same computational package, it is a mistake to say or imply that one can use least squares adjustment on non-redundant data. Details matter.
For example, within the past two weeks a medical flight airplane took off from the Las Cruces airport bound for Arizona. Regrettably, the plane's tanks were mistakenly topped off with jet fuel instead of high-octane gasoline. The plane crashed soon after take-off, killing all four persons on board.
True, there is no adjustment being done to non-redundant measurements. But my point is that the LSA software is correctly reducing the data. In the old days, using my trusty HP 41CX calculator and printer (before PCs), I would run a traverse through once using raw distances. Then I would use the preliminary coordinates to compute scale and elevation factors, and run it through again with the corrected grid distances.
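For anyone who never had to grind through that second pass, the arithmetic being repeated is roughly this (a generic combined-factor sketch, not the HP-41 program itself; the mean earth radius is an assumed round value you would pick for your area):

```python
R_EARTH = 6372000.0   # assumed mean radius of the earth (m); choose a value suited to your area

def combined_factor(grid_scale_factor, ellipsoid_height):
    """Combined factor = projection (grid) scale factor x elevation factor."""
    elevation_factor = R_EARTH / (R_EARTH + ellipsoid_height)
    return grid_scale_factor * elevation_factor

def grid_distance(ground_distance, grid_scale_factor, ellipsoid_height):
    """Reduce a horizontal ground distance to a grid distance."""
    return ground_distance * combined_factor(grid_scale_factor, ellipsoid_height)
```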
That two-pass hassle is the reason I reduce all observed EDM distances and zenith angles to mark-to-mark values. That is what is used in the ECEF world. If you use horizontal distances, the question is: at what height? Standpoint elevation? Forepoint elevation? Somewhere in between? Is the geoid separation being applied? How about the radius of curvature? Similarly, slope distances are tied to the height above the datum and the HI and HT. A mark-to-mark value is unambiguous. But do any cogo packages accept mark-to-mark distances?
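For illustration, a crude sketch of that slope-to-mark-to-mark reduction (flat-earth only; it ignores curvature, refraction, and deflection of the vertical, which a rigorous reduction would not; the names are mine):

```python
import math

def mark_to_mark(slope_dist, zenith_deg, hi, ht):
    """Reduce an instrument-to-target slope distance to a mark-to-mark chord.
    slope_dist : measured slope distance (m)
    zenith_deg : zenith angle at the instrument (decimal degrees)
    hi, ht     : heights of instrument and target above their marks (m)
    Flat-earth approximation, for illustration only."""
    z = math.radians(zenith_deg)
    horiz = slope_dist * math.sin(z)      # horizontal component at instrument height
    rise  = slope_dist * math.cos(z)      # vertical rise from instrument to target
    dv = hi + rise - ht                   # vertical difference, mark to mark
    return math.hypot(horiz, dv)
```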
Well said, but details still matter. BURKORD(TM) is a 3-D COGO package that accepts mark-to-mark distances and performs error propagation. However, the current version has no least squares adjustment option. If time, talent, and money were no object to me, it would have.
So why can't the RTK processor that produced the data use the data for uncertainty? What Kent seems to imply is that the RTK manufacturers lie about the results and StarNet doesn't. If I can process the one observation RTK data later for uncertainty values (produced by the RTK software), why can't the field software do it? I mean why would one data cranker produce any better results than another? If what Kent says is true, why couldn't the StarNet algorithm be installed in RTK field software (maybe the calculations already are)?
> So why can't the RTK processor that produced the data use the data for uncertainty? What Kent seems to imply is that the RTK manufacturers lie about the results and StarNet doesn't.
Actually, what Star*Net and (one would think) any other decent least squares survey adjustment/error analysis program does is allow the input of realistic standard errors for observations. In the case of GPS vectors, that means inputting a factor to be applied to the processor estimates of the covariances. That factor is derived from experience with overdetermined GPS networks surveyed using similar methods and procedures.
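In code terms the idea is almost embarrassingly simple (a sketch assuming numpy; the factor and the standard errors shown are made-up examples, not recommendations):

```python
import numpy as np

def scale_vector_covariance(cov_xyz, optimism_factor):
    """Apply an empirically derived factor to a processor-reported 3x3 covariance
    for a GPS vector.  Standard errors scale by the factor, so the covariance
    scales by its square."""
    return np.asarray(cov_xyz, dtype=float) * optimism_factor ** 2

# Example: processor claims 3 mm / 3 mm / 8 mm (1-sigma, shown uncorrelated for brevity);
# experience with overdetermined networks says those are about 3x too optimistic.
cov = np.diag([0.003**2, 0.003**2, 0.008**2])
print(scale_vector_covariance(cov, 3.0))
```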
> If I can process the one observation RTK data later for uncertainty values (produced by the RTK software), why can't the field software do it?
Probably because your RTK controller software doesn't support correcting the processor estimates by an optimism factor.
> I mean why would one data cranker produce any better results than another?
The answer is that Star*Net's approach allows reality to intrude.
So if they allow the input of a factor to correct the estimates, you'd be good with it? Should it be a Kent McM factor or should we each be allowed our own? Is it a fixed number or a variable?
> So if they allow the input of a factor to correct the estimates, you'd be good with it? Should it be a Kent McM factor or should we each be allowed our own?
That would fix the unrealistic values of uncertainty that the Option (d) RTK users (the folks who just look at the RTK controller estimates of uncertainty) are relying upon. But why not just export the RTK vectors into a least squares adjustment/error analysis program and take it all from there? The advantage to that approach is you get to verify the optimism correction factor on every project that you survey (as I do). It's a solution that one can implement tomorrow if his RTK rig will export vectors in some usable format such as g-file.
Adjusting RTK vectors weighted with realistic uncertainties is just a much more powerful approach than settling for coordinates with better uncertainty estimates than the RTK controller provided.
You should do it!
> You should do it!
I do it on virtually every survey, Leon, but with PPK vectors, not RTK.
I meant you should get a new GNSS RTK and enjoy the benefits. You'd have the uncertainties under control!
> I meant you should get a new GNSS RTK and enjoy the benefits. You'd have the uncertainties under control!
LOL! For what I do, RTK would be a step down. When you figure in the time wasted with the multiple occupations just to be sure that you don't have a blundered RTK solution (not to mention shuttling the base around to maintain radio contact with the rover on a large project), you'd be better off with PPK solutions.
That doesn't mean that I'm not interested in reading about the performance of the less expensive RTK systems and, in particular, which will export vectors to Star*Net.
> For what I do
That's sort of the key, isn't it? Everybody doesn't do the same thing or need the same tools or, heaven forbid, have the same uncertainty tolerances.
> Everybody doesn't do the same thing or need the same tools or, heaven forbid, have the same uncertainty tolerances.
Yes, but any professional surveyor DOES need to have realistic estimates of the uncertainties in his or her work. The real question is how to get those estimates.
Many of the posters who suggest making several RTK occupations of a control point or boundary marker just to make sure it isn't blundered still aren't able to quantify the uncertainties, and so are hobbled when those positions are combined with conventional measurements, as will happen. The answer, of course, is simple enough if one exports the vectors to a least squares adjustment and gives the weights assigned to the vectors a reality check.
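That reality check is nothing exotic; it boils down to looking at the variance of unit weight that falls out of the adjustment. A bare-bones sketch of the statistic (my own code assuming numpy, not Star*Net's internals):

```python
import numpy as np

def reference_variance(residuals, weight_matrix, degrees_of_freedom):
    """A posteriori variance of unit weight: v' P v / degrees of freedom.
    A value much greater than 1 suggests the a priori standard errors
    (e.g. the RTK processor estimates) were too optimistic and should be
    scaled up by roughly the square root of this value."""
    v = np.asarray(residuals, dtype=float)
    P = np.asarray(weight_matrix, dtype=float)   # weights = inverse of the a priori covariance
    return float(v @ P @ v) / degrees_of_freedom
```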
When Texas outlaws all RTK due to this issue, let me know. I might want to get my stuff on eBay before it loses all its value.
Anyway, I'm still planning on doing my little RTK test, maybe even expand it a little. There will probably be some interesting results, but I don't expect it will yield much outside of what my tolerances are for making RTK survey measurements.
I've made many repeat shots on a dime, almost always on a quarter, and I'd accept a dollar (unquantified uncertainty, sorry about that). I'm sure I've even accepted some stuff in less-than-good conditions that's at opposite corners of a dollar bill, but for the place, time, and project I made a professional decision that it was good enough. Heck, I'd rather know where a long-unfound section corner was in the woods to within 3 feet than not at all. Really, it's the original monument that counts in a boundary survey, not how exact the deed distance is.
For most of what I do I couldn't get the work if I didn't have RTK. I'm willing, as is almost every other surveyor I know, to live with the little slop in GPS RTK measurements, quantified or not.
For those that just gotta know whether it's a dime or a quarter, go ahead and massage those G files to death. For all the extra effort you'll feel good and be tired enough from the extra work to sleep at night. If it's actually required by contract and specs, just make sure you bill for it.
> For those that just gotta know whether it's a dime or a quarter, go ahead and massage those G files to death. For all the extra effort you'll feel good and be tired enough from the extra work to sleep at night.
I get the idea that you're unfamiliar with least squares adjustments of GPS vectors. The work of adjusting GPS vectors is quite minimal, particularly when you consider the improvement in results that is possible. There really isn't any good reason not to do it.
This is why I suggested posting any RTK vectors in either Trimble Geomatics Office or g-file format (the latter with the std. error and covariance "D" record instead of the "E" record). It would demonstrate how easy it is for any Star*Net user who reads this message board to adjust the whole series of repeated vectors and check the processor-estimated uncertainties. You'll want to post the latitude, longitude, and ellipsoid height of the station occupied by the base receiver and might mention the zone of the Utah SPCS in which your project falls, just to save lookup time. All of the data should make a longish, but manageable post to this message board.
I've done adjustments; it's built into TBC and I had it in TGO. I don't have TGO anymore; my XP laptop died a couple of weeks after I migrated everything to a new Win 7 machine. I've still got it backed up on a storage drive, but I'm not going to try to reload it.
When you adjust radial RTK shots from a base (just hit the button in TBC), it processes the vectors. If you tune up the base coordinate and readjust or reprocess, all the end points of the vectors update. If you have static sessions between control points, you can process them too. Of course, you need to make all the settings for how you do adjustments. I don't Merge all my shots to a point at download; I do it later. One thing I've noticed is that when you merge two shots (adjust) it just doesn't pop to the average. So it must be using what's being referred to as the G file (NGS spec) info to combine the vectors. Or maybe it's that ECEF thing you joke about: merge two 3D points and the resultant horizontal location isn't the average of the horizontals in a 2D plane. Anyway, it's amusing.
As far as using one G file to check a shot for a blunder I'm not buying into that one. If the shot was taken under multipath conditions, I don't think the G file info is going to tell you that, not without some redundant shots to compare. A single G file processed for quantifying uncertainties? Sure, if the data are clean; with multipath you've just got some trusted uncertainty for a point in the wrong place.
As far as my little test, I can post some here, but it would probably be best to just make all the files available for download online: DC files, RINEX files, observation files, PDF files, adjustment results, TBC reports, images. I can do that.
I can't do the StarNet though; that's for you, Kent.
> One thing I've noticed is that when you merge two shots (adjust) it just doesn't pop to the average. So it must be using what's being referred to as the G file (NGS spec) info to combine the vectors.
The uncertainties in the vectors will vary from session to session. GPS is like that. If the adjustment is actually using the uncertainty estimates to weight the mean, then it's unlikely that the weighted mean will be a simple arithmetic mean.
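For anyone wondering why the merged position doesn't land on the simple average, the weighted-mean arithmetic in miniature looks something like this (a sketch assuming numpy; this is the general formula, not TBC's actual code):

```python
import numpy as np

def weighted_mean(x1, cov1, x2, cov2):
    """Combine two 3-D position estimates by inverse-covariance weighting.
    Returns the combined position and its covariance.  Only when the two
    covariances are equal does this collapse to the simple arithmetic mean."""
    w1 = np.linalg.inv(np.asarray(cov1, dtype=float))
    w2 = np.linalg.inv(np.asarray(cov2, dtype=float))
    cov = np.linalg.inv(w1 + w2)
    x = cov @ (w1 @ np.asarray(x1, dtype=float) + w2 @ np.asarray(x2, dtype=float))
    return x, cov
```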
> As far as using one G file to check a shot for a blunder I'm not buying into that one.
I'd be interested to know where you got the idea that could be done. The purpose of importing GPS vectors is to check their quality when they are combined with other measurements between points connected either directly or indirectly by GPS vectors.
I'll post another example using the traverse from the project that I'm working on to (try to) make that point clear.
> As far as my little test, I can post some here, but it would probably be best to just make all the files available for download online: DC files, RINEX files, observation files, PDF files, adjustment results, TBC reports, images. I can do that.
No, all that is mostly clutter. The essential elements of the RTK vectors are simply the DX, DY, and DZ values and their standard errors and correlations. That is what the few ASCII lines of the g-file for each vector present, and it is all you really need to provide (aside from the position of the point occupied by the base).
If something turns out to be wrong with the vectors, yes, there may be more information needed, but simply adjusting the whole group of vectors between the same two points - and in the process figuring out how much the standard errors need to be scaled for realism - won't require that. This is a very simple exercise; let's keep it that way.
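To make "standard errors and correlations" concrete, this is essentially all each vector record has to carry (a sketch; the function and argument names are mine, not the NGS record layout):

```python
import numpy as np

def vector_covariance(sx, sy, sz, rxy, rxz, ryz):
    """Build the 3x3 covariance of a GPS vector (DX, DY, DZ) from its standard
    errors (sx, sy, sz) and correlation coefficients (rxy, rxz, ryz)."""
    return np.array([
        [sx * sx,       rxy * sx * sy, rxz * sx * sz],
        [rxy * sx * sy, sy * sy,       ryz * sy * sz],
        [rxz * sx * sz, ryz * sy * sz, sz * sz],
    ])
```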
> DX, DY, and DZ values
In TBC these are reported in ECEF. Can StarNet handle that?
A pure GPS vector is DX, DY, and DZ in ECEF. That's how the SVs (GPS/GNSS) operate. If you want Lat, Long, Ht or some projected grid, that's derived from the ECEF position.
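And getting something local out of those ECEF differences is just a rotation. A sketch of the standard ECEF-delta to north/east/up conversion at the base (my own code, for illustration only):

```python
import math

def ecef_delta_to_neu(dx, dy, dz, lat_deg, lon_deg):
    """Rotate an ECEF baseline (DX, DY, DZ) into local north/east/up components
    at a point with the given geodetic latitude and longitude (degrees)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    north = (-math.sin(lat) * math.cos(lon) * dx
             - math.sin(lat) * math.sin(lon) * dy
             + math.cos(lat) * dz)
    east  = -math.sin(lon) * dx + math.cos(lon) * dy
    up    = ( math.cos(lat) * math.cos(lon) * dx
             + math.cos(lat) * math.sin(lon) * dy
             + math.sin(lat) * dz)
    return north, east, up
```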
> DX, DY, and DZ values
>
> In TBC these are reported in ECEF. Can StarNet handle that?
DX, DY, and DZ are always expressed as ECEF coordinate differences. I'm unaware of any other convention for GPS vectors.