
Combining GPS and Conventional Measurements

(@andy-nold)
Posts: 2016
Topic starter
 

I am evaluating software and trying to make some decisions about our data collecting and processing policies here at the company. With the SurvNet software that we have, I am ready to jump into combining GPS and conventional observations in a least squares adjustment, which is new to me. I've been doing some browsing on the subject here in the archives. Can anyone think of a particularly educational string of posts about it, here or perhaps on the old RPLS.com? I've been in a rut with the non-survey work I was doing at the oil company and I need to dust off the cobwebs.

 
Posted : July 19, 2013 8:32 am
(@paul-in-pa)
Posts: 6044
Registered
 

"Road Trip"" And Buy Kent Some Beer

Kent is considering a new EDM, but I would not know if he has checked out any manufacturer's specific LS programs.

Deral may be too far removed from the daily grind.

I think SurvNet is still top dog, but I have not used any of the instrument makers' proprietary programs, which have been improving.

I am not a fan of the proprietary lockout of other systems that is going on.

Paul in PA

 
Posted : July 19, 2013 9:53 am
(@kent-mcmillan)
Posts: 11419
 

"Road Trip"" And Buy Kent Some Beer

> Kent is considering a new EDM, but I would not know if he has checked out any manufacturer's specific LS programs.

I don't think that Kent would ever consider using some manufacturer's proprietary survey adjustment software that potentially locks out data from other sources. That's the beauty of Star*Net. You can have it all under one roof, everything from heritage data out of a paper field book to whatever flavor-du-jour system you're using.

If you find that you're having a hard time figuring out how to adjust GPS and conventional observations in combination, it's probably the software that's the problem. The adjustment of various types of observations together shouldn't be much different than adjusting them separately.

 
Posted : July 19, 2013 10:14 am
(@andy-nold)
Posts: 2016
Topic starter
 

"Road Trip"" And Buy Kent Some Beer

That reminds me... Every once in a while I see some good discussions on here and I will actually save them for future reference. I will need to see whether that isn't one of the discussions I already have on the hard drive.

Kent has been a valuable resource over the years for advice and has certainly shared his knowledge with me when asked.

 
Posted : July 19, 2013 10:14 am
(@andy-nold)
Posts: 2016
Topic starter
 

"Road Trip"" And Buy Kent Some Beer

I've just never done it. Also thinking about how to structure the traverse to take full advantage of combining.

 
Posted : July 19, 2013 10:15 am
(@kent-mcmillan)
Posts: 11419
 

"Road Trip"" And Buy Kent Some Beer

> I've just never done it. Also thinking about how to structure the traverse to take full advantage of combining.

It will depend upon the project. The ideal situation is to get GPS vectors to all points. Centimeter-level solutions are usually fine. When the conventional measurements between them are added, the whole works tightens up since the GPS vectors control overall scale and orientation and the conventional measurements improve local uncertainties significantly.
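
To make that concrete, here is a toy one-dimensional sketch in plain numpy, with made-up coordinates and standard errors rather than anyone's actual data, showing how a weighted least squares solve lets a tighter conventional distance pull the relative positions together while the GPS observations keep the absolute positions anchored:

```python
# Toy 1-D illustration (not any vendor's algorithm): two points positioned
# by GPS (assumed sigma ~10 mm each) plus a conventional distance between
# them (assumed sigma ~2 mm), combined in one weighted least squares solve.
import numpy as np

# Observations: x_A, x_B from GPS; d = x_B - x_A from a total station.
l = np.array([100.000, 600.020, 500.005])   # observed values (m), made up
sigma = np.array([0.010, 0.010, 0.002])     # assumed standard errors (m)

# Design matrix for the unknowns [x_A, x_B]
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, 1.0]])

P = np.diag(1.0 / sigma**2)                 # weights = 1/sigma^2
N = A.T @ P @ A                             # normal equations
x = np.linalg.solve(N, A.T @ P @ l)         # adjusted coordinates

print("adjusted x_A, x_B:", x)
print("adjusted distance:", x[1] - x[0])    # pulled toward the 2 mm obs
```

The adjusted distance lands very close to the 2 mm taped value while the adjusted coordinates stay within the GPS uncertainties, which is the "tightening up" in miniature.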

 
Posted : July 19, 2013 10:21 am
(@tom-wilson)
Posts: 431
Customer
 

"Road Trip"" And Buy Kent Some Beer

Check out the Carlson web site; they have two webinars on their LSA software, and one covers mixed data. Let us know how it all turns out.

T.W.

 
Posted : July 19, 2013 10:31 am
(@dan-dunn)
Posts: 366
Customer
 

Either Star*Net or Carlson SurvNet will do what you wish to do. I have used both Star*Net (pre-MicroSurvey) and SurvNet (current version), and both are good, solid programs. If you are using Carlson Survey, I would choose SurvNet since you already have it. If you are using some other software, then download the demos and try them both.

I have found that when combining GPS and conventional data, the most important thing to understand is the use of projections, be it State Plane, LDP or any other projection.

 
Posted : July 19, 2013 12:46 pm
(@kent-mcmillan)
Posts: 11419
 

> I have found that when combining GPS and conventional data, the most important thing to understand is the use of projections, be it State Plane, LDP or any other projection.

Beyond the need for projections when GPS vectors are adjusted, probably the most important aspect of adjusting conventional and GPS data is getting the various classes of observations properly weighted. Ideally, the GPS can be adjusted separately and has sufficient redundancy that the weighting scheme for the GPS vectors can be validated. Failing that, the surveyor should have sufficient experience on other projects with the same methods and equipment to know that weights assigned are realistic.
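
For reference, the usual way that validation is expressed is a chi-square test on the reference variance (the standard error of unit weight, squared) after adjustment. A minimal sketch with made-up residuals, assumed sigmas, and an assumed redundancy of 3:

```python
# Hedged sketch of the standard weight-validation check: after adjustment,
# the reference variance v'Pv/r should pass a chi-square test at, say, 95%.
# All values below are made up for illustration.
import numpy as np
from scipy.stats import chi2

v = np.array([0.004, -0.006, 0.003, 0.007, -0.002])    # residuals (m)
sigma = np.array([0.005, 0.005, 0.005, 0.005, 0.005])  # assigned sigmas (m)
r = 3                                                  # redundancy (dof), assumed

s0_sq = np.sum((v / sigma)**2) / r                     # reference variance
lo = chi2.ppf(0.025, r) / r
hi = chi2.ppf(0.975, r) / r
print(f"s0^2 = {s0_sq:.2f}; pass if {lo:.2f} <= s0^2 <= {hi:.2f}")
# s0^2 >> 1 means the sigmas were too optimistic; << 1, too pessimistic.
```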

Doing a good job of weighting conventional measurements ideally means testing the instruments and methods to see what standard errors are realistic for the various measurement components: target centering, instrument centering, horizontal angles, zenith angles or height differences, and distances. It is a much better idea to test the easy stuff, such as target and instrument centering and the uncertainty of angle measurements, separately to determine expected values for their standard errors.
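
As a rough illustration of that kind of component budgeting, here is a small sketch using the generic small-angle propagation (assumed centering values, not anyone's test results) of how much a centering error alone contributes to a direction at various sight distances:

```python
# Sketch of component budgeting: direction uncertainty contributed by a
# lateral centering error at a given sight distance. Values are assumptions.
RHO = 206264.806  # arc-seconds per radian

def direction_sigma_arcsec(centering_m: float, distance_m: float) -> float:
    """Std. error (arc-sec) a lateral centering error adds to one direction."""
    return centering_m / distance_m * RHO

for d in (50.0, 150.0, 500.0):
    s = direction_sigma_arcsec(0.0015, d)   # 1.5 mm centering, assumed
    print(f'{d:6.0f} m sight: +/-{s:4.1f}" from centering alone')
# Short sights are dominated by centering, long sights by pointing/reading
# error, which is why each component is worth testing separately.
```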

 
Posted : July 19, 2013 12:58 pm
(@shawn-billings)
Posts: 2689
Registered
 

I've been mixing GPS and conventional data for several years with least squares. I've not found a better way to do it. Attempting compass rule adjustments between GPS pairs is a pain (that was my best method before I went to LSA).

We started with Columbus, which is really powerful but not very user friendly. For our work, I probably don't do more than three or four jobs with LSA, which means I have to relearn the user interface each time. Recently we got Carlson Survey, which includes SurvNet. I like it quite a bit. It uses RW5 natively, so no weird conversions, it imports post-processed vectors, and it is pretty easy and quick to get good results with.

I found that the beauty of LSA with GPS/conventional data is that you aren't really stuck with any rigid rules on network construction. Don't extend conventional traverses beyond the length of the GPS pair you start from (don't run 1,000 feet from a 500-foot pair). I like to use minimally constrained adjustments to test my data. Perhaps you run a route traverse with a GPS pair on each end and a GPS point in between. First, don't give the processor the pair on one end or the midpoint; then give it both pairs without the midpoint. It always amazes me how tightly that midpoint agrees. Then, for the final adjustment, I put in the midpoint.

All of our traverses are 3D now, and have been for a few years. Carlson doesn't require it, but I like the way the adjusted data works with verticals included.

Lastly, although it's not required for LSA, I have found that low distortion projections are the cat's meow for mixing GPS and conventional data. LSA software supports LDPs, making them easy to incorporate.

 
Posted : July 19, 2013 1:09 pm
(@tom-adams)
Posts: 3453
Registered
 

> Don't extend conventional traverses beyond the length of the GPS pair you start from (don't run 1,000 feet from a 500-foot pair). I like to use minimally constrained adjustments to test my data.

I'm not sure why that's harmful...?

 
Posted : July 19, 2013 1:13 pm
(@dan-dunn)
Posts: 366
Customer
 

> the surveyor should have sufficient experience on other projects with the same methods and equipment to know that weights assigned are realistic.

A very good statement for both conventional and GPS measurements.

 
Posted : July 19, 2013 1:21 pm
(@shawn-billings)
Posts: 2689
Registered
 

"harmful" might be an extreme take on the warning. every pair has error. keeping the traverse extension shorter than the pair keeps you from magnifying the error in the pair. if I have a cm error in the 500 foot vector, extending that 1000 feet I run the risk of having a 2cm error at the end of it. more a personal rule of thumb.

 
Posted : July 19, 2013 1:22 pm
(@kent-mcmillan)
Posts: 11419
 

> Last, although not required for LSA, I have found that low distortion projections are the cat's meow for mixing GPS and conventional data.

Except that if you're using professional-grade least squares adjustment software and you're working in 3D, you can mix conventional measurements and GPS vectors perfectly well in the standard State Plane projections with no loss of accuracy. Programs like Star*Net, for example, calculate the scale factor at all points in the network and use the scale factor specific to each line where a conventional distance measurement was made to reduce it to a grid distance. That is a rigorous approach, not a one-size-fits-all approximation.
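
The per-line reduction itself is just the generic combined factor, the grid scale factor times the elevation factor. A sketch with a hypothetical line (assumed scale factor, height, and mean earth radius), not Star*Net's internal code:

```python
# Sketch of per-line ground-to-grid reduction using the generic combined
# factor. All inputs are assumed values for one hypothetical line.
R = 6_372_000.0                 # mean earth radius (m), assumed

def combined_factor(k_grid: float, ellip_height_m: float) -> float:
    """Grid scale factor times elevation factor for one line."""
    return k_grid * R / (R + ellip_height_m)

cf = combined_factor(0.99987, 300.0)   # assumed line scale factor and height
ground = 1000.000                      # measured ground distance (m)
grid = ground * cf
print(f"combined factor {cf:.8f} -> grid distance {grid:.4f} m")
```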

If some least squares adjustment software is easier to use in a surface-ish projection, then something is wrong with the software, I'd have to suppose.

Star*Net also makes choosing a project average scale factor for reporting surface distances a simple matter, since it lists the scale factor for every point positioned on the survey and gives an average value.

 
Posted : July 19, 2013 2:00 pm
(@shawn-billings)
Posts: 2689
Registered
 

🙂

I bet even StarNet supports the voodoo art of custom projections. Or did they provide you with a special version that grays out those features?

>
> If some least squares adjustment software is easier to use in a surface-ish projection, then something is wrong with the software, I'd have to suppose.
>
> Star*Net makes choosing a project average scale factor to be used in reporting surface distances a simple matter too, since it lists the scale factor for every point positioned on the survey and gives an average value.

surface-ish projection ~ project average scale factor

potayto ~ potahto

 
Posted : July 19, 2013 2:11 pm
(@kent-mcmillan)
Posts: 11419
 

> I bet even StarNet supports the voodoo art of custom projections. Or did they provide you with a special version that grays out those features?

Sure, a person can gin up a custom projection to make life difficult, but standard projections are so much more useful for land surveying work that I've never felt the urge.

>
> surface-ish projection ~ project average scale factor
>
> potayto ~ potahto

Not really. If you keep your project in the Texas Coordinate System of 1983 and report the coordinates in that system, you've done a really good thing. Reporting the distances as surface distances is for the benefit of the folks who don't savvy projections at all. They can still do whatever it is they're going to do, but the most useful part of the information, the relation to NAD83, remains intact, free of the annoyance of twenty different projections on twenty tracts adjoining the one you're surveying.
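
As a sketch of that reporting convention, with an assumed project average combined factor and made-up coordinates: the coordinates stay in the zone, and only the distance labels get scaled to surface:

```python
# Sketch of reporting surface distances on grid coordinates. The coordinates
# and average combined factor below are assumptions for illustration.
import math

avg_cf = 0.99984                # project average combined factor, assumed

def surface_distance(n1, e1, n2, e2):
    """Grid inverse between two state plane points, scaled to surface."""
    grid = math.hypot(n2 - n1, e2 - e1)
    return grid / avg_cf

d = surface_distance(10_000.000, 3_000.000, 10_300.000, 3_400.000)
print(f"surface distance {d:.3f} m (grid {d * avg_cf:.3f} m)")
```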

 
Posted : July 19, 2013 2:37 pm
(@paul-in-pa)
Posts: 6044
Registered
 

Structuring A Traverse?

Assume a section with mild terrain and 24 traverse points near the perimeter, at 880 ft average separation. GPSing all 24 is extreme. Consider locking in your corners, not just the extreme corner point but the adjacent points as well. Just considering that corner, you have 3 positions, 3 vector distances, 2 distances, and an angle. I like that redundancy.

What does it take in GPS? What do you have more of: time, equipment, or money?

The most I have personally put on a project is 3 L1/L2 and 7 L1 receivers in one simultaneous occupation. The most points expanded over time was probably 8 L1/L2 and 8 L1. On a small project, 3 GPS receivers on 3-6 traverse points.

For good use of GPS time, consider 1 L1/L2 and 2 L1 receivers teamed up. Suppose that is what I had for the 24-point section traverse, that I had to use OPUS rather than OPUS-RS, and that I was alone. Assume TP 1, 7, 13 & 19 are the corners, with 2.5-hour L1/L2 occupations and 0.75-hour L1 occupations. Never plan for minimum time; you want observations that add value to your adjustment.

L1/L2 TP1, L1 TP24, L1 TP2; L1 TP7, L1 TP13
L1/L2 TP19, L1 TP7, L1 TP13; L1 TP18, L1 TP20
L1/L2 TP13, L1 TP12, L1 TP14; L1 TP6, L1 TP8

One day in the field: 3 OPUS solutions, 12 GPS positions.

With 2 L1/L2 and 1 L1 you can be more productive, but with 2 and 2 the benefits of redundancy kick in. Do not undervalue the much cheaper L1 receivers in a static mix.

With a single L1/L2 receiver you only get OPUS positions, not GPS vectors, meaning your positional error allowance is greater.

Sky visibility is not always equal along a traverse. If your GPS point has to be a side shot, improve it with observations to it from 2 traverse points, or occupy it and tie into a monument you shot from another TP.

With the abundance of cell phone towers today, you can probably get azimuth shots from at least half your traverse points, so consider that in your project design.

Benefits increase with a single CORS within 60 miles for L1/L2, as it is worthwhile to add it to your static solution. A CORS within 12 miles improves your L1 static solution. Enough CORS to allow OPUS-RS probably gives you both of the above: shorter L1/L2 observation times and better OPUS-RS precision.

Paul in PA

 
Posted : July 19, 2013 2:48 pm
(@loyal)
Posts: 3735
Registered
 

"but the most useful part of the information, the relation to NAD83 remains intact"

As it does with ANY NAD83 LDP as well!

If you haven't figured that out yet, then you haven't been paying attention these last 15 years or so.

o.O
Loyal

 
Posted : July 19, 2013 3:03 pm
(@dave-karoly)
Posts: 12001
 

I do it on almost every project in StarNet Pro.

In the project settings, under the GPS tab, tell it to multiply the error estimates of the vectors, typically by 5. The GPS statistics from the processing software are too optimistic, especially compared to total station observations.
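
In matrix terms (a sketch, not Star*Net's internals), multiplying the error estimates by 5 is the same as scaling each vector's covariance matrix by 5 squared, so the correlations are preserved while the sigmas grow:

```python
# Sketch of inflating GPS vector error estimates by a factor: scaling
# sigmas by f scales the covariance matrix by f**2. The 3x3 covariance
# below is made up, as a baseline processor might report for one vector.
import numpy as np

factor = 5.0
C = np.array([[2.5e-6, 1.0e-7, 5.0e-8],
              [1.0e-7, 2.0e-6, 8.0e-8],
              [5.0e-8, 8.0e-8, 6.0e-6]])   # m^2, assumed

C_scaled = factor**2 * C
print("sigmas before:", np.sqrt(np.diag(C)))
print("sigmas after: ", np.sqrt(np.diag(C_scaled)))   # exactly 5x larger
```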

 
Posted : July 19, 2013 3:05 pm
(@kent-mcmillan)
Posts: 11419
 

> As it does with ANY NAD83 LDP as well!

Except you forgot the important part: the metadata. The one-off custom projection carries the intrinsic overhead that, on every item you generate for a project, maps and metes and bounds descriptions alike, you have to give all the correct metadata describing the transformation parameters. If any of them are screwed up, someone in the future has won the chance to spend perhaps half a day trying to unscramble the mess. Naturally, the magnitude of the coordinates gives no clue. And even if the metadata is actually complete and legible, the next user has to waste time converting your little projection into one that is useful to them. Hence standard projections.

The power of standard projections remains that you can blow off the goofy recitals of metadata on everything, and the coordinate values themselves tell you something about the projection.

 
Posted : July 19, 2013 3:14 pm