
Collecting & Processing Static data for DGPS

(@bc-surveyor)
Posts: 226
Registered
Topic starter
 

I recently switched sites and am tasked with helping dial in their control network (a 5 km long corridor project that will last a few years and has numerous bridges and viaducts). In the past, to set up a control network, I would run static on a point at the start and at the end of the site and do a common-point closed traverse between them (start-end-start). I would then process the static observations to get a PPP solution and use that data along with the conventional data in a least squares adjustment program (StarNet), and voila.

They're doing things a bit differently. They have a point at the start that they're holding, they set up a base on that point, and they run 3-4 hours of static with a rover on all their main control hubs throughout the site while the base collects static data, and then they pretty much rely only on that static data to get coordinates for their control. The data I've been given is just the .T02 files of those static observations (rover and base). They're importing those T02 files into TBC, manually entering point numbers, heights, etc., processing baselines, and then using TBC to adjust the coordinates to get them into a ground system.

If I were doing this, I would set up the base, use my rover to take observations in RTK mode, bring my job into TBC, process my baselines, and export that as an .asc, which I would then bring into my least squares software (StarNet).

Is there a benefit to processing the T02 files of the rover observations vs. running the job file in TBC?

 
Posted : April 8, 2022 4:23 pm
(@rover83)
Posts: 2346
Registered
 

Well, a several-hour static baseline will outperform a couple-minute RTK observation. That's not really in doubt. Plus you can really fine-tune static solutions, weed out cycle slips, clip time segments, wait for better ephemerides, etc.

But static works best when you have multiple receivers going at the same time, getting multiple baselines simultaneously and tying together the entire control network. Couple that with independent redundant sessions, and you'll be hard-pressed to find a better method for large-scale control networks.

But if all they have are two receivers and they are only running a single session for each "rover" point, they might as well do RTK, because even though the individual static baselines might be tighter than the RTK vectors, they're still only tied back to that "base" point. No connection to other points in the network, which makes for a weak network.

With only two receivers and a total station, the best bang for your buck is going to be that conventional traverse you talked about - but rather than just running PPP at either end, set up a base in the middle of the project and RTK every other or every third control point through the corridor. Then use the RTK vectors plus the conventional data in a least-squares adjustment. It's a blend of both methods that (assuming good procedures) will end up being tighter than both of them.

 
Posted : April 8, 2022 5:50 pm
(@michigan-left)
Posts: 384
Registered
 

"(5km long corridor project that will last a few years and has numerous bridges and viaducts)"

"main control hubs"


I would definitely consider setting good benchmarks and running digital levels for bridge/viaduct work...

 
Posted : April 8, 2022 7:07 pm
(@bc-surveyor)
Posts: 226
Registered
Topic starter
 
Posted by: @rover83

Well, a several-hour static baseline will outperform a couple-minute RTK observation. That's not really in doubt. Plus you can really fine-tune static solutions, weed out cycle slips, clip time segments, wait for better ephemerides, etc.

But static works best when you have multiple receivers going at the same time, getting multiple baselines simultaneously and tying together the entire control network. Couple that with independent redundant sessions, and you'll be hard-pressed to find a better method for large-scale control networks.

But if all they have are two receivers and they are only running a single session for each "rover" point, they might as well do RTK, because even though the individual static baselines might be tighter than the RTK vectors, they're still only tied back to that "base" point. No connection to other points in the network, which makes for a weak network.

With only two receivers and a total station, the best bang for your buck is going to be that conventional traverse you talked about - but rather than just running PPP at either end, set up a base in the middle of the project and RTK every other or every third control point through the corridor. Then use the RTK vectors plus the conventional data in a least-squares adjustment. It's a blend of both methods that (assuming good procedures) will end up being tighter than both of them.

Does static collect a different kind of observation versus "RTK"? For example, if I set up the rod and logged an hour-long observation, would that accuracy be any different than logging as "static" and processing the T02 file in TBC? Or does the static observation just give you the ability to edit the observations more with the T02 file?

Interesting point about running multiple baselines at once; I'd like to look into that more. Do you know of any literature online that dives into this?

Thanks for your advice and information; I will definitely be using your suggestions.


 
Posted : April 9, 2022 5:56 am
(@bc-surveyor)
Posts: 226
Registered
Topic starter
 
Posted by: @michigan-left

"(5km long corridor project that will last a few years and has numerous bridges and viaducts)"

"main control hubs"


I would definitely consider setting good benchmarks and running digital levels for bridge/viaduct work...

Good point, I forgot to mention we usually do that as the last step and use that data to hold our elevations.

 
Posted : April 9, 2022 5:57 am
(@rover83)
Posts: 2346
Registered
 
Posted by: @bc-surveyor

Does static collect a different kind of observation versus "RTK"? For example, if I set up the rod and logged an hour-long observation, would that accuracy be any different than logging as "static" and processing the T02 file in TBC? Or does the static observation just give you the ability to edit the observations more with the T02 file?

The observations are the same - the carrier phase data from satellites - but the processing is different. RTK relies upon specific algorithms optimized for solving position in real time, while static baseline processing uses different methods (and often processes the data both forward and backward, perhaps with different filtering mechanisms applied). The specifics of those methods are usually proprietary to the equipment/software manufacturer.

Yes, there are a lot more options for editing static data. Cutting out certain time periods, clipping out the first several minutes (where the solution is "bouncing around" more and has not yet converged), turning SVs on/off, changing elevation mask, turning certain frequencies on or off, etc.
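To make those editing options concrete, here's a rough sketch (Python, made-up epoch records, not any particular vendor's format) of the kind of filtering being described, i.e. raising the elevation mask and clipping the first several minutes before reprocessing:

```python
from datetime import datetime, timedelta

# Illustrative only: each record is (epoch, satellite, elevation_deg, phase_obs).
# Real static editing happens inside TBC/Infinity etc.; this just shows the idea.
session_start = datetime(2022, 4, 8, 14, 0, 0)
observations = [
    (session_start + timedelta(seconds=30 * i), "G05", 12 + 0.05 * i, 112233.1 + i)
    for i in range(400)
]

ELEV_MASK_DEG = 15.0   # raise the elevation mask
CLIP_MINUTES = 10      # drop the first minutes while the solution is still converging

cutoff = session_start + timedelta(minutes=CLIP_MINUTES)
kept = [obs for obs in observations if obs[0] >= cutoff and obs[2] >= ELEV_MASK_DEG]
print(len(observations), "->", len(kept), "observations after editing")
```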

Also, satellite orbits are irregular, so the navigation message being transmitted at the time of observation is only an approximation of the SV position. Ground tracking stations watch the actual path of the SV and crunch the numbers in order to produce much more accurate ephemerides or "orbits". Processing with those files will tighten up the network as well.

Because RTK is real-time processing, it can only work with the approximate positions contained in the navigation message and will always be limited compared to static solutions.

Posted by: @bc-surveyor

Interesting point of running multiple baselines at once, I'd like to look into this more. Do you know of any literature online that dives into this?

The original guidelines for network design, in the USA at least, are the 1984 Geodetic Control Guidelines.

But there are a LOT of guides and manuals online that discuss static network design. ODOT has a good one, so does Caltrans.

Ultimately network design and observation procedures are based upon the project requirements, so there's no hard answer as far as what is the "best" method for every single survey.

 
Posted : April 9, 2022 8:08 am
(@michigan-left)
Posts: 384
Registered
 

@bc-surveyor

Since you seem comfortable post processing vectors, we've used the following method in the past when we only had two (2) GNSS receivers to work with:

Set up the base, start an RTK and Data Logging survey style (base station logs static data).

Shoot all of your control points once with RTK while logging data at the rover also (say, 3-5 minute sessions).

Post process the PPK vectors (if the data is good, the RTK and PPK vectors should be nearly identical; see the comparison sketch after these steps).

Bring in 1 or 2 of the closest CORS and post process those stations against the base/rover data.

(Be careful here. We've had mixed results if you have one CORS really close, and one or more further away. We also pick a CORS station that strengthens network/vector geometry along an X mile corridor)

Add your traverse.

Add your levels.

Sprinkle into TBC (StarNet), bake for 30 minutes, commence bridge building.
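As a rough check of that "nearly identical" step, here's a minimal sketch (Python, with hypothetical vector components and a hypothetical 2 cm tolerance) comparing the RTK and PPK solutions point by point:

```python
# Illustrative only: compare RTK vs post-processed (PPK) vector components
# for the same control points. Values are hypothetical dN, dE, dU in metres.
rtk = {"CP101": (102.431, -48.226, 1.204), "CP102": (355.918, 120.447, -0.886)}
ppk = {"CP101": (102.433, -48.224, 1.209), "CP102": (355.915, 120.449, -0.879)}

TOL = 0.02  # 2 cm flag threshold; pick per project specs

for pt in rtk:
    deltas = [abs(a - b) for a, b in zip(rtk[pt], ppk[pt])]
    flag = "OK" if max(deltas) <= TOL else "CHECK"
    print(pt, [round(d, 3) for d in deltas], flag)
```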


Some may comment about the trivial/non-trivial (independent) baselines inherent in this method, but you can always turn off some of those extra degrees of freedom because you have a traverse and leveling you want to incorporate.

Just remember, if your jurisdiction has CORS, you have at least that many extra GNSS receivers you can use.

Another way would be to leapfrog the receivers down the corridor occupying adjacent control points.

That gives double occupation of the points, but might not be worth the extra time if you traverse too.


 
Posted : April 9, 2022 8:56 am
(@rover83)
Posts: 2346
Registered
 
Posted by: @michigan-left

Some may comment about the trivial/non-trivial (independent) baselines inherent in this method

Unless we're processing with a true "session" or multi-baseline processor such as PAGES or some of the older programs (I think TRIMVEC was one), there's no such thing as trivial baselines.

TBC, like most commercial software, uses pairwise or "baseline" processing, so if the operator starts turning off baselines, they're losing valuable data, and when it comes time to adjust, there will be no variance/covariance terms computed for point pairs where baselines were removed.

I've never seen anyone able to point out exactly which baselines are "trivial" and why when running TBC, Infinity, etc.

 
Posted : April 9, 2022 9:31 pm
(@bill93)
Posts: 9834
 

A trivial baseline is one that is completely determined by the others you are using and therefore adds no new information.

In the simplest case, if you measure at A, B, and C simultaneously, then say you use AB and BC. You can calculate AC from those and it adds no new information to the solution.

Adding AC to a least squares fit will give you incorrect estimates of the error because you have told the LS that it is a new measurement but it isn't.

You could use any 2 in this case. The other one is trivial.
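A minimal sketch of that dependence (Python, made-up vector components) just to show why a computed AC can't count as a new measurement:

```python
import numpy as np

# Illustrative only: three receivers at A, B, and C observe simultaneously.
# If the session yields vectors AB and BC, the "third" vector AC is just
# their sum, so it carries no independent information for an adjustment.
ab = np.array([1250.312, -840.221, 15.008])   # dX, dY, dZ in metres (made-up values)
bc = np.array([-310.450, 1920.077, -4.312])

ac_derived = ab + bc   # completely determined by the other two
print(ac_derived)      # adding this as a "measurement" adds zero new information
```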

 
Posted : April 10, 2022 5:05 am
(@jitterboogie)
Posts: 4279
Registered
 

Maybe call them redundant. That's how I was trained; "trivial" as a word seems to nullify them.

Redundant just says good but not necessary.

 
Posted : April 10, 2022 5:25 am
 Norm
(@norm)
Posts: 1291
Registered
 

Is there a benefit to processing the T02 files of the rover observations vs. running the job file in TBC?

One benefit of processing static vs. RTK is the precision of the receiver. The Trimble spec sheet for the R10 indicates the RTK error spec is about three times that of static.
 
Posted : April 10, 2022 7:50 am
(@michigan-left)
Posts: 384
Registered
 

@rover83

My recollection of measurement theory is that a "properly adjusted" least squares adjustment needs the following: properly weighted observations (error estimates across variance groups: vectors (RTK/PPK/static), angle sets (mean vs. individual obs.), levels, setup errors, least count of the gun, etc.), non-trivial measurements, and an overdetermined solution (a reasonable number of degrees of freedom for redundancy, but not so many that the statistics lose their meaning and the standard errors/standard deviations/2-sigma confidence interval values look way better than they actually are).

If I remember correctly, I believe trivial baselines are "extra" (not required to solve the full # of variables) and non-trivial are the minimum required to solve for the full # of variables between the number of points.

The following formula applies to baselines: n - 1 = number of non-trivial vectors per GNSS session, where n = number of GNSS receivers.

2 receivers (n) gives 1 non-trivial (required) vector to solve between point A and B. Which makes sense if A is a known point, and B is unknown. 2 receivers gives 1 baseline/vector between the 2 points.

3 receivers (n) minus 1 = 2 vectors to solve between the three points

And so on.

Any extra vector is extra (trivial, or redundant, per Jitterboogie).
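A quick sketch of that bookkeeping (Python, illustrative only): of all the receiver pairs available in one simultaneous session, only n - 1 are independent; the rest are the "extra" ones being called trivial here.

```python
from math import comb

def session_baselines(n_receivers):
    """For one simultaneous GNSS session with n receivers, return
    (total possible baselines, independent/non-trivial baselines, trivial baselines)."""
    total = comb(n_receivers, 2)      # every receiver pair yields a processable baseline
    independent = n_receivers - 1     # n - 1 are enough to position every point in the session
    return total, independent, total - independent

for n in (2, 3, 4, 5):
    print(n, session_baselines(n))
# 2 -> (1, 1, 0), 3 -> (3, 2, 1), 4 -> (6, 3, 3), 5 -> (10, 4, 6)
```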


The real damning procedure for skewing the LSA statistics is using all the individual total station observations if you run the traverse and wrap say 3-5 sets of angles at each station. Best practice would be to use the mean of the angle/distance results per setup. Any good LSA program should allow you to choose which type of measurements, but sometimes it depends on the data collection. Trimble Access has "Rounds", so it knows that you're collecting multiple sets of angles per setup, then generates the mean of the set of rounds. Not sure what other manufacturers do for this scenario. I guess you hand calc the mean at each setup and hand enter them?
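For what it's worth, here's a minimal sketch (Python, made-up directions) of reducing several sets to one mean direction per setup; a circular mean is used so sets that straddle 0°/360° still average correctly:

```python
import math

def mean_direction(angles_deg):
    """Circular mean of a set of horizontal directions, so values that straddle
    0/360 degrees (e.g. 359.9998 and 0.0004) average correctly."""
    x = sum(math.cos(math.radians(a)) for a in angles_deg)
    y = sum(math.sin(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(y, x)) % 360.0

# e.g. four wrapped sets to the same target from one setup (made-up values)
sets = [125.00042, 124.99967, 125.00011, 124.99989]
print(round(mean_direction(sets), 5))   # ~125.00002
```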

 
Posted : April 10, 2022 11:10 am
(@bill93)
Posts: 9834
 
Posted by: @michigan-left

overdetermined (a reasonable number of degrees of freedom for redundancy, but not so many that the statistics lose their meaning and the standard errors/standard deviations/2-sigma confidence interval values look way better than they actually are)

... wrap say 3-5 sets of angles at each station. Best practice would be to use the mean of the angle/distance results per setup.

Least squares programs give better and true statistics with more redundant input, so long as the measurements have independent errors (or you figure out how to tell the program about correlated measurements), and you give it realistic standard error values for the measurements and for centering.

In the case of wrapping angles, the standard error for multiple sets would be smaller than for single or a few sets. Give it the right standard error, and more sets are better.

To be strictly correct, you need to move your centering slightly and reset it, so that its error is independent between measurements.

 
Posted : April 10, 2022 12:36 pm
(@rover83)
Posts: 2346
Registered
 
Posted by: @michigan-left

If I remember correctly, I believe trivial baselines are "extra" (not required to solve the full # of variables) and non-trivial are the minimum required to solve for the full # of variables between the number of points.

"Trivial" is not really the same thing as redundant. True redundant vectors increase the degrees of freedom; trivial vectors are not actually processed vectors, but a mathematical closure computed from two other vectors.

If I download an hour's worth of data from three CORS, drop into TBC and process baselines, I will have three vectors, none of which uses the exact same data as the other two. If I run a loop closure report, I will not see a perfect closure, which is what I would get if one of those baselines were truly trivial.

Site conditions, SVs tracked, and very often observation times vary between all three stations. There's some correlation between the three vectors, but those single-baseline vectors are simply not equivalent to a session-processed solution - where only common data is taken into account and the VCV matrix is computed for all observations being processed simultaneously.

Like I mentioned upthread, I still have yet to see anyone point out a valid reason for discarding one of those three vectors in TBC, based on anything other than "some manual told me to delete one of them". They all have valid statistics computed by the baseline processor; on what basis do you discard one over the other two? The worst horizontal precisions? Worst vertical? Longest? Shortest? Those all might be valid reasons to discard something based upon project specifications, but none of those things indicate that a vector is just a linear combination of the other two.
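A small sketch of that loop-closure argument (Python, made-up ECEF deltas): three independently processed vectors around a triangle leave a small misclosure, whereas a truly trivial vector formed from the other two would close exactly:

```python
import numpy as np

# Illustrative only: three single-baseline vectors from one session,
# A->B, B->C, and C->A (ECEF deltas in metres, made-up values).
ab = np.array([ 1250.312,  -840.221,  15.008])
bc = np.array([ -310.450,  1920.077,  -4.312])
ca = np.array([ -939.858, -1079.853, -10.693])  # independently processed, not derived

misclosure = ab + bc + ca          # exactly zero only if one vector were a pure
print(misclosure)                  # linear combination of the other two ("trivial")
print(np.linalg.norm(misclosure))  # a few mm of misclosure here, as with real data
```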

 
Posted : April 10, 2022 3:43 pm
(@michigan-left)
Posts: 384
Registered
 

@bill93

Maybe we're saying the same thing, maybe we're not:

The OP wanted to "help dial in their control network". I think many would agree that LSA is the best tool to achieve that goal by incorporating GNSS, total station, and level data into a single network adjustment.

LSA programs give "better/truer" statistics by using more independent observations from different geometries to identify potential random errors for removal. These independent observation geometries are where "minimizing the sum of the squares of the residuals" gets all its power and strength from.

This is not the same as "redundant input" in the form of 50 direct and 50 reverse observations for the same set of measurement(s).

The whole point of wrapping sets is to improve the precision of the measurements to reliably and repeatedly converge around the most probable value of the truth (the mean) for that one set. Once a sufficiently precise (and hopefully accurate) mean is established, 3, 5, 10, or 50 more sets of angles/distances doesn't make that measurement any "better/truer" to the front end of your LSA for solving that set of variables calculated between the three points measured. (You could have the most refined and precise set of observations obtainable, but they could still be wrong, but you won't know that until you have an independent check.)

You are correct though, using more sets/measurements will tend to make the results look tighter. In fact, too many of those extra measurements (degrees of freedom) are actually adversely affecting your LSA because it will improperly weight them more than the value they are actually contributing to the adjustment versus an independent observation and geometry. Which is why the mean result of X number of angle/distance sets is appropriate for a proper network design/geometry. Better to use more independent observations from different geometries.

Maybe Rover83 had a better idea by pointing the OP to the literature and letting them take it from there.

 
Posted : April 10, 2022 4:38 pm
(@michigan-left)
Posts: 384
Registered
 

@rover83

Read my last post to Bill93. Replace the word "sets" with "vectors"; it's the same concept.

Yes, 3 GNSS receivers give 3 vectors between 3 points.

What is the minimum number of vectors needed to compute the coordinates for all 3 points during the observation session? Answer: 2.

Which 2? Answer: I don't know, and it depends.

Is it better to have three vectors? Answer: Maybe.

An extra vector is available from the session, but not required to solve for the position of the three points. The extra vector is trivial.

 
Posted : April 10, 2022 4:57 pm
(@michigan-left)
Posts: 384
Registered
 

@rover83

"Unless we're processing with a true "session" or multi-baseline processor such as PAGES or some of the older programs (I think TRIMVEC was one), there's no such thing as trivial baselines."

PAGES is not a multi-baseline processor. See next:

Files that are 2 to 48 hours in duration are processed using PAGES static software. Your coordinates are the average of three independent, single-baseline solutions, each computed by double-differenced carrier-phase measurements from one of three nearby CORSs.

The way PAGES works is discussed very well here: https://www.ngs.noaa.gov/OPUS/about.jsp#processing

and here: https://www.ngs.noaa.gov/GRD/GPS/DOC/pages/pages.html
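A minimal sketch of that "average of three single-baseline solutions" idea (Python, made-up coordinates), including the per-axis peak-to-peak spread that OPUS reports as its quality indicator:

```python
# Illustrative only: three single-baseline solutions for the same mark,
# one per nearby CORS (made-up ECEF coordinates in metres).
solutions = [
    (-2694045.412, -4293801.227, 3857914.881),
    (-2694045.418, -4293801.219, 3857914.875),
    (-2694045.409, -4293801.231, 3857914.884),
]

averaged = tuple(sum(c) / len(solutions) for c in zip(*solutions))
peak_to_peak = tuple(max(c) - min(c) for c in zip(*solutions))

print("mean XYZ:", [round(v, 3) for v in averaged])
print("peak-to-peak per axis (m):", [round(v, 3) for v in peak_to_peak])
```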


Your idea of a simultaneous "session" baseline processing arrangement between all observed receivers at the same time is incorrect. Differential GPS/GNSS positioning works because of software processing the single/double/triple differencing solutions of the observations to compute a single vector between any two receivers observing at the same time against any number of satellites. A vector is a magnitude and a direction between two points during a chunk of time.
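To put the differencing in concrete terms, here's a minimal sketch (Python, made-up carrier-phase values) of forming one double difference between two receivers and two satellites for a single epoch:

```python
# Illustrative only: phi[receiver][satellite] are carrier-phase observations
# in cycles for one common epoch (made-up values).
phi = {
    "A": {"G05": 112233.104, "G12": 98877.551},
    "B": {"G05": 112901.862, "G12": 99544.310},
}

def double_difference(phi, rec_a, rec_b, sat_j, sat_k):
    """DD = (phi_A^j - phi_B^j) - (phi_A^k - phi_B^k): receiver and satellite
    clock errors cancel, leaving geometry plus integer ambiguities."""
    sd_j = phi[rec_a][sat_j] - phi[rec_b][sat_j]   # single difference, satellite j
    sd_k = phi[rec_a][sat_k] - phi[rec_b][sat_k]   # single difference, satellite k
    return sd_j - sd_k

print(double_difference(phi, "A", "B", "G05", "G12"))
```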

 
Posted : April 10, 2022 5:31 pm
(@bill93)
Posts: 9834
 
Posted by: @michigan-left

too many of those extra measurements (degrees of freedom) are actually adversely affecting your LSA because it will improperly weight them more than the value they are actually contributing to the adjustment

You are right that remeasuring some vectors/angles/distances/coordinates too many times relative to the other measurements in the network will make the statistics overly optimistic. The effort needs to be spread around in your network to get realistic results.

However it does not matter whether you use the individual measurements or their average if you give them realistic standard errors. The std err for the average should usually be the std err of each divided by sqrt(N).
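A minimal sketch of that rule (Python, made-up repeated observations): assign the mean a standard error equal to the single-observation standard error divided by sqrt(N):

```python
import math, statistics

# Illustrative only: N repeated observations of the same quantity
# (e.g. one wrapped angle set, values in decimal degrees, made up).
obs = [231.41674, 231.41669, 231.41677, 231.41671, 231.41673]

mean = statistics.fmean(obs)
s_single = statistics.stdev(obs)          # standard error of one observation
s_mean = s_single / math.sqrt(len(obs))   # standard error to assign to the mean in the LSA
print(round(mean, 5), round(s_single, 6), round(s_mean, 6))
```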

 
Posted : April 10, 2022 6:59 pm
(@geeoddmike)
Posts: 1556
Registered
 

@michigan-left

FWIW, and relying on my often faulty memory...

The Program for the Adjustment of GPS Ephemerides (PAGE) was first developed to support international efforts to improve satellite orbit data. It later became, as PAGE-NT, a tool for the session processing of GPS base lines.

The manual linked below was written in 2000 by the now Director of NGS, Julianna Blackwell, and the now Chief of the Geosciences Research Division.

The screen captures were taken from the NGS PAGE-NT manual available here:

http://geodesyattamucc.pbworks.com/w/file/fetch/115417693/PNT6MAN.PDF

The basic workings of the orbit and base line processing tools were adapted for use with the OPUS tool. As OPUS is designed for the computation of a single site wrt CORS sites, it does not work like PAGE-NT.

OPUS Rapid Static was developed separately.

The implementation in OPUS Projects is much like the implementation in PAGE-NT. See the screen capture of a presentation on the tool.

[Screen captures attached: one from the NGS PAGE-NT manual, one from an OPUS Projects presentation]
 
Posted : April 10, 2022 9:06 pm
(@michigan-left)
Posts: 384
Registered
 

@geeoddmike

"Step 6: If everything is OK in fixed.rms then this step simply combines all the single-baseline, fixed-integer solutions into one multi-baseline solution. The pages.snx file will be created and the gfile.inp file will be created. Check the final combined.sum file and look at the final post-fit plots for each baseline. Step 6 is usually an ion-free, partially-fixed, multi-baseline solution."

The process is still using any number of single baselines to compute individual vectors between any two stations (OPUS uses x # of CORS, but only reports the 3 most coherent). Step 6 then combines all those single baseline vectors.

How does it do that? It doesn't say. Is it an average? Is it an LSA?

If you read the OPUS link I attached (diagram with the coins), it discusses what's happening and how it is merging those single baseline vectors. You can easily see this in production data if you read through an OPUS extended output file.

Calling a bunch of single baseline vectors "merged" into a single solution a "multi-baseline solution" is a bit sloppy and misleading. The software is not simultaneously solving all of the vector equations from each station at once.

Arguably, here's a bunch of single baseline vectors to an unknown point, and here's an algorithm we used to ferret out the best ones.

Just like performing a least squares adjustment from 10 fixed CORS to your unknown base station. Albeit with a little more horsepower, and a little more rigor.

 
Posted : April 11, 2022 4:35 am