Testing RTK and Post Processing in Canopy

(@shawn-billings)
Posts: 2689
Registered
Topic starter
 

About a week ago I was doing a test under canopy to determine three things:

  • The use of observation time to detect bad fixes in moderate canopy with RTK.
  • The effectiveness of post processing with relatively short observation time in moderate canopy.
  • The battery life of the spread spectrum radio I was using at the base.

When I refer to moderate canopy, this was my test site (enjoy the snazzy Dating Game sounding music):
[MEDIA=youtube]WEzjmP-Rqi0[/MEDIA]

The base was about 195 feet away in an open environment. Corrections were broadcast at 5 Hz over FHSS. The test was designed to run as long as my base radio battery lasted, which was 11.5 hours. I had the rover set to collect data for at least 3 minutes with several engine resets throughout the observation. My theory, when I started this test, was that a bad fix cannot last more than about 1.5 minutes. Most last only a few seconds, but under the right (or rather wrong) conditions, it is possible to see a bad fix last longer. I've not seen one last longer than 2 minutes, though. Over the course of 11.5 hours, I collected 187 points using the auto-accept and auto-restart feature in the receiver, which let me leave the receiver unattended for this test. While the receiver was crunching out the RTK observations, it was also collecting raw GNSS data, as was the base receiver. The average time on site for each point was about 3.5 minutes.

The 187 RTK points produced an average position of:
N 6835834.6382
E 3100880.3018
U 396.1117
(US Survey Feet)

Later, I sent the base raw data file and the rover raw data files to Javad's new DPOS server, which is capable of not only processing base receiver files with CORS data, but also processing the vectors from base to rover and from rover to CORS. For this test, I omitted processing to CORS as my occupation times were not long enough to produce good results.

The 187 post processed points (referred to as Base Processed or BP) produced an average position of:
N 6835834.6553
E 3100880.2777
U 396.1177
(US Survey Feet)

The difference between the average BP position and the average RTK position (BP minus RTK):
dN +0.017
dE -0.024
dU +0.006

The next question would be to know how the distribution of the 187 points was around the average...
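For anyone who wants to run the same comparison on their own data, here is a rough sketch of the averaging and differencing in Python. The coordinates below are made-up placeholders, not the actual 187 observations from this test:

```python
# Sketch: average repeated observations of one point and difference two
# solution sets. Placeholder (N, E, U) values in feet, NOT the test data.
def mean(vals):
    return sum(vals) / len(vals)

def average_position(points):
    """points: list of (N, E, U) tuples; returns the component-wise mean."""
    ns, es, us = zip(*points)
    return mean(ns), mean(es), mean(us)

rtk = [(100.012, 200.003, 50.010), (100.008, 199.997, 49.990)]
bp  = [(100.030, 199.980, 50.015), (100.026, 199.976, 50.005)]

rtk_avg = average_position(rtk)
bp_avg = average_position(bp)

# Differences reported as BP minus RTK, matching the dN/dE/dU convention above.
dN, dE, dU = (b - r for b, r in zip(bp_avg, rtk_avg))
# For these placeholder values: dN ~ +0.018, dE ~ -0.022, dU ~ +0.010
```

The same pattern scales to 187 points; only the input list grows.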

 
Posted : 31/03/2016 3:05 am
(@shawn-billings)
Posts: 2689
Registered
Topic starter
 

I used the cluster average tool in the receiver to automatically detect the points in the group and create a weighted average. I then used the statistics tool in the receiver to see how the points compared to the average. I repeated the process with the post processed results.

RTK Distribution from RTK average:
100% H 0.161 V 0.323
99% H 0.100 V 0.225
95% H 0.080 V 0.162
68% H 0.050 V 0.075
50% H 0.038 V 0.056

BP Distribution from BP average:
100% H 0.160 V 0.378
99% H 0.123 V 0.299
95% H 0.082 V 0.229
68% H 0.050 V 0.101
50% H 0.039 V 0.066

I compared the RTK points to the RTK average, and I compared the Base Processed points to the BP average. The 100% values reveal the worst of the 187 points relative to the average. Horizontally, the worst was 0.16 foot from the average for both RTK and BP, while RTK shows a slight edge vertically in this environment over Base Processing. None of the 187 points collected were bad fixes, whether RTK or post processed.
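Distribution tables like the ones above can be reproduced from raw residuals with something along these lines. This is an assumed method for illustration, not the receiver's actual statistics tool, and the residuals are made up:

```python
import math

# Sketch: given each point's residual from the cluster average, report the
# radius that contains a given percentage of the points.
def percentile_radius(residuals, pct):
    """Smallest residual magnitude such that pct percent of points fall within it."""
    ordered = sorted(residuals)
    k = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[k]

# Made-up horizontal residuals (feet) from a cluster average:
horiz = [0.01, 0.02, 0.03, 0.04, 0.05, 0.08, 0.10, 0.12, 0.14, 0.16]
print(percentile_radius(horiz, 100))  # worst point: 0.16
print(percentile_radius(horiz, 50))   # half the points fall within: 0.05
```

Run once for horizontal and once for vertical residuals to build both columns of the table.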

Having begun my experience with precise GPS positioning using post-processed, L1-only GPS receivers, two things have been ingrained in my mind for 16 years now: 1) post processing has to be done in a very open environment, unless you are willing to occupy the point for a very long time, and even that carries no guarantee; 2) post processing requires long observation times: 30 minutes or more for L1, 10 minutes or more for L1/L2.

Testing this new server-based approach to post processing, I'm finding that some of my perceptions regarding post processing are in serious need of readjustment. This test shows post processing working very well in a fairly difficult environment with a fairly short observation time. This service is primarily designed for RTK surveyors who exceed communication range. You can store raw data at the rover and post process later, without needing additional software and without downloading files to a PC to manually process the data. It's all managed within the rover. Based on my old perceptions, I had the feeling that users would lose some accuracy and some ability to work in challenging environments when post processing. What I am finding is that the RTK and post processed results are practically equivalent.

I've pushed the system in worse places than this and find that if the RTK can get any decent fix at all (even if it cannot maintain the fix or complete the verification processes in the receiver), that post processing will also produce a good fix. The RTK engine and the Justin engine work differently which I believe will have the added benefit of providing an additional level of verification to results with very little additional effort on the user's part.

 
Posted : 31/03/2016 3:28 am
(@shawn-billings)
Posts: 2689
Registered
Topic starter
 

Time on Site for RTK
A few years back, I was using another system and doing a topo survey in the open with RTK. The edges of the 50-acre site had trees around the perimeter. As I got close to the perimeter, I apparently got a bad fix and collected about 5 points from it. I found it when I got back to the office and looked at the contours in the area. Something just wasn't right. We went back and collected those five shots again.

The rover picked up the bad fix when I was close to the perimeter under a few sparse pine trees and kept it for about a minute before the solution unraveled and the rover got a new fix. This was my first indication that bad fixes have a limited life span. Eventually the satellite geometry changes enough that the conditions allowing a bad fix to sneak past the processor no longer exist. How long does that take? I'm sure there is a way to look at the rate of change in cycles over time for GPS signals as a satellite moves across the sky.

But I'm a surveyor, not an electrical engineer or physicist, so I rely on experience. So far, the longest-running bad fix I've seen is about 80 seconds. I'm sure this depends on the receiver and the signals being tracked, but this is what I've observed. Oftentimes the conditions for a bad fix last a much, much shorter duration and can be trapped by reinitializing within seconds of the bad fix. This seems to capture the majority of problem fixes. I've been implementing a 3-minute observation time in harsh environments (unless I can provide other tests), and I've found this is a very sure way to detect bad fixes. Some of my colleagues have shortened this to two minutes, and that may ultimately be enough, but I'm erring on the side of caution and looking to back into a shorter time from the side of excess.
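The reset-and-compare idea can be sketched roughly like this. This is a hypothetical illustration only; the function name and the 0.10 ft tolerance are my assumptions, not the receiver's actual logic:

```python
# Hypothetical sketch of reset-and-compare: force the engine to
# re-initialize several times during an observation and accept the point
# only if all the independent fixes agree within a tolerance.
def fixes_agree(fixes, tol=0.10):
    """fixes: list of (N, E) coordinates from independent initializations.
    Returns True when every fix matches the first within tol (feet)."""
    n0, e0 = fixes[0]
    return all(abs(n - n0) <= tol and abs(e - e0) <= tol for n, e in fixes[1:])

good = [(100.02, 200.01), (100.03, 200.00), (100.01, 200.02)]
bad = [(100.02, 200.01), (101.50, 200.00)]  # one init jumped ~1.5 ft

print(fixes_agree(good))  # True: all inits landed together
print(fixes_agree(bad))   # False: the outlier betrays a bad fix
```

The point of the multi-minute observation is exactly to give the geometry time to change so that a bad initialization cannot survive every reset.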

Much like the post processing results above, I cannot tell you that you will have the same, or even similar, results with your own gear by observing for three minutes in canopy, but I do believe that there have been improvements in our equipment that may have gone unnoticed for years. The recent discussion about time on site with dual-frequency receivers reveals this. We're still thinking like it's 2000, and much has been done since then to improve the technology we use. I encourage operators everywhere to spend some time doing tests like this to find the breaking point of your technology and to sharpen your understanding of the equipment you are using.

 
Posted : 31/03/2016 3:44 am
(@nate-the-surveyor)
Posts: 10522
Registered
 

Does the Static File start and stop, along with the RTK observations, or can it run independently, for the full session time, UN-interrupted? I'd want it to be able to do BOTH. At my choosing.
Thank you,

Nate

 
Posted : 31/03/2016 3:53 am
(@shawn-billings)
Posts: 2689
Registered
Topic starter
 

At the rover, the static file starts when the Start button is pressed and ends when the accept/reject option appears.

I thought I might want the uninterrupted session file too for stop-and-go style processing. This was because I was sure that post processing couldn't do what RTK could do, getting a fix in a very short amount of time. I thought the additional time might give a little more help to the obviously inferior post processing method. However, I'm discovering that post processing isn't inferior and doesn't need the help.

 
Posted : 31/03/2016 3:58 am
 adam
(@adam)
Posts: 1163
Registered
 

Nice tunes, Shawn. I opted for the birds and the beat of a six-drum band for my background music. B-)

[MEDIA=youtube]aJ1MVRb1dEY[/MEDIA]

 
Posted : 31/03/2016 4:39 am
(@nate-the-surveyor)
Posts: 10522
Registered
 

So, you cannot turn it on, to run uninterrupted, in the background, while doing a number of RTK shots?

Another question. Can you do this all day long, getting background Post Processed Static, simultaneously, while doing your normal RTK?
I wanted to do that, a long time ago. It looked like TDS was headed that direction; then development changed direction.

N

 
Posted : 31/03/2016 4:42 am
(@nate-the-surveyor)
Posts: 10522
Registered
 

That is, storing, post processed, raw data, at all times, that RTK is being done.

 
Posted : 31/03/2016 5:10 am
(@shawn-billings)
Posts: 2689
Registered
Topic starter
 

Nate The Surveyor, post: 364896, member: 291 wrote: So, you cannot turn it on, to run uninterrupted, in the background, while doing a number of RTK shots?

That's correct. One raw data file for one RTK point.

Nate The Surveyor, post: 364896, member: 291 wrote: Can you do this all day long, getting background Post Processed Static, simultaneously, while doing your normal RTK?

Yes. There are two options for users:

  • Set Record Raw GNSS Data on in "What to Record". Every point collected will have a raw file attached to it. This is set in the individual action profiles, so you can have one profile that has this enabled (such as for control points) and another profile that does not have this enabled (such as topo points that you will likely not post process).
  • Turn on Activate Post Processing (APP) after Five Minutes. With this enabled, the receiver immediately begins recording a raw data file on Start; after five minutes, a tone sounds and a button appears allowing the user to store the point (regardless of solution type, i.e. fix, float, standalone) along with the raw file. The point is earmarked as Post Process Pending (PP) in the points list. Later, after the base raw file has been downloaded, the file is sent to DPOS and processed, and the resulting vector replaces the coordinate of the point automatically.

So you can always record raw data or only record raw data conditionally.

In either case, the raw data file is only recorded during the point observation. At the end of the point observation, the raw file is closed.

 
Posted : 31/03/2016 6:05 am
(@felipe-g-nievinski)
Posts: 42
Registered
 

If using the same base station for RTK and PPK (post-processed kinematic), PPK should be no worse than RTK. After all, in RTK you might have less data than in PPK, because of the data link issues (latency, drops, etc.).

The fact that your RTK statistics are slightly better than PPK may indicate that RTK is applying a more stringent QC than PPK, i.e., discarding bad results before the statistics are computed -- do you really have 187 points in both RTK and PPK? Also, it wasn't clear if your post-processing was done in separate static sessions or as an un-interrupted stop-and-go type of session.

If you wish to speed things up, instead of waiting for the satellites to move, try changing the antenna height (e.g., raise the mast by 1.5 meters), which should change the geometry sufficiently. In fact, it'd seem advisable to adopt the following procedure under canopy: collect three points at antenna heights of 1, 2, and 3 meters (with 1-min collection time each), then average the three together (after adding/subtracting 1 m in height).
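The bookkeeping for that procedure might look like the following sketch. The function name and values are my assumptions for illustration, not tested field practice:

```python
# Sketch of the antenna-height procedure: observe the same mark at several
# antenna heights, reduce each observation back to the mark by subtracting
# its height, then average. Values are made up.
def reduce_and_average(observations):
    """observations: list of (antenna_phase_center_height, antenna_height_m).
    Returns the mean mark elevation after removing each antenna height."""
    marks = [apc - ah for apc, ah in observations]
    return sum(marks) / len(marks)

# Three observations at 1 m, 2 m, and 3 m antenna heights:
obs = [(101.02, 1.0), (102.01, 2.0), (102.98, 3.0)]
print(round(reduce_and_average(obs), 3))  # mark elevation ~ 100.003
```

Raising the antenna changes the multipath geometry immediately instead of waiting for the constellation to move.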

-FGN.

 
Posted : 31/03/2016 6:07 am
(@nate-the-surveyor)
Posts: 10522
Registered
 

As was stated before, the primary purpose of this mechanism is to obtain a shot, out of radio range.

I have discovered that: Since I got the Javad system, I go places NEVER BEFORE possible, and WITH confidence, because the quality checks are there. And some of these places are in forests, where radio does not penetrate well. Also, I have found that the Javad radio, when broadcasting at 5 Hz and carrying such a load of information, sometimes does not reach the rover as well as it does in 1 Hz mode. So, this mechanism of processing seems (correct me if I am wrong) more related to loss of radio than a desire to escape RTK protocol. That is NOT to say that the generated static file might not yield a more robust solution. It probably does.
Since I have bought into the Javad "We can do this better" concept, and I am seeing that it works thus far, I am very excited about this new item... countdown to release! (In a week or so, is the estimate.)

Nate

 
Posted : 31/03/2016 6:29 am
(@shawn-billings)
Posts: 2689
Registered
Topic starter
 

Felipe G. Nievinski, post: 364919, member: 10769 wrote: If using the same base station for RTK and PPK (post-processed kinematic), PPK should be no worse than RTK. After all, in RTK you might have less data than in PPK, because of the data link issues (latency, drops, etc.).

I agree. RTK also looks epoch by epoch, while post processing can process results through the entire session forward and backward. It makes sense that post processing would be superior to RTK, even with short data sets and difficult environments, but this is not the perception of post processing in our industry, I think.

Felipe G. Nievinski, post: 364919, member: 10769 wrote: The fact that your RTK statistics are slightly better than PPK may indicate that RTK is applying a more stringent QC than PPK, i.e., discarding bad results before the statistics are computed -- do you really have 187 points in both RTK and PPK? Also, it wasn't clear if your post-processing was done in separate static sessions or as an un-interrupted stop-and-go type of session.

It's possible, but I don't think this is the case. More likely is the fact that the RTK rover in this example has six processing engines running simultaneously. Each fixed engine contributes to the position at each epoch. I suspect that these six engines are smoothing out the epoch-by-epoch results.

I do have 187 individual points. Each point has an RTK position and a simultaneously collected raw data file. The raw data was post processed with the base and the post processed positions have been compared with the RTK positions. In the software, each point has two coordinates. The software picks the one to use based on number of epochs in the solution. The user can override this and select the other solution if they wish. The software also compares the RTK and BP points, flagging those with residuals that exceed a user defined tolerance.
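That RTK-vs-BP comparison could be sketched like this. The field names and tolerances are assumptions for illustration, not the actual software's values:

```python
# Sketch: flag points whose RTK and base-processed (BP) solutions disagree
# by more than a user-defined tolerance. Tolerances here are assumptions.
def flag_disagreements(points, tol_h=0.10, tol_v=0.20):
    """points: list of (name, rtk, bp) where rtk and bp are (N, E, U) in feet.
    Returns the names whose two solutions exceed tolerance."""
    flagged = []
    for name, rtk, bp in points:
        dh = ((rtk[0] - bp[0]) ** 2 + (rtk[1] - bp[1]) ** 2) ** 0.5
        dv = abs(rtk[2] - bp[2])
        if dh > tol_h or dv > tol_v:
            flagged.append(name)
    return flagged

pts = [
    ("CP1", (100.00, 200.00, 50.00), (100.02, 200.01, 50.05)),
    ("CP2", (300.00, 400.00, 60.00), (300.30, 400.00, 60.02)),  # 0.30 ft horiz
]
print(flag_disagreements(pts))  # ['CP2']
```

Two independent engines agreeing within tolerance is exactly the extra layer of verification described above.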

Each point that was post processed was an individual session. On average, these short sessions were about 3.5 minutes long, but some were longer. The duration depended on how long the receiver required to pass all of the real-time verification processes I had enabled (multiple engine fixes, time on site based on number of engines fixed, final reset of engine prior to accepting). As the satellite count dwindled, some of the points required more time than others. All were a minimum of 3 minutes, though. Currently the server software I'm testing does not support stop-and-go, and at this point I'm not sure I would advocate it. The file size for the rover would be enormous for a full work day, and I'm not sure there would be much practical benefit. There is a lot of data stored in the rover raw file for this receiver beyond the observables from the satellites.

Felipe G. Nievinski, post: 364919, member: 10769 wrote: If you wish to speed things up, instead of waiting for the satellites to move, try changing the antenna height (e.g., raise the mast by 1.5 meter), which should change the geometry sufficiently. In fact, it'd seem advisable to adopt the following procedure under canopy: collect three points at antenna heights 1, 2, and 3 meters (with 1-min collection time each), then average the three together (after adding/subtracting 1 m in height).

It may be worth testing, but currently the receiver does a very good job of detecting bad fixes by resetting the engines at various times throughout the collection, as you can see in Adam's video above. Watch the upper left corner of the receiver screen at the beginning of the collection and you will see the receiver indicate Fix then FLT again and again. Adam has his receiver set to require 25 repetitions of this before finally allowing the receiver to run without resets until the end of the point collection, at which point he requires one more engine reset. No need for manipulating the antenna. The receiver does what we've always considered "good practice" in a fully automated way.

 
Posted : 31/03/2016 6:31 am
(@shawn-billings)
Posts: 2689
Registered
Topic starter
 

Nate The Surveyor, post: 364921, member: 291 wrote: So, this mechanism of processing seems (Correct me if I am wrong) more related to loss of radio, than a desire to escape RTK protocol. That is NOT to say that possibly the static file generated yields a more robust solution. It probably does.

That's correct. We were looking to handle loss of corrections due to communication outages. I expected that there would be some sacrifice to performance. I'm finding that this is not the case though. Performance seems to be as good with post processing (and in a few cases better) than with RTK. The only sacrifice is results come later rather than in real time.

 
Posted : 31/03/2016 6:35 am
(@randy-rhodes)
Posts: 14
Registered
 

Be very careful!! I have discovered that observations as short as 1:30 will give you great numbers post processed using Topcon Tools; however, when they are compared with conventional positions or with verified positions, they can be really off (I have seen as much as 10'). I currently use post-processed points with observations of at least 10 minutes if I cannot get a verified position with the LS.

 
Posted : 01/04/2016 3:32 am
(@nate-the-surveyor)
Posts: 10522
Registered
 

Randy, I like the way you think. It is the way all connoisseurs of equipment should think. I remember seeing an EDM for the first time. It was a Beetle. I was probably around 8 yrs old. I was thinking... "How do they know that thing is right?" It was years later that I saw an EDM lie. It happened to me, with an autoranger S model. It still shows on a survey, done by me, and signed by my dad. An error of some 40 feet or so. A sideshot to a section corner. I know it is there. The farmer ran me off... long story, so I never went back. Year was 1986, in the spring, on a survey west of Mena, AR.
It is the DUTY of every surveyor to investigate the reliability of his methods, tools, and procedures. Post Processed Static, RTK, or the like. You are on the right track.
When I first got into RTK GPS, I bought the toughest, and best for my job RTK that I could find. I bought the Topcon Legacy E system. I went in with BOTH feet firmly planted with a few ideas.
1.) ALL GPS will lie.
2.) In adverse conditions, it will lie more.
3.) The harder it works, the MORE likely it is to be lying.
4.) All shots, under canopy must be verified, or treated with less confidence. (Or, with confidence, that some shots are wrong!)

I have used my Legacy E system now for some 10 + years. In some environments, I consider it to be lying 1 time, out of 10 shots. These environments require 3 independent shots. There MUST be some sort of change in the environment. In areas that have lots of vertical tree trunks, a horizontal move of the rover is needed to check things. IT WILL remain fixed for several minutes, with the wrong INIT. Storing this multiple times, without loss of Init, is not redundancy. It will merely provide an average of the wrong location.... (Like listening to the same false witness, on the witness stand, as he digs in deeper!)

If you are around oaks, with lots of horizontal limbs, a VERTICAL change in antenna height, will usually catch things.
Knowing it lies, sets the stage, to use it.

The science of Proving your tools, is a normal part of using said tools.
GPS is no different.

"Go and sin no more"

I bought the Javad system.

Good surveyors are not trusting surveyors.

Randy, you are a good surveyor.

Nate

 
Posted : 01/04/2016 4:37 am
(@shawn-billings)
Posts: 2689
Registered
Topic starter
 

Randy Rhodes, post: 365056, member: 10669 wrote: Be very careful!! I have discovered that using observations as short as 1:30 will give you great numbers post processed using topcon tools, however when they are compared with conventional positions or with verified positions they can be really off ( i have seen as much as 10' ). I currently am using post processed points with observations of at least 10 minutes if i cannot get a verified position with the ls .

Randy,
The warning is very valid. At some point the processors will fail to deliver good results: environment, constellation, distance to the base, frequencies used, processing techniques, time on site, etc. What is that point? I'm not sure. I had an idea of what that point was, but I'm pleasantly surprised to find that my idea was pessimistic. The only way I know to prove it out is by experimentation. The limitations of experimentation are our ability to observe and our ability to extrapolate the results to future scenarios. For instance, you will never find identical canopy. This is a rogue variable in our equation. I can't say with certainty that 3-4 minutes of raw data under moderate canopy will always yield a successful vector. But I've had enough success with DPOS in varied environments to suggest my chances of success are very, very high.

Another variable is processor type. You are using Topcon Tools, I'm using a newly enhanced Justin engine. The developers will use different techniques in the software to solve the vectors. So I can't say for sure that you will have the same results with TT as I will with DPOS. A friend of mine has processed some data for me with Topcon Tools in adverse conditions and produced very good results, but this is anecdotal experience and not a rigid test.

This is why it is necessary for users to do these experiments with their equipment, in their environments, with their procedures. I know that I can eventually break Justin by reducing time on site, pushing into deeper canopy, increasing vector lengths, etc. But I'm very surprised to find those limits are far beyond what I expected.

 
Posted : 01/04/2016 7:12 am
(@dmyhill)
Posts: 3082
Registered
 

Shawn Billings, post: 364882, member: 6521 wrote:

The next question would be to know how the distribution of the 187 points was around the average...

My next question is how shooting the point directly from the point 195' away compared. Right now, total station traverse, with correct procedures, is the gold standard for accurately measuring those types of distances. I would be fascinated to know how the answers compared to a traditional traverse.

 
Posted : 01/04/2016 2:39 pm
 adam
(@adam)
Posts: 1163
Registered
 

dmyhill, point 10 from my video is a rebar and cap set in concrete, a control point at my house. I set these with a total station and also leveled them.

point 10:
N 631324.37 ft
E 1245153.3994
OH 989.01

I repeated the procedure in the video on this point with the LS about 50 times and averaged all the points.

LS point 10 average:
N 631324.44
E 1245153.50
OH 988.95

 
Posted : 01/04/2016 2:50 pm
(@shawn-billings)
Posts: 2689
Registered
Topic starter
 

dmyhill, post: 365171, member: 1137 wrote: My next question is how shooting the point directly from the point 195' away compared. Right now, total station traverse, with correct procedures, is the gold standard for accurately measuring those types of distances. I would be fascinated to know how the answers compared to a traditional traverse.

I'm hoping to have time tomorrow to get a position on this point using a total station. I am interested in this result as well. But the main exercise for me was much more binary: were the fixed solutions all good, or were there intermittent bad fixes? I am confident in reporting that all fixes were good.

It is a good question though.

 
Posted : 01/04/2016 3:36 pm
(@dmyhill)
Posts: 3082
Registered
 

Shawn Billings, post: 365185, member: 6521 wrote: I'm hoping to have time tomorrow to get a position on this point using total station. I am interested in this result as well. But the main exercise for me was much more binary, were the fixed solutions all good or were there intermittently bad fixes. I am confident in reporting all fixes were good.

It is a good question though.

In my mind, the .3' vertical would not be "fixed", unless the reported RMS reflected that.

If you do indeed flesh out the investigation, the correlation (if any) of the actual error to the modeled error would be of great interest.

I believe that the term "fixed" really only has merit to me when the modeled error is a real world number.

 
Posted : 01/04/2016 5:18 pm