What causes GNSS blunders when you have "fix"?
Posted by bc-surveyor on July 14, 2024 at 1:47 pm

I’m sure we’ve all seen it before. A point gets staked or tied & when you go back to check it, you’re a meter or so out even though you have fix and the DC was reporting low residuals at the time of observation. Typically I’ve seen this in “poor” GNSS conditions (overhead trees, buildings nearby, etc.).
What exactly causes this?
27 Replies
-
Define ‘fixed’
It can be just that the software is able to repeatedly calculate the same position a number of times in a number of ways.
So for example the same multipath conditions producing the same numbers.
You are right in that it is often a product of poor reception conditions.
Other than avoiding bad practice and breaking out the total station when you should, the best counter is time, i.e. observations long enough that the constellations change, or repeated with a reasonable time delay.
That is why most cadastral authorities around the world require at least two time-separated control measurements when using GNSS.
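A minimal sketch of that redundancy check, comparing two time-separated observations of the same point against a tolerance (the function name, coordinates, and tolerances here are illustrative only, not from any vendor software; use whatever your jurisdiction or project spec requires):

```python
import math

def check_redundant_obs(obs1, obs2, tol_h=0.02, tol_v=0.03):
    """Compare two time-separated RTK observations of the same point.

    obs1, obs2: (easting, northing, height) tuples in metres.
    Returns (passes, horizontal_difference, vertical_difference).
    Tolerances are illustrative, not a standard.
    """
    dh = math.hypot(obs2[0] - obs1[0], obs2[1] - obs1[1])
    dv = abs(obs2[2] - obs1[2])
    return (dh <= tol_h and dv <= tol_v), dh, dv

# A healthy pair of fixes agrees at the centimetre level...
ok, dh, dv = check_redundant_obs((5000.012, 2000.004, 101.230),
                                 (5000.020, 2000.010, 101.245))

# ...while a false fix shows up immediately on the second visit:
bad, dh2, dv2 = check_redundant_obs((5000.012, 2000.004, 101.230),
                                    (5001.100, 2000.500, 101.600))
```

The point of the time separation is that the two observations sample different satellite geometries, so a false fix is very unlikely to reproduce itself.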
-
My guess is latency. There is a second or two between the time you make the decision to record and the actual recording happening. In that time things can go bad.
-
Second vote for multipath but never had RTK GNSS tell me it’s fixed and been 1m different when I come back half hour+ later for second shot??
With base/rover (<2km baselines) I’m worried if I get more than 0.02m HZ and 0.03m V. Only time I had issues with height blowing out 0.1m between two fixed shots was using network RTK.
-
Only see it in heavy canopy these days.
In the early RTK surveys, we would see it all the time, in open sky. Those bad fixes might stay fixed for 1/2 an hour or more. Usually, those were off a wavelength (8″); I’ve not seen them off 1 m or more.
But the ones today in canopy are flashing at you: POTENTIAL BAD FIX, POTENTIAL BAD FIX.
If they get used without some verification it’s all on the surveyor.
I only see the advantage of another shot after waiting some time to give satellites enough movement to “find” some gaps to “see” the receiver better. Coming back after a certain time has never produced any better results from our testing using the later GPS units that can see all or most of the satellites. It’s not really a thing anymore. I’ve also never seen two bad fixes that have the same position.
-
By “fix” I meant the data collector software has determined a fixed integer ambiguity for the number of carrier cycles from the receiver to the satellites.
Norman, can you expand on that a bit?
I’ve seen this a few times. I’ll attach a photo of the most recent case. Point 301502 shows a fixed solution type with 0.064′ precision over a 59 second observation. But since I was staking out a control point, I knew it was way off. As you can see from the plot, it was over 6′ out on all axes. The precision was, in fact, OK, but it was “stuck” in the wrong position.
Of course, these are bad GNSS conditions, this is part of a test I’m doing. I’m just wondering what exactly is causing this. I’ve seen this before when doing checks in other poor GNSS conditions with different receivers.
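To put a rough number on what a wrong integer costs (assumed illustration only; the published GPS L1 carrier frequency is 1575.42 MHz): a one-cycle error in the resolved ambiguity biases that satellite's carrier-phase range by a full wavelength, about 19 cm, and the position solution then spreads that bias across the coordinate axes depending on satellite geometry.

```python
# Speed of light and the GPS L1 carrier frequency (published constants)
C = 299_792_458.0        # m/s
F_L1 = 1_575.42e6        # Hz

wavelength_l1 = C / F_L1          # ~0.19 m, roughly 7.5 inches

# A carrier-phase range is (measured phase + integer ambiguity N) * wavelength,
# so fixing N wrong by a single cycle biases that satellite's range
# by one full wavelength:
range_error_one_cycle = 1 * wavelength_l1
```

This is consistent with the old open-sky false fixes being "off a wavelength": the integers were self-consistent, so residuals looked fine, but one or more of them were simply the wrong integers.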
-
FALSE FIX.
Apparently, receivers such as JAVAD, Trimble R12, Hemisphere S631, Carlson BRx7 help mitigate this where most do NOT or CANNOT.
Keep an eye on your PRECISION… obviously… if it’s not within tolerance, you know it’s a FALSE FIX; try again. Especially under canopy and in multipath conditions.
-
A point gets staked or tied & when you go back to check it, you’re a meter or so out even though you have fix and the DC was reporting low residuals at the time of observation.
It wasn’t fixed. Not really. Dump the solution (cover antenna, break connection, etc.), re-acquire, reshoot, and compare. Preferably put some time between observations. At this point with the number of sats, even 10 minutes would probably be enough.
“cooking” for 59 seconds isn’t the same thing
Remember, your eyes are the most important sensor, and your brain the most reliable data collector you have. Multipath cannot be eliminated by any receiver. It is a fact of physics. When your eyes and brain tell you that you are in a multipath environment, you are. Mitigation and additional checks must occur, no matter what the brand name of your receiver. (Some just do a better job of automatically mitigating and checking.)
-All thoughts my own, except my typos and when I am wrong. -
Point 301502 shows a fixed solution type with 0.064′ precision over a 59 second observation.
What was the vertical precision? I have seen .05′ fixed horz, with a “fix” of .3′ vert. Take half a roll of nickels (I prefer dimes…). If the “fix” doesn’t fit in that stack… it probably isn’t really a fix.
-All thoughts my own, except my typos and when I am wrong. -
Ahhhh yes I think you may have hit the nail on the head here Tim. This makes sense.
-
DMY, would it be more appropriate to say that the receiver had fix (it determined a whole number of carrier cycles), albeit an incorrect one? I would suggest this because I would assume the receiver would know if it is using a whole integer or not. If this is the case, my question would then be: how would we know how many “distance measurements” were being influenced by incorrect ambiguity resolution? Would it only take 1 to throw the result out by 6′ in each axis like I saw here? Can the receiver not determine that the calculated position wouldn’t really be fitting very well with the other “distance measurements”?
Of course in the real world I wouldn’t just take one shot in these conditions and go on my merry way. I’m more so trying to understand exactly what is happening in the back end of the coordinate determination.
-
1. Multipath
2. Latency
3. Poor ionospheric or tropospheric conditions
4. Solar flare or spike
5. RF interference
These are a few things. I honestly think we have moved to a different kind of solution with newer receivers and firmware; the term I like to use is FIXED, but that is no longer the best way to describe what the receivers are saying. Initialized is another one that we have all used historically. The old GPS-only receivers, and even those with GLONASS, could have a bad AKA fixed position. The new receivers can as well; they are just less likely to, in theory.
We should probably gain a better understanding of what resolved ambiguity means: the integers have been solved. In the most basic sense, this means that when, within a certain confidence, a number of epochs repeat on themselves within a certain tolerance, the green check aka fixed solution is derived, because time has been solved for from various satellites. A lot more goes on, but to keep it simple: this means we can still get a bad solution, especially in a poor GNSS environment. 59 seconds is nothing. If you look at the time stamp and return to the same exact location tomorrow at the same exact time, you might even repeat a bad position. This will depend on the wind on those leaves and the moisture level in that environment. Anytime, I mean anytime, you are in a non-GNSS-friendly environment, time on station and time between observations is a must. Redundancy, redundancy, redundancy is your only proof of the answer/results.

You could have been getting a signal from a spoofer as well. Lots of things we have no control over, even in a non-hostile environment. I knew several surveyors back in the 2006–2008 time frame, before they cracked down on some of the jammers and spoofers that truck drivers and others were using, who had issues along and within medians of interstates, with some shots being whacky even in wide-open sky. I can remember doing static work in the ’90s where, if you keyed a walkie-talkie, it would blast the signal and you could see it in the file when processing. The antennas have all become so much better at listening to the correct signals for sure, and the receivers and firmware are so much better. Also, the satellites broadcasting and updates to them have become better as well.
One other thing I have seen is you picked up a position from a different base than your own if using a radio. If on a network like a VRS system, then the modeling could have had issues or a latency issue on the network themselves.
I have no issues pushing into canopy, but when I do, I build a good redundancy process, and sometimes you just have to look, while on station, at the signal-to-noise ratios along with the azimuth and elevation of the satellites it is getting signals from. Take a compass and clinometer, and sometimes you can tell that a sat is not giving the right answer because its signal is coming from the opposite of where it says it is, or you can say: no way am I truly reading a direct signal from that area at that elevation, as there is a hill or whatever in the way, and so you know it is a reflected signal.
I notice you are set to DRMS, and I will say that 95% of the time you will fall within that realm of precisions in a good environment. Once you get into canopy or other areas like metal roofs, etc., that confidence goes down. If that receiver kicked out of the fixed solution at 59 seconds, then I would say it had figured out it was a bad solution. Again, I have stated this here: from my personal testing in all sorts of areas, if you have a minimum of 22 sats and can stay fixed for a minimum of 180 epochs without it bouncing on the precisions, the odds are more in your favor of having a result that falls within the 95% confidence (not DRMS) of your precisions. So double those DRMS in your head; roughly, that’s about how well you will position that point most of the time. But in a hostile area you better go back a couple of hours later and try again.

Another quick check, not often mentioned here (like covering the receiver, aka making it lose lock and re-resolve), is to put something in the ground temporarily, say 5 to 10 feet away, observe that, and pull a tape. Inverse those two points. Make sure you move in a direction that gets you past the same trees around you, etc.; you move just enough to force a different multipath environment. I will often stake out the point I just located, move a distance, and let it sit for a few as a quick check to see if it matches what a tape says. I usually do this on my first run through to see if I need to plan to traverse or whatever later if I see a huge issue.
Now, saying all of that: is it possible you have two points in the job or a CSV that are identical but with different coordinates? That can cause issues staking out as well, and they do not necessarily have to be the same delta difference. I had a crew chief who had accidentally linked to a job from a different site, with two point numbers the same but different coordinates, and he was fighting trying to find some control once in the woods. Also, once he found a nail, but we went back and found a different nail with GPS, not far from the one he found in the woods. Yeah, that was bad, as they set up on the wrong one and backsighted to a correct point. The differences between the two were 0.02 ft and 0.03 vertical, which was nothing for a rural traverse a few years prior. That was a head scratcher for me, so I sent them back out and said: look, go to the first two points and do a sweep of 25 ft around them, a good sweep, as another point is there. Yep, it was, and they set up, turned it in, and traversed through a few others as a good check, and all worked.
Sometimes its just a blunder not a receiver or equipment issue.
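The tape-and-inverse check described above can be sketched like this (all coordinates, the taped distance, and the tolerance are made-up illustration values):

```python
import math

def inverse_2d(p1, p2):
    """Horizontal distance between two (easting, northing) pairs, metres."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

# GNSS positions of the point of interest and a temporary mark ~3 m away,
# placed so the rover sees a different multipath environment
pt_main = (5000.000, 2000.000)
pt_temp = (5002.940, 2000.610)

taped = 3.00                        # distance pulled with the tape, metres
gnss = inverse_2d(pt_main, pt_temp)

# If the GNSS inverse disagrees with the tape by more than a few
# centimetres, suspect a false fix on one of the two shots.
agrees = abs(gnss - taped) <= 0.03
```

The tape provides a completely independent distance, so a false fix on either shot shows up as a disagreement even when both shots individually report good precisions.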
-
According to the screenshot, this is an RTK vector only, and has zero (0) epochs under the Observed Data.
Trimble (Spectra) RTK needs at least two (2) epochs. You can set it to one (1) (but it still uses two (2)), as a check for the statistics.
Nothing on the screenshot indicates 59 seconds of data.
You might have been fixed with good numbers, then lost radio, and hit store/auto store.
I’ve seen the zero epochs thing before, but usually it warns you when you try to store, etc.
-
That’s odd. It was definitely measuring for 59 seconds (see new screenshot). Maybe even though it was fixed, the precision was above a set threshold and all measurements above that threshold were not used?
This brings up another question of mine: under what circumstances would you want to use a different “number of measurements” vs “occupation time”? For example, an observed control point uses a 180 second duration and also 180 measurements. Why would one want to have those numbers not match?
Again, I’m not concerned about this particular point. More so the theory behind what causes the data collector to show that a measurement has “fix” with fairly decent precision but is shifted way out of place.
-
I’ve had very few bad shots, like once every couple of years, and every one was because I heard the beep and thought it had stored the point, but in fact it was a warning beep to tell me it had lost lock. A few seconds later as I was walking away it had acquired lock and stored the point, as I later realized.
-
Well, after further review, your RTCM age is 1.000 seconds, so the rover does appear to have a good data link.
Your precision threshold theory seems most plausible. You can “start measuring”, but the RTK epochs (measurements) to be used won’t start populating the solution algorithm until the minimum precisions are met.
Not sure if you are using TBC (Access) to get the screenshots of the Properties, but it looks like you are. Do yourself a favor and switch the Precision Confidence Level reporting to “Scale to confidence level = 95%”. Technically, DRMS is slightly different than 1-sigma, but if you can display 95% directly, it beats trying to discuss such matters with staff that don’t understand what it is anyway. And then everyone is on the same page.
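For what it's worth, the "double it in your head" habit matches the conventional rule-of-thumb scale factors for a roughly circular 2-D error distribution (these factors are textbook approximations, not values from any vendor's software):

```python
def drms_to_r95(drms):
    """Approximate 95% horizontal radius from a 2-D DRMS value.

    For a roughly circular 2-D error distribution, the DRMS radius
    contains only ~63-68% of positions; R95 is approximately
    1.73 * DRMS. Doubling DRMS (2DRMS) covers ~95-98%, which is why
    mentally doubling a reported DRMS precision is a safe habit.
    """
    return 1.73 * drms

# A reported 0.010 m DRMS corresponds to roughly 0.017 m at 95%
r95 = drms_to_r95(0.010)
```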
Much of the literature about RTK uses your stipulated definition of the term “fixed”, which is the legacy description of the concept, as far as Trimble is concerned. There was a thread not too long ago about what RTK is actually doing/how it’s measuring, and it surprised many that a 3 minute RTK “observation” was storing only 1 position solution.
GNSS positioning (RTK/static) is all about time on a point, and changes in the satellite constellation(s). Trimble thinks that being “initialized” (fixed, but not really) for 3 minutes on a point with RTK should be enough time to get the best available precisions for a given set of site observation conditions, such that the position solution satisfies some set of criteria deemed to be a “successful” position acquisition. (Repeatable, accurate, precise to some statistical metric.)
Arguably, you would want the time on point to coincide with some finite integer number of measurements (epoch interval). Trimble (most?) RTK base position pulse is 1 second intervals. And typical (good) RTCM age is 1 second (or less). So it makes sense to set RTK measurement intervals (epochs) to 1 second so as to get the maximum number of measurements for a given period of observation time. This scenario gives the algorithm the most amount of data (measurements) to resolve (compute) the best position solution.
I think many people confuse (commingle) different aspects of the methods between the two styles of surveying (RTK and static). Logging data at the base/rover while performing RTK surveying is one instance where I can imagine these mismatched settings come up. Perfect “survey style” settings for static surveying may not be the same for RTK surveying, which is why you can create and configure them separately.
-
“Norman, can you expand on that a bit?”
It’s more or less as Bruce described. In the brief moment between deciding to press Record and the thing actually recording, the fix is lost and a false reading gets recorded.
-
I tend to see it in canopy situations and also near fire stations. I don’t know what the deal is with fire stations, but I assume they have a gnarly radio that somehow causes problems. I can sit within a couple blocks of a fire station and take multiple 1 minute shots and have them read 0.10–0.15 different almost every time.
-
Thank you all for your well thought out and detailed responses! They were all very much appreciated.
-
OleManRiver…providing a field guide to not screwing the pooch when using GNSS in cover. I agree with the whole thing.
-All thoughts my own, except my typos and when I am wrong. -
DMY, would it be more appropriate to say that the receiver had fix (it determined a whole number of carrier cycles), albeit an incorrect one? I would suggest this because I would assume the receiver would know if it is using a whole integer or not. If this is the case, my question would then be: how would we know how many “distance measurements” were being influenced by incorrect ambiguity resolution? Would it only take 1 to throw the result out by 6′ in each axis like I saw here? Can the receiver not determine that the calculated position wouldn’t really be fitting very well with the other “distance measurements”?
Of course in the real world I wouldn’t just take one shot in these conditions and go on my merry way. I’m more so trying to understand exactly what is happening in the back end of the coordinate determination.

I cannot run you through the math. I can tell you that I have been told by multiple people who can run through the math how to mitigate the inherent dangers of GNSS in canopy.
But regarding a “fix”, that is just a word on a computer in your hand, and garbage in = garbage out. And we are the ones putting in the garbage. We set the fix parameters, when and how it stores, etc. “Fix” is a word that is essentially a marketing term by the time it gets down to the end user. Yes, it has actual meaning, but by the time it gets down to the word on our data collectors, it means what we say it means.
Your questions about the processing are what a certain purveyor (RIP) of GNSS surveying equipment was trying to answer with his V6 solution, from what I understood. Unfortunately, his personality, combined with the industry going into hardened silos, kept that discussion out of the mainstream.
-All thoughts my own, except my typos and when I am wrong.