GPS Receiver Test
If one wants to look at some tests (newer than FGCS), just type in the above (GPS Receiver Test) and you should get many hits.
Look for an interesting test done by the Delft University of Technology: A. Amiri-Simkooei, R. Kremers and C. Tiberius, 33 pages.
JOHN NOLTON
Nate The Surveyor, post: 400230, member: 291 wrote:
I have been trying to figure out a GOOD test.
Post or tell me your ideas, and let's try it.
Nate
I reckon you should design a practical test. Perhaps something like:
For a test with four receivers/antennas, set up four tripods: one under canopy, one at the edge of canopy, and two in the clear. Survey with a high-accuracy total station to determine true coordinates of the points to be occupied (and of the base units). Set all GNSS units on individual tripods to average position for 5-10 minutes. Rotate all units clockwise to the next tripod and average position for another 5-10 minutes. Rotate the units again, and so on, as many times as your stamina will allow. Ideally you'd have most of a day's worth of data under both favourable and marginal constellations.
I think this test would give you a great looking point cloud on each point that would make direct comparison easy and meaningful.
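If you dump the averaged positions to a text file, a few lines of Python could do the comparison against the TS truth. This is only a sketch; the CSV layout (receiver,point,east,north in feet) and the coordinate values are placeholders I've made up, not a real export format:
[CODE]
# Rough sketch: summarize each receiver's misclosure against the
# TS-derived "truth" for each tripod point. File layout and
# coordinates below are invented placeholders.
import csv
import math
from collections import defaultdict

truth = {  # total-station coordinates of the four tripod points
    "P1": (1000.00, 5000.00),  # under canopy
    "P2": (1100.00, 5000.00),  # edge of canopy
    "P3": (1200.00, 5000.00),  # clear
    "P4": (1300.00, 5000.00),  # clear
}

errs = defaultdict(list)  # (receiver, point) -> horizontal misclosures
with open("rotation_test.csv") as f:
    for row in csv.DictReader(f):
        te, tn = truth[row["point"]]
        d = math.hypot(float(row["east"]) - te, float(row["north"]) - tn)
        errs[(row["receiver"], row["point"])].append(d)

for (rcvr, pt), d in sorted(errs.items()):
    rms = math.sqrt(sum(e * e for e in d) / len(d))
    print(f"{rcvr} @ {pt}: n={len(d)}  rms={rms:.3f}'  worst={max(d):.3f}'")
[/CODE]
Sort the printed lines by RMS and the ranking under each sky condition falls right out.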
Why do you need to test for accuracy when you never moved any monument that is out of place anyway?
Ah ha ha ha I love you too, Francis!
Say, do you like to fish? I do. But, I don't do it enough.
_____________
Back on topic, I was taught that you "use the tools you have, before you acquire more".
There are two fundamental errors that RTK GPS is prone to: those typically greater than 0.15' (bad initialization), and those less than 0.15' (good initialization, sloppy shot).
Multiple shots on a given point can reduce sloppy shot to less than 0.05' most of the time.
Sloppy shot, when there is good satellite configuration and good sky view, is typically less than 0.05', and often in the 0.02' to 0.03' range.
(We are also talking about a NEARBY base station.)
So, in my experience, sloppy shot is MORE random, depending on time of day and how much time is spent on a given point.
You can also observe sloppy shot simply by setting up the rover with a bipod and making multiple observations of the same point, without moving.
This is also how you get higher accuracy out of RTK in canopy.
Using primitive methods, i.e., my methods, we should be able to FIND, and quantify, both kinds of error.
For my purposes, we are looking for bad initialization, primarily.
I already know how to get this down to a tighter confidence, simply by multiple shots.
The main point here is HOW MUCH TIME it takes to acquire 100% confidence: first, against bad-init errors, and second, to tighten the slop of a good init until you are confident that you have less than 0.08', or 0.05', or even tighter. We're talking horizontal accuracies here.
The methods outlined above by others, i.e., three or five receivers on a board mounted on tripods, or signal splitters, serve a purpose, but it's not the one I'm after. They will reveal errors of construction and the like.
There is a primitive way to study that too... Set up for a 24-hour static observation, then rotate the antenna 180 degrees and repeat. Then study the point clouds.
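The arithmetic on that one is simple, if my read of it is right: an eccentricity in the phase center (or the mount) pushes the 0-degree cloud one way and the 180-degree cloud the opposite way, so half the vector between the two cluster means estimates the offset. A sketch, with an invented file layout (north,east per line):
[CODE]
# Half the vector between the 0-degree and 180-degree cluster means
# estimates the antenna/mount eccentricity. File names and layout
# are invented placeholders.
def cluster_mean(path):
    pts = [tuple(map(float, line.split(",")))
           for line in open(path) if line.strip()]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

n0, e0 = cluster_mean("static_0deg.csv")      # north,east per line
n180, e180 = cluster_mean("static_180deg.csv")
print(f"estimated offset: dN={(n0 - n180) / 2:+.4f}  "
      f"dE={(e0 - e180) / 2:+.4f}")
[/CODE]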
Anyhow, lots of testing can be done with the tools we all possess.
I'm promoting doing what we can with what we've got. And I'm saying that we SHOULD already be doing this, on a regular basis.
Good practice.
I'm certainly not opposed to a MORE scientific approach.
But I am saying that there's a lot we can do that is less scientific, but will help us to:
1.) check our gear, methods, and procedure.
2.) allow us to quick compare various brands, configurations, and designs.
3.) help us stay in touch with the bigger picture of technology.
4.) learn more about the science, laws, and mechanisms of this oblate spheroid called Earth.
5.) help us "stay in touch" with our "error potentials".
These are my ongoing goals.
I can see the "handwriting on the wall" about technology eventually delivering $1k/1 cm GPS receivers to this planet in my lifetime. Meaning, for 1,000 dollars or less, we will be able to get GPS receivers that can consistently give us one centimeter accuracy or better. (Horizontal accuracy.)
Now, the interface, and USE of this data, will always be a surveyor's domain.
Time waits for no man.
We should all desire to stay in touch with technology.
I do.
Nate
I agree. I don't think a splitter is appropriate for this sort of test. I would think this would be a system test, not just a receiver test. After all, most would want to know which system (if any) performed better, not just one component of the system (i.e., the receiver).
I would also be inclined to use a base that is from the same manufacturer. It should not matter a lot, but I suspect each manufacturer will perform best with a base from that manufacturer. RTN corrections would be a possible exception, particularly if the test is which receiver performs best in a particular RTN.
Mark Silver has a very good test under canopy: two mounts within a couple of feet of each other. Set a receiver on each mount, record a point, then switch and repeat. Between switches he physically dumps the antenna (flips it upside down). He does this several hundred times. This guarantees that, even in the space of two feet, both receivers are compared in the same place; and even though conditions are never 100% identical, the mass of points on each mount should present a clear winner. I don't particularly approve of dumping the antenna, at least not without some software reset to clear the engines, but otherwise I believe it's a solid test.
In the end, a simple ASCII export of the points, brought into a spreadsheet, can be used for statistical analysis of the point cluster of each receiver on the two points.
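If you'd rather skip the spreadsheet, something like this computes the cluster statistics directly. The three-column file layout (north,east,elev, one file per receiver/point) is just an assumption:
[CODE]
# Minimal cluster statistics for an ASCII point export. Assumes one
# file per receiver/point with lines "north,east,elev" -- an invented
# layout, not any particular data collector's format.
import math
import sys

pts = [tuple(map(float, line.split(",")))
       for line in open(sys.argv[1]) if line.strip()]
n = len(pts)
mn = [sum(p[i] for p in pts) / n for i in range(3)]  # mean N, E, elev

# Horizontal distance of each shot from the cluster mean.
horiz = [math.hypot(p[0] - mn[0], p[1] - mn[1]) for p in pts]
elevs = [p[2] for p in pts]

print(f"shots: {n}")
print(f"mean horizontal scatter: {sum(horiz) / n:.3f}")
print(f"max horizontal scatter:  {max(horiz):.3f}")
print(f"elevation range:         {max(elevs) - min(elevs):.3f}")
[/CODE]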
So, Mr Schrock, who pays you to do your testing? Who pays your bills?
That is something of considerable interest to me.
It can be time-consuming, and such, to set up a good testing facility.
We all want the best gear. That is my motive.
Nate
Nate The Surveyor, post: 400369, member: 291 wrote: So, Mr Schrock, who pays you to do your testing? Who pays your bills?
That is something of considerable interest to me.
When it comes to bias, it doesn't get much less biased than Gavin.
Gavin helped me put on a day-and-a-half class a year ago; we had Trimble, Topcon, Leica, and Carlson representatives provide instruction, as well as Gavin, Larry, Jill and Mark, all well versed in all things GNSS, GPS, and RTN. Gavin set up several points and everyone got a chance to occupy them and compare results. It was good to see everyone sharing and comparing. It wasn't a competition; more of a collaboration. Everyone had a good day...
Dougie
Ok, back to the subject. Who wants to go do some side by side testing?
I'm still interested.
Dougie, that's what I'm talking about... except with the rover back in the trees and the base in the clear.
N
Nate, get it set up for the Spring Conference in Arkansas; maybe Mark and Shawn would attend. That way we would at least have a couple of guys there to talk to about this stuff; we might actually learn something useful, and we would get CEs for it.
I guess I will add a line or two to this thread.
Comparing RTK is tough in the real world.
- You only need to test under canopy. They all work great in the open. (And if they don't work great in open sky, they will crater under canopy.)
- You have to test receivers side by side at exactly the same time. Not a day later, and not two minutes later. The same time. And small position changes can make a huge difference. So I test up to three receivers on quick connects under a big tree, right next to a tall building, under big power lines, swapping positions and physically dumping after every shot. (Some don't like to physically dump, but that is what happens when you are working in the woods.)
- You need to be VERY careful not to under-sample the comparison. So I prefer to work with rounds of 200 shots (if I am testing two receivers, I will have 100 shots on each of two mounts). It can take hours, but you need to test them under a series of constellations, not just a cherry-picked moment. Usually after 50 shots, you know which receiver fixes faster. After 200 shots, you can look at the range of elevations and know which has better solutions.
- What are you evaluating?
[INDENT]
- Time-to-fix is important. I will wait up to two minutes for a fix; after that I consider it a no-fix and continue on. The range of results (true range, not SD) of the elevation seems to be a tell-all. But if you are comparing two receivers and one consistently gets good fixes in 15 seconds while the second takes over a minute, then the first receiver is clearly more productive.
- Number of bad fixes: all these GNSS engines will get bad fixes. Typically it occurs with low SV counts. I expect 1 bad fix per 100 occupations under very heavy canopy. Again, a bad fix nearly always results in a stored elevation that is 0.5 m high or low. If I see more than a few bad fixes in 200 shots, then I am concerned. (In the real world, I can mitigate this by store-dump-stake-dump-stake.) There's a rough tally sketch after this list.
- Are you willing to wait 5 to 20 minutes for a solution? On a 200-shot round, that is going to be a significant investment in time.
- Are you testing with matched base-rover pairs, or a rover in a network? Do you care about mixing base and rover brands (you may need to if you have a community base)? If you are testing single-baseline base-rover pairs, are you using 3-meter, 300-meter, 3-km, or 10-km baselines? If you are testing performance in a network, do you need to evaluate the performance in Trimble, Leica, Topcon, and Geo++ networks? If you are testing in a network, do you test at a location with a nearby station (30 km)?
[/INDENT]
- (really 5.) Do you have the latest firmware version for all the receivers under test? In my experience, every subsequent version is twice (or more) as good as the previous version. As an example, each major revision of firmware for the SP80 has been way better than the previous version. It has been like purchasing a new receiver, except all you had to do was a 3-minute firmware update to triple the performance. Same thing for the Trimble GNSS firmware on the modern boards.
- (6.) Are you going to publish the results? If you do, and the manufacturer is thin-skinned, they might sue you. In the case of some of us here, if we say anything bad about a manufacturer, they might drop us as dealers or refuse to purchase future advertising from the media we work for. (I have personally been threatened by one of the major manufacturers with a personal suit.) And fear of reprisal is why I don't write even innocuous articles for TAS anymore. The manufacturers are so controlling that you can hardly say anything without pissing them off. Even Wendell needs to be cautious about allowing these threads to devolve into a troll-fest, because at some point one of the manufacturers might refuse to purchase a $300 banner advertisement.
- (7.) How are you going to compare a $12,500 pair of receivers against a $70,000 pair of receivers? (I personally think you should expect the same performance from the lower-cost receiver as from the most expensive one.) Does the warranty count? Do you factor in support and service? How about the receiver's physical robustness?
- Does picking a GNSS receiver mean you can't mix it with the robot that you want? Is that part of the comparison?
- (8.) Are you okay with spending all this time on the comparison, and then repeating it the next year?
- (9.) Testing under my tree might yield completely different results than under your local canopy. That is why my answer is always "let's send you a pair and you can try them out on your jobs." I really think there is a substantial difference in performance under tree canopy across the USA.
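And since a couple of folks asked how I crunch these rounds, here is the sort of quick tally referenced above. A sketch only: the log layout (receiver,fix_seconds,elev_m) is a stand-in, not any real controller export, and the 0.35 m tolerance is just a net to catch those ~0.5 m blunders:
[CODE]
# Quick tally over a round of test shots: fix rate, time-to-fix,
# elevation range, and suspected bad fixes (stored elevations about
# 0.5 m off the cluster). CSV layout is an invented stand-in.
import csv
import statistics
from collections import defaultdict

NO_FIX = 120.0      # gave up after two minutes
BAD_FIX_TOL = 0.35  # metres off the median flags a ~0.5 m blunder

shots = defaultdict(list)
with open("round.csv") as f:
    for row in csv.DictReader(f):
        shots[row["receiver"]].append(
            (float(row["fix_seconds"]), float(row["elev_m"])))

for rcvr, data in sorted(shots.items()):
    fix_times = [t for t, _ in data if t < NO_FIX]
    elevs = [e for _, e in data]
    med = statistics.median(elevs)
    bad = sum(1 for e in elevs if abs(e - med) > BAD_FIX_TOL)
    print(f"{rcvr}: {len(fix_times)}/{len(data)} fixed, "
          f"median fix {statistics.median(fix_times):.0f} s, "
          f"elev range {max(elevs) - min(elevs):.2f} m, "
          f"suspect bad fixes: {bad}")
[/CODE]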
Item (8.) is kind of telling for me too. I think a reality of our industry is that we have at least three (perhaps six) really good engine manufacturers. And every year or two they leapfrog each other in solution quality. I am always going to want to update, because I can be substantially more productive with the latest gear. That means I want to be in a position to purchase new equipment every four years.
Anyway, as always, this is more than a few lines...
Happy Thanksgiving!
M