Who'd like to go compare a R10 to a Javad?
stlsurveyor replied 6 years, 4 months ago 44 Members · 162 Replies
Mark Silver, post: 452802, member: 1087 wrote: Driving out with our robot is not out of my comfort zone (my parents used to live just up the road), it is only 21 hours. Would you be willing to commit to a longer robotic test with a maximum of 4 heads? Can you get NMEA GGA and GST sentences out of the LS at 2 Hz or faster?
I hope to see you there Mark!
Bring your spinning thing!

What if….

You were to tie in a point or ten under canopy with a total station to conduct your test on?

That will be the method I plan on.
Mark Silver, post: 452837, member: 1087 wrote: Btw, does every shot take 3 minutes, or just the ones under canopy?
In canopy we recommend operators use a profile that runs for three minutes. It may actually last longer, but at a minimum three minutes. There are several fixes gathered at the beginning of the shot and then a final fix at the end of the shot. If the LS is working in difficult canopy, it may actually require more than three minutes because of the number of fixes the profile requires and the time needed to acquire these numerous fixes.
In the open, or under light canopy, the operator is free to choose a profile that is much shorter with fewer fixes to acquire.
I will say that even in the open, there is slightly improved precision from a longer occupation. It’s been a while since I performed this test, but I once found that a four-minute observation would reduce random errors in position by about 15% over a 30-second observation. So there are some benefits to a longer observation, even in the open. But the primary purpose of the longer observation when working in canopy is to give the satellites time to change (even if slightly) so that the effects of multipath vary, making bad fixes easier to identify.
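To see why the gain from a longer occupation is modest rather than dramatic, here is a toy simulation in Python. Everything in it is an assumption for illustration: the AR(1) model and the rho/sigma values are invented stand-ins for the strong time-correlation of multipath and atmospheric error, which is why averaging eight times as many epochs buys far less than uncorrelated noise would suggest.

```python
import math
import random
import statistics

def occupation_std(epochs, trials=2000, rho=0.98, sigma=0.02, seed=1):
    """Std dev of the averaged horizontal error over `epochs` 1 Hz epochs.
    The error is AR(1)-correlated noise, a crude stand-in for multipath
    and atmospheric error; rho and sigma are invented for illustration."""
    rng = random.Random(seed)
    means = []
    for _ in range(trials):
        e = rng.gauss(0, sigma)           # start at the stationary variance
        total = 0.0
        for _ in range(epochs):
            e = rho * e + math.sqrt(1 - rho ** 2) * rng.gauss(0, sigma)
            total += e
        means.append(total / epochs)
    return statistics.pstdev(means)

short, long_ = occupation_std(30), occupation_std(240)  # 30 s vs 4 min
print(f"30 s: {short:.4f} m  240 s: {long_:.4f} m  "
      f"improvement: {100 * (1 - long_ / short):.0f}%")
```

With uncorrelated noise the 4-minute mean would be sqrt(8) (about 65%) tighter; correlated noise shrinks that benefit toward the modest figure reported above.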
And then there is a guy named John, who prefers to collect 5 points per monument, each with 10 re-initializations per point, and then average. Sometimes this takes less than 180 seconds in the woods, sometimes 20 minutes. In the open, I get through the 50 re-initializations and 225 additional non-reset epochs in around 45 seconds.
As long as your strategy is sound, there can be more than one way to do it.
Here is a test I would like to see.
Go randomly put out 20 or so points (mag nails or whatever) along a trail through the woods with lots of canopy, spaced maybe 50′ apart.
Time each person to see how long it takes them to survey all 20 points to a confidence level they feel comfortable with using their instrument.
When all are finished shoot them in with a robot.
Compare results on time it took and deltas from true position.
Drilldo, post: 452949, member: 8604 wrote: Go randomly put out 20 or so points (mag nails or whatever) along a trail through the woods with lots of canopy, spaced maybe 50′ apart.
Time each person to see how long it takes them to survey all 20 points to a confidence level they feel comfortable with using their instrument.
[Snark On]
How will you account for differences in constellation? One receiver starts the round at a time of high SV count (say 22 SVs), another receiver starts at a time of lower SV count (say 14 SVs). The second receiver will lose, big time. You won’t be able to randomize the occupation order because you are willing to wait 15 minutes (or longer) for some of the receivers to fix. The slow receivers will be bottlenecks on every point they occupy.
One round will not be sufficient because we have already found that some receivers are waiting for significant changes in constellation as part of the algorithm.
I think you should pick one spot in heavy canopy. Set up a bar with 4 quick connects reasonably spaced. Start each receiver on a post. Wait a predetermined time, say 5 minutes, but pick the time in advance and hold it the same for every receiver. Store a 15 second average value at the end of the occupation. Rotate all receivers. Repeat 300 times. Look at the range and standard deviation of X, Y, H, dXY and TTF of each receiver on each peg (75 occupations on each peg).
Since you don’t care about TTF, allow every receiver to cook just as long as you would the slowest receiver. Assuming you never make a procedural mistake (doubtful if you have beer) you should be able to complete a test in less than 13 hours. Repeat the test 4 times to develop confidence in the procedure.
When one receiver is slow to fix, the other receivers under test are also slow. And the times when fixing is slow or receivers fail to fix, come and go. The bad periods usually do not last for more than 20 minutes.
The thought of allowing a receiver to cook for ‘as long as it needs’ to proffer a result is mind-boggling. I suppose some would cap the wait at 24 hours? I feel that 3 minutes is excessive for testing. Just code the round as a ‘NO FIX’ and move on to the next round; it will be interesting to see if another receiver will fix at this time.
What if you run a test on a single peg with a 180-second maximum wait and the results look like this:
[INDENT]-ROUND 1-
A NO FIX
B TTF 35 seconds, dXY 0.012 cm
-ROUND 2-
A NO FIX
B TTF 27 seconds, dXY 0.011 cm
-ROUND 3-
A NO FIX
B TTF 42 seconds, dXY 0.004 cm
-ROUND 4-
A NO FIX
B TTF 17 seconds, dXY 0.014 cm
-ROUND 5-
A TTF 175 seconds, dXY 0.008 cm
B TTF 14 seconds, dXY 0.008 cm
-ROUND 6-
A NO FIX
B TTF 25 seconds, dXY 0.011 cm[/INDENT]
Some of us (like me) will look at the results and say “I would prefer to spend my day with B, it is reasonably fast and reasonably reliable”.
Others will look at the results and say “Well, clearly if we allowed A 30 minutes for each round, then A would have eventually fixed AND based on its performance (of the one solution), A is MUCH MORE accurate than B. I would like to spend my day with A.”
[Snark Off]
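For what it’s worth, the kind of summary this example calls for is easy to script once the rounds are logged. A minimal sketch in Python; the data is copied from the hypothetical A/B table above, with None marking a NO FIX:

```python
import statistics

# Rounds from the example table (TTF in seconds, dXY in cm; None = NO FIX).
rounds = {
    "A": [None, None, None, None, (175, 0.008), None],
    "B": [(35, 0.012), (27, 0.011), (42, 0.004),
          (17, 0.014), (14, 0.008), (25, 0.011)],
}

for rx, results in rounds.items():
    fixes = [r for r in results if r is not None]
    line = f"{rx}: fix rate {len(fixes) / len(results):.0%}"
    if fixes:
        line += (f", mean TTF {statistics.mean(t for t, _ in fixes):.0f} s"
                 f", mean dXY {statistics.mean(d for _, d in fixes):.3f} cm")
    print(line)
```

Run on the table above, this reports receiver A fixing in 1 of 6 rounds against B’s 6 of 6, which is the reliability gap the snark is pointing at.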
Mark Silver, post: 452970, member: 1087 wrote: [full post quoted above, snipped]
While I agree with everything you are saying, I am just trying to get at a real-world productivity test. If you only have 2 receivers you are testing and 20 points that are only 50′ apart, and you turn them loose at the same time, I would think they would have reasonably similar constellations, and they could jump around as much as needed between points. A slow receiver would only occupy one of the 20 points at a time.
Maybe you do it two ways. Once where you take as long as you need and once where you get 5 minutes max per point and if you can’t fix it is noted as a failure.
I would just like to see how fast and accurate the two are in a production environment.
Drilldo, post: 452978, member: 8604 wrote: I would just like to see how fast and accurate the two are in a production environment.
Exactly.
In the open, this isn’t difficult. All the tests that Mark recommends are fine for testing two receivers in the open. These days, I don’t know that there is a lot of difference in receiver performance in the open. The real test for most surveyors is in those unfavorable places many of us find ourselves in most of the time.
No credible surveyor is going to work from a single fix under canopy. There is some systematic process that is used in the field to reach some confidence that the point is good. So what is the recommended procedure for collecting a point in canopy for the receiver being tested? This procedure should be part of any test performed under canopy. The manufacturer’s recommended procedure for point collection in canopy should be built into the test in some way as a single fix simply doesn’t tell the whole story.
I’ve never used an R10. I understand from Trimble’s literature that Trimble does not display fixed and float anymore in the interface with the R10. For those familiar with the operation, is point collection complete once the indicated accuracy of the solution reaches the user’s specification? Whatever the metric is, the R10 should be given the opportunity to reach it in the test – basically allow for whatever Trimble’s recommended procedure is for collecting a point in canopy.
For the Triumph-LS, this is fairly simple. Set the action profile to Boundary. Set the receiver on a tripod in some environment to be tested. Set the base nearby in the open. Set the LS settings to Accept Automatically and Auto-Restart to Always. Press Start and walk away. The receiver will then begin collecting a point, with automatic resets of the engines as part of the process at the beginning and end of the shot. Once the point collection is complete, it will automatically store the point and begin collecting a new point, repeating the process ad infinitum, until the receiver runs out of power, which should be no less than 17 hours later if begun from a full charge. I’ve performed several tests just like this. Any user can do the same, as these settings are in the software. After some period of time, export the points into a spreadsheet and compare them. Actually, the LS has evaluation tools in it to compare numerous shots on the same point for extreme spread at various percentages (i.e. 60 percent of the shots collected have an extreme spread of x horizontal and y vertical from the mean).
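That spread evaluation can be approximated offline once the shots are in a spreadsheet. A sketch, with invented shot coordinates; the actual J-Field computation may differ in detail:

```python
import math

def extreme_spread(points, pct=0.60):
    """Horizontal radius from the mean position that contains `pct` of
    the collected shots. A rough stand-in for the LS evaluation tool."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    dists = sorted(math.hypot(x - mx, y - my) for x, y in points)
    return dists[max(0, math.ceil(pct * n) - 1)]

# Hypothetical repeat shots on one monument (metres).
shots = [(10.002, 5.001), (9.998, 4.999), (10.001, 5.003),
         (10.000, 4.997), (9.997, 5.002), (10.004, 5.000),
         (10.001, 4.998), (9.999, 5.001), (10.003, 5.004), (9.996, 4.996)]
print(f"60% of shots fall within {extreme_spread(shots):.4f} m of the mean")
```

The same function with pct=0.95 or 1.0 gives the wider spreads at higher confidence, which is how the percentages in the quoted description would be read.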
If the R10 is capable of doing this, I would set two points in the woods, close to each other. Set up receiver A on point 1 and receiver B on point 2. Start collecting, using the manufacturer’s recommended procedure. Do this for some period of time (2 hours or more). Then switch the receivers. B is on 1 and A is on 2. Start collecting. Limit this to the exact same time as the first occupation. Then switch back to A on 1 and B on 2. Collect for the same amount of time. Switch again, B on 1, A on 2. Repeat as many times as is practical, but at least give each receiver two occupations on each point (more is better). Make sure weather conditions are the same. Particularly moisture. Avoid periodic rain or dew as the moisture on the canopy will change the difficulty for the receiver considerably.
At the conclusion of the test, see how many points each receiver was able to collect on each point. Did one collect more than the other? Compare the individually collected points to the average of all points collected on point 1. Are there any gross outliers (results of a bad fix)? Finally, what is the precision of the points collected (statistical RMS of the points collected on each point)? The result of these three questions may point to a better receiver.
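The outlier and precision checks described above take only a few lines to compute once the shots are exported. A sketch with invented coordinates; the 0.10 m outlier tolerance is an arbitrary example threshold, not any manufacturer’s specification:

```python
import math

def rms_and_outliers(points, tol=0.10):
    """Horizontal RMS about the mean of the shots, plus the indices of
    any shots farther than `tol` metres from the mean, flagged as gross
    outliers (the likely signature of a bad fix)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    d = [math.hypot(x - mx, y - my) for x, y in points]
    rms = math.sqrt(sum(di * di for di in d) / n)
    outliers = [i for i, di in enumerate(d) if di > tol]
    return rms, outliers

# Hypothetical shots on point 1 (metres, local): nine good, one bad fix.
shots = [(0.00, 0.00), (0.01, -0.01), (-0.02, 0.01), (0.01, 0.02),
         (-0.01, 0.00), (0.02, 0.01), (0.00, -0.02), (-0.01, 0.01),
         (0.01, 0.00), (0.40, 0.10)]
rms, bad = rms_and_outliers(shots)
print(f"RMS {rms:.3f} m, outlier indices: {bad}")
```

A single bad fix inflates the RMS badly, which is why flagging and investigating outliers before comparing precision between receivers matters.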
The more this test can be repeated, the more reliable the results of the test will be. More than one receiver can be tested at a time, but each receiver still needs to occupy each point a minimum of 2 times, preferably much more. For 2 receivers the minimum time needed will be eight hours. For 3 receivers the minimum time needed will be 12 hours. Repeating the test two days in a row would be very good. If this is done, switch which receiver is A and which is B, so that yesterday’s receiver B is occupying the same points at the same time of day that receiver A was on the previous day. The constellation should be similar from one day to the next.
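The swap pattern described above can be laid out in advance so nobody loses track in the field. A trivial sketch of the two-receiver, two-point rotation (four 2-hour sessions, matching the eight-hour minimum; receiver and point names are generic):

```python
# A/B rotation from the description above: receivers swapped each session
# so each receiver occupies each point twice over four sessions.
receivers = ("A", "B")
schedule = []
for s in range(4):
    a, b = receivers if s % 2 == 0 else receivers[::-1]
    schedule.append((s + 1, a, b))
    print(f"Session {s + 1}: point 1 -> {a}, point 2 -> {b}")
```

For three or more receivers the same idea becomes a round-robin over the points, with each receiver still getting at least two occupations per point.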
This of course all hinges on the manufacturer having some fully automated approach to collecting a point in canopy.
I do not know if the R10 has an algorithm to sit, in canopy, and cook, unattended. The last discussion I had with a Trimble rep was maybe 2-3 yrs ago. He said RTK “is not for canopy”.
It was my plan to discuss the methodology with mr stlSurveyor, and since he has experience with his gear, and I don’t, to let him pick his methodology.
The overall goal of a surveyor is to get reliable data under all circumstances, and as many shots as possible.
It seems that the overall design of the R10 and its corresponding data collector may not be targeted towards the acquisition of data in canopy. I may be wrong about this.
I know that with the boundary profile above, that the Javad IS designed to work in canopy.
So, the overall design of these units *may* not be the same.
I’m hearing rumors from some, that Trimble is “best at all things”. I want to see for myself.
Surveyor dollars are worth 2x of the other kind….
Anyway, we want a simple, customer-point-of-view comparison (which is essentially: who gets the most data, tightest tolerances, fastest time).
For Mark Silver, and anybody else, who wants to participate, why don’t you call up Shawn Billings, and go do your ideas.
I think that’s wonderful. No snark.
Nate
I agree an academic approach would have a lot of merit. But is there no merit in “this receiver got the shot, this one didn’t”? I’m assuming the setups are similar, and that they’re being wielded by people who know how to use their equipment.
Aren’t there enough SVs visible that if one receiver got 75% more locations in ONE field effort it would indicate superior performance?
Also, I’m not expecting that. Our R10s are working like champs these days. I also have a GS16 setup in another office and no complaints there, either. Picking a winner between those two would be quite a toss-up.
A total station traverse through the ground points is a good idea, but not entirely necessary, in my opinion. Some will argue against this view, which would probably make the total station survey advisable, as the test will be viewed as more credible. In my opinion, the redundancy of the results themselves will provide enough data for a suitable control comparison.
I would recommend treating each rover as a part of a system, using the manufacturer’s base to provide the corrections. An RTN may be used, but I don’t believe it would be the most compelling test. Some RTNs will not provide the multi-constellation data that some receivers would otherwise use in base-rover, knee-capping that rover unnecessarily. Provided the sky view at the base is very good, the bases could be set up within a few feet of each other and should provide corrections of similar quality to their respective rovers.
gschrock, post: 453098, member: 556 wrote: Oh, and side note; yes nearly every rover I’ve tried accommodates various types of ‘cooking’ for extended periods, the question is at what point does it become diminishing returns.
For most surveyors working in harsh environments with RTK, the question is “What is the earliest point during collection at which I can have confidence in the result of the coordinates being given to me by the controller?” The question is not, “Have I been here too long?”, the question is “Have I been here long enough?” I don’t have enough recent experience with other receiver/controllers to know how they address this question or if they address the question at all. Most of the receivers I have used leave this question to the operator to answer. Javad has been working to implement automated processes that let the user know when the observation should be successful. I think testing the receivers in canopy needs to address capability (can it produce a fixed solution here?) and reliability (is it a good fix?).
Interesting thread, whatever test method you guys choose, it will be interesting to see the results. Even if you come back with a general “yep, both receivers performed about the same or not significantly different.”
Drilldo, post: 452949, member: 8604 wrote: Here is a test I would like to see.
Go randomly put out 20 or so points (mag nails or whatever) along a trail through the woods with lots of canopy, spaced maybe 50′ apart.
Time each person to see how long it takes them to survey all 20 points to a confidence level they feel comfortable with using their instrument.
When all are finished shoot them in with a robot.
Compare results on time it took and deltas from true position.
I think this method might be a good balance of effort and results without getting too intense or complicated. Maybe if you leapfrog the receivers down the trail instead of doing one receiver at a time, it can mitigate some of the changes in constellation. I will be interested to read whatever you guys come up with though. Using a total station as a check might not be necessary for a scientist, but the average joe would probably like to see that data also.
gschrock, post: 453122, member: 556 wrote: I am puzzled why folks would be resistant to breaking out the total station and setting up test points for GNSS testing…. surveyors are supposed to independently verify things…
It invites another error source and more complexity. How was the traverse done? What equipment was used? Was the equipment in good order? What control was used for the traverse? etc. I’m not opposed to a good control group that a total station can surely bring. I’m hesitant to introduce the overhead of additional complexity for very little additional benefit. But, for the sake of credibility of the results, I think a traverse would be helpful.
gschrock, post: 453127, member: 556 wrote: And I am confident that good surveyors can execute a proper closed traverse…
But only if they use the brand of equipment I endorse…everything else is crap 😉
Not really, James.
With independent, fixed (so old school!) observations over a time period, you can know. In fact, you can do this with an old set of L1 gear, if you take the time.
That still would not provide the same conditions. Multipath can be very specific. Move over a foot and conditions are different.
The only way you can truly test the same conditions at the same time is to use an external GNSS antenna split to two receivers. You won’t be able to draw any conclusions about the quality of different brands’ GNSS antennas this way but you get a very good comparison of their processing engines. Unfortunately Trimble R10s do not have an input for an external antenna.
gschrock, post: 453185, member: 556 wrote: what is the alternative that would provide closer to the same conditions at the same time?
The crossbar would seem to be suitable for wide-open areas, but in obstructed or multipath sites Mark Silver’s vertical rotisserie would seem to be a better solution. Both antennas would occupy the same spot, the only variable being the slight change in constellation between short sessions. Not perfect, but with enough repetitions I would think the advantages/disadvantages would average out. It gets more complicated when comparing more than two receivers, though.
gschrock, post: 453185, member: 556 wrote: But much closer to same-same conditions than doing them at different times (constellation has changed), or 50 feet apart as some have suggested.
No single series should be used. The multipath profile changes as the satellites cross the sky and deflect off the static (or waving in the breeze) multipath hazards. … at exactly the same time and as close together as possible is the closest you’d get to same-same. The crossbar should be rotated, and the order changed in repeat tests. If this is such a bad idea, why have the academic tests use a crossbar?
But, open minded on this: what is the alternative that would provide closer to the same conditions at the same time?
I don’t see any trees in your picture.