That makes it a full day's work. I can't put it at the top of my list, but I'll try to do it in the next month or so. I'll use a good location for the base, probably one I've used before that has multiple long-term static observations. I'll find a location about 1-1/2 miles away where I can have a good GPS location, a moderate location, and one with some trees nearby (probably junipers). I'll set up three tripods, each with a tribrach fixed so it won't be disturbed during the test. I'll set the three points so I can make TS measurements between them (these points probably won't be farther than 200 feet apart; I'll set a temp CP for the backsight about 1/4 mile distant with two 3-min RTK observations). I'll take a 3-min RTK and a short RTK (5-15 epochs) at each point every half hour for the day, regardless of the PDOP. I'll make a couple of 5-1/2 hour or longer static sessions between the base and the good-condition rover point.
I can only use Trimble TBC for the processing, since I don't have StarNet. I will send you all the data so you can do the StarNet adjustment for comparison. I will take pictures of all the points so we can see the locations.
Once the data package is all done, I'll send it to anyone who wants it.
> That makes it a full day's work. I can't put it at the top of my list, but I'll try to do it in the next month or so. I'll use a good location for the base, probably one I've used before that has multiple long-term static observations. I'll find a location about 1-1/2 miles away where I can have a good GPS location, a moderate location, and one with some trees nearby (probably junipers). I'll set up three tripods, each with a tribrach fixed so it won't be disturbed during the test. I'll set the three points so I can make TS measurements between them (these points probably won't be farther than 200 feet apart; I'll set a temp CP for the backsight about 1/4 mile distant with two 3-min RTK observations). I'll take a 3-min RTK and a short RTK (5-15 epochs) at each point every half hour for the day, regardless of the PDOP. I'll make a couple of 5-1/2 hour or longer static sessions between the base and the good-condition rover point.
>
> I can only use Trimble TBC for the processing, since I don't have StarNet. I will send you all the data so you can do the StarNet adjustment for comparison. I will take pictures of all the points so we can see the locations.
>
> Once the data package is all done, I'll send it to anyone who wants it.
Leon, that'd be a very useful test, and I'd be interested in getting the data for analysis. I realize the time commitment involved is substantial, and that you can't guarantee that it'll happen or when, but thanks in advance for the offer!
I have a friend with a Utah TURN GPS account and a network rover. I think I'm too far away and in a valley (the only station in the valley is 30 miles away), so it might not work so well, but I'd like to test that also. I'll see if I can also get that for the day; then a comparison between single-base RTK and network RTK could also be done. If it comes out OK I'll go ahead and get my own network account ($400 per year). It'd be nice not to deal with a base station when I'm in cell phone coverage.
Error introduced by additional distance between base and rover is due to the different weather conditions at each location. The GNSS signals propagate differently through different atmospheric conditions, which shows up as a distance-dependent (ppm) error.
It is entirely plausible that 30 miles in southern Utah would not USUALLY introduce as much error as 30 miles in the Pacific Northwest, for example.
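For a rough sense of scale (my arithmetic, assuming a typical 1 ppm figure, not anything from the posts above), here is what the distance-dependent part alone amounts to at that 30-mile station spacing:

```python
# Back-of-envelope: the ppm (distance-dependent) error contribution alone,
# at the ~30-mile spacing discussed above, assuming a typical 1 ppm spec.
miles = 30
ppm_error_ft = 1e-6 * miles * 5280   # 1 ppm of 30 miles, in feet
print(f"1 ppm over {miles} miles = {ppm_error_ft:.2f} ft")  # about 0.16 ft
```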
The other stations in the network are in other valleys with mountain ranges in between, and I've been advised that this could cause some distortion. This little test might give me some feel for it. Other than the annual fee, I already have the equipment and a phone that would be able to use the network.
I wish I could test JAVAD's new gear but don't know of anyone near enough to me to do it.
LR,
Email me. I know the people who can make that happen.
Shawnbillings@cablelynx.com
I do d, e, and f depending on the size and type of boundary survey it is. My f is when I'm tying into known and mapped positions along the boundary or I have some substantial conventional checks. Unfortunately, much of the boundary work I retrace is nowhere near as good as even a network rover typically delivers under good conditions. 1:5000 is acceptable for rural boundaries under state law here, and having 99.8% of shots under a tenth with RTK is going to give you far better results than the legal limit for conventional. For GPS the legal limit is: "neither axis of the 95 percent confidence level error ellipse for any control point or property corner shall exceed 0.15 feet." Again, even a network rover can easily get you better than that with normal, proper procedures.
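To put those limits side by side, a quick sketch (my own comparison; the ~0.10 ft RTK figure is the "under a tenth" from above):

```python
# Comparing the 1:5000 conventional closure standard, typical RTK
# repeatability (~0.10 ft assumed, per the post above), and the state's
# 0.15 ft 95% error-ellipse limit for GPS, over common rural line lengths.
GPS_LIMIT_FT = 0.15
RTK_TYPICAL_FT = 0.10
for line_ft in (500, 1000, 2640, 5280):
    allowed = line_ft / 5000  # misclosure allowed at 1:5000
    print(f"{line_ft:4d} ft line: 1:5000 allows {allowed:.2f} ft; "
          f"RTK ~{RTK_TYPICAL_FT} ft; GPS ellipse limit {GPS_LIMIT_FT} ft")
```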
> Sir, you are ignoring my position to create a debate.
No, you're focused on the *adjustment* and ignoring the estimation of uncertainties. There may be some least squares survey adjustment software that adjusts survey measurements and GPS vectors but doesn't compute uncertainty estimates, but I'm unaware of it.
The two-points-connected-by-one-GPS-vector problem isn't completely wacky. If you wanted a line for azimuth control, that is a pretty straightforward way to do it, and you'd probably want to know the uncertainty of the azimuth between the two GPS points. That answer is most easily gotten by running the vector through a least squares adjustment program like Star*Net and having the software compute the azimuth uncertainty.
Do you know of some other means of efficiently getting that answer from the GPS vector with its covariances?
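For what it's worth, here's a minimal sketch (mine, not Star*Net's internals) of that propagation for the horizontal case, treating the vector in local east/north components with a 2x2 covariance:

```python
# Propagating a GPS vector's covariance into an azimuth uncertainty:
# the "trivial case" discussed here, one vector, two points, no redundancy.
import numpy as np

def azimuth_sigma(dE, dN, cov_EN):
    """1-sigma azimuth uncertainty (radians) for a local-horizon vector.

    dE, dN : east/north components of the vector between the two points
    cov_EN : 2x2 covariance matrix of (dE, dN)
    """
    d2 = dE**2 + dN**2
    J = np.array([dN / d2, -dE / d2])  # Jacobian of atan2(dE, dN) w.r.t. (dE, dN)
    return float(np.sqrt(J @ cov_EN @ J))

# Example: a ~1000 m line with 5 mm sigma in each component, uncorrelated
cov = np.diag([0.005**2, 0.005**2])
sigma = azimuth_sigma(700.0, 714.0, cov)
print(f"azimuth sigma: {np.degrees(sigma) * 3600:.1f} arc-seconds")  # ~1.0"
```

The familiar rule of thumb drops out of it: the azimuth sigma is roughly the cross-line position sigma divided by the line length.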
f
depends on the situation
> f
> depends on the situation
What would be three typical categories of situations in which the uncertainties of survey markers positioned by RTK would figure in your practice?
Your words...
>Except it's assumed that the reason why least squares adjustments are typically being used in the first place is that there IS redundancy, i.e. is something to adjust.
Vern -
When you use the same point number for a point in Access it gives you several options as to how to deal with it (after warning you on the front end that the point number already exists). You can choose "Store Another" and the software will simply store the observation, or you can choose "Average" and the software will generate a coordinate record that represents the mean.
Every time you observe a point using the existing point number, the software displays the coordinate differences between the current observation and the current coordinate value, so you get immediate feedback as to how well it matches. If you don't want to include it in the average you can choose "Rename" and store it as a different point. If you do include it in the average, the software generates a new coordinate record for the averaged position (overwrites the previous averaged position record) and displays the new standard deviations for the N, E, and H.
Regardless of whether you choose "Average", "Store Another", or "Rename", all of the observations are stored in the database. If you do choose to average in the field, and later want to make changes (for instance, if you need to update your base coordinates) in TBC, you need to delete the averaged coordinate record that was created in the data collector (this is only a coordinate record, it has nothing to do with the observations themselves).
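For illustration, here's roughly what that bookkeeping amounts to (a sketch based on the description above, not Trimble's actual code; the sample coordinates are made up):

```python
# Running mean and per-component standard deviations for repeated
# observations stored under the same point number, as when choosing
# "Average" in Access (assumed behavior, sketched from the description).
import statistics

def average_point(observations):
    """observations: list of (N, E, H) tuples from repeated RTK shots."""
    cols = list(zip(*observations))
    mean = tuple(statistics.fmean(c) for c in cols)
    # Sample standard deviation per component needs at least 2 shots
    sdev = tuple(statistics.stdev(c) for c in cols)
    return mean, sdev

shots = [(5000.012, 2000.004, 100.031),
         (5000.018, 1999.998, 100.046),
         (5000.009, 2000.007, 100.024)]
mean, sdev = average_point(shots)
print("mean N/E/H:", mean)
print("sdev N/E/H:", sdev)
```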
By the way, I'm not saying that any one of the options is more correct than another... it depends on your situation. I like the Average option because it gives me real-time feedback as to how well my individual measurements fit together, even if I don't ultimately hold the position that was averaged in the field.

Clearly there are folks here who think RTK is invalid unless you run it through a least squares program... I respectfully disagree. Obviously you can't take a single measurement and depend on its being within the error estimates that are reported at one sigma on the data collector, and from what I've seen most of the posters here understand that. But I am fairly certain that it's mathematically impossible for RTK to yield two or more wrong answers that agree with each other at the centimeter level from fully independent observations. Bad initializations, which are extremely rare with today's equipment (the stuff we use, anyhow), will typically generate a coordinate that is off at the meter level... not too tough to spot. The whole point of doing an Observed Control Point is that a bad initialization should be detected by the processor as the satellites move over the course of three minutes.
Error from multipath, ionospheric and tropospheric delay, and obstruction tends to be a bit more insidious and is the error that you're trying to reduce as much as possible through multiple redundant observations. Multipath conditions are constantly changing, which goes back to my point of no two wrong answers agreeing at centimeter level.
In the end I guess it's whatever you're comfortable with.
Does your state have accuracy rules for GPS surveys and different rules for conventional surveys? Are there 6 sets of rules (conventional and GPS rules for each of urban, suburban, and rural)?
Interestingly, I've heard several people, both where I work and through the grapevine, complain that Trimble R10s "don't work" as well as their R8s. This has to do with the fact that the new processing engine produces more accurate one sigma error estimates than the old fixed/float algorithm, which tends to report overly optimistic error estimates whenever the solution is fixed.
> Your words...
>
> >Except it's assumed that the reason why least squares adjustments are typically being used in the first place is that there IS redundancy, i.e. is something to adjust.
There are, of course, cases where the error analysis that is bundled into the software can be useful without redundancy. They are trivial cases, but they shouldn't be overlooked. I wouldn't have thought that this was open to question.
> Interestingly, I've heard several people, both where I work and through the grapevine, complain that Trimble R10s "don't work" as well as their R8s. This has to do with the fact that the new processing engine produces more accurate one sigma error estimates than the old fixed/float algorithm, which tends to report overly optimistic error estimates whenever the solution is fixed.
Yes, it isn't difficult to imagine that marketing reasons (read: "sales demo") favored keeping the uncertainties estimated by the RTK controller optimistic.
Simple Question for RTK Users/Enthusers - Diagram misleading
Although my comment may have no bearing on the point you are making with your two points, and although I have very high regard for Ghilani, the diagram showing the GPS set-up and 3-D relationships is very misleading and downright wrong when referring to equations 13.28, in which the X/Y/Z values are Earth-centered, Earth-fixed (ECEF) coordinates. The only place that diagram is applicable is at the north pole or the south pole, because X and Y lie in the plane of the equator and Z is perpendicular to it.
The diagram should show delta east, delta north, and delta up. But it is true that the 3-D distance from base to rover is the same in either coordinate system.
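To make that concrete, here's a short sketch of the standard rotation from ECEF baseline components (the delta X/Y/Z of equations 13.28) into local delta east/north/up at the base's latitude and longitude; the numbers are made up:

```python
# Rotating an ECEF baseline into local ENU components. The 3-D baseline
# length is invariant under the rotation, which is the closing point above.
import numpy as np

def ecef_delta_to_enu(dX, dY, dZ, lat_deg, lon_deg):
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    R = np.array([
        [-np.sin(lon),              np.cos(lon),              0.0        ],
        [-np.sin(lat)*np.cos(lon), -np.sin(lat)*np.sin(lon),  np.cos(lat)],
        [ np.cos(lat)*np.cos(lon),  np.cos(lat)*np.sin(lon),  np.sin(lat)],
    ])
    return R @ np.array([dX, dY, dZ])

enu = ecef_delta_to_enu(100.0, -50.0, 80.0, 38.5, -112.0)
print("dE, dN, dU:", enu)
print("3-D length check:", np.linalg.norm(enu),
      np.linalg.norm([100.0, -50.0, 80.0]))  # identical either way
```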
Simple Answer
Redundancy.
To Jim-
Just as a side comment: if you're doing single-base RTK you are dealing with typical error specs of 1 cm + 1 ppm horizontal and 2 cm + 1 ppm vertical at the 1-sigma confidence level. For your distance of around 10 km, this means you might expect a precision of roughly 0.07' horizontally and 0.10' vertically at 68% confidence; at 95% (2 sigma) the estimated error would be around twice that. As you know, we typically do better than that; I'm just citing what the GNSS manufacturers state in their specs. Lee and others have given good recommendations in regard to redundancy, etc.
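Working those spec numbers through (just a sketch of the arithmetic above):

```python
# 1-sigma single-base RTK error from the quoted manufacturer specs:
# constant part plus ppm * baseline, converted to feet.
M_TO_FT = 1 / 0.3048

def rtk_sigma_ft(const_cm, ppm, baseline_km):
    return (const_cm / 100 + ppm * 1e-6 * baseline_km * 1000) * M_TO_FT

h = rtk_sigma_ft(1, 1, 10)  # ~0.066 ft horizontal at 68%
v = rtk_sigma_ft(2, 1, 10)  # ~0.098 ft vertical at 68%
print(f"1 sigma: {h:.3f} ft horiz, {v:.3f} ft vert; "
      f"~95% (2 sigma): {2*h:.3f} ft, {2*v:.3f} ft")
```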
Best,
Bill