Thanks, that makes sense. I wonder if there is a situation where storing as positions would be more desirable. It seems that storing as vectors results in the same initial solution but gives you more options should you decide to make use of them.
Thanks!
QC on Network RTK
>
> 0.10' in 100! If someone is not getting better than that with RTK/RTN then they are doing something wrong.
Uh, +/-0.10 ft. is the maximum 2-sigma uncertainty of relative position. That means a standard error of +/-0.05 ft. in the relative difference of positions, right?
If each position is independent of the other, that means that each should have a standard error of no more than 1cm total in horizontal components, right?
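A minimal sketch of that arithmetic in Python, assuming the two positions are independent with equal, normally distributed horizontal errors:

```python
import math

FT_TO_CM = 30.48

# Spec: 0.10 ft maximum 2-sigma uncertainty of the relative position
relative_sigma_ft = 0.10 / 2                  # standard error of the difference

# For independent positions with equal standard error s, the difference
# has standard error s * sqrt(2), so each position must satisfy:
per_position_sigma_ft = relative_sigma_ft / math.sqrt(2)

print(f"per-position sigma: {per_position_sigma_ft:.4f} ft "
      f"= {per_position_sigma_ft * FT_TO_CM:.2f} cm")   # about 1 cm
```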
I have been thinking about this too. The problem with storing vectors on a long-term project: you start with an initial site calibration and store vectors for the next few months, say on a road stake-out job. Then the administrators of the VRS network decide it is time to update their datums due to plate shifts or just new data becoming available, so they shift their VRS base coordinates about 30 or 40 mm. Now you go and redo your site calibration because of this, and all your previously stored points will shift..... Not good if they are your only proof of what you have done.
So what you have to do is start a new job file with the new calibration and that is a pain as all your old data is on the old job file.
I think that if I had stored the data as points instead of vectors, the new calibration would not have affected my old points.
In the 4 years that I have used VRS, our network has had 2 shifts of about 50 mm in x and y each time. This messes up your stored site calibrations, as you have to shift them also. I have about 50 site calibrations stored for different sites.
This may only be an issue here, as our cadastral datum from about the year 2000 is about 400 mm off the current datum, mainly due to plate shifts I believe.
RTN is a brilliant tool. I once got a fix for a vineyard topo when the nearest base station was 125 km away, and it was a good enough fix for a vineyard topo.
QC on Network RTK
> Yes. 0.01524003m = .05' USSF.
> Yes. And if someone is not getting better than that in horiz when they inverse between two marks 100' apart then they and their RTK have been teleported back to the 1990's.
So, it sounds as if you don't disagree with my analysis above which explains why RTK is such a poor choice for many boundary surveying applications. The error analysis is fairly fundamental stuff, I'd think.
QC on Network RTK
> So one way of looking at this is, and I mean no disrespect, you do not know what you are talking about without some first hand recent experience with the tools you question.
Not really. The above data shows perfectly well that that method would likely not meet many accuracy specs for *relative* positioning, such as the ALTA spec and the Texas minimum professional standards.
The folks who want to use RTK for boundary surveying ought to lobby for looser standards if they want to use it in ways that won't meet present specifications.
I may be out of my element here. I have little to no experience with RTK/RTN. But from my understanding, local accuracy would be the precision between adjoining points, whereas network accuracy would be within the larger network.
If I understand the Trimble network software correctly, the system calculates basically a "best fit" location for a virtual reference station between more than one of the network antennae. That might give a pretty darn good network and/or global precision on the virtual station. On a 'local' precision I would expect the relative positions between local points to be very precise in one session. However, if you come back and do more work on another day, you would be getting a new "virtual" station under a different constellation. It could, conceivably, use different network stations for the calculation on the different day. So, my point is that if you are not using the same "coordinate" values from the previous day, you could have a lower 'network' precision on your virtual station relative to the virtual station from the different day. That would imply to me that you are no more accurate than your network accuracy since you are using a different "virtual" point on different days.
If I understand the "Leica" software, it is worse, because instead of a "best fit" virtual station it uses the vector from the closest network station only, introducing that ppm precision from "virtually" far away.
If you were doing RTK from a physical monument and holding the coordinates on the physical monument from day to day, it makes sense to me that you would have a more precise "local" precision even if your "network" precision is less precise than through a VRN. In boundary surveying, it seems to me, the greater concern is the local precision. That is to say, I want a higher precision between points 100' apart, vs. having a global precision that is higher but a local precision that is no better than the global precision.
Okay, I reread that. I need to work on having a more succinct explanation. I might work on that and try again a little later.
Tom
QC on Network RTK
> I disagree, did I not make that clear?
No, you didn't. If RTK produces positions with standard errors of +/-1cm in horizontal components, that means you have basically zero slack in the error budget if the specification to be met is that no error in the relative position of any two marks positioned will be greater than 0.10 ft. = 3cm. Having zero slack is not a great place to be.
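A quick sketch of that error budget (assuming independent positions, each with a +/-1 cm horizontal standard error):

```python
import math

sigma_cm = 1.0                                       # assumed per-position standard error
sigma_relative_cm = math.hypot(sigma_cm, sigma_cm)   # sqrt(1^2 + 1^2), about 1.41 cm
two_sigma_cm = 2 * sigma_relative_cm                 # roughly the 95% bound
allowance_cm = 3.0                                   # the 0.10 ft spec

print(f"2-sigma relative error: {two_sigma_cm:.2f} cm, "
      f"slack: {allowance_cm - two_sigma_cm:.2f} cm")
```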
QC on Network RTK
> Oh for crying out loud.
>
> Go out there and measure and look at the resultant inverse.
>
> I have, and regularly do on an NGS baseline over 1000' long and have comfortably been under 1cm H over that distance, repeated over years.
I trust you aren't suggesting that two RTN RTK positions taken over a short interval of time are as independent as two positions taken on different days under different constellations. The former would be a very optimistic test of performance. Now, if you want to position one baseline monument one day and the other the next day under a different constellation, that would be closer to surveying reality.
The most direct analysis is surely to simply derive the statistics for positioning from many repeat occupations on different days and constellations (as presumably the manufacturers do in claiming an RMS error of, say, +/-1cm as the spec Liz quoted does). With that value you can compute the expected uncertainties at different confidence levels between two independently derived positions using similar methods.
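A sketch of that kind of pooled analysis; the function and the occupation data below are hypothetical, not measurements from any real network:

```python
import math

def pooled_rms_horizontal(occupations):
    """Pooled RMS horizontal deviation from each mark's mean position.

    occupations: dict mapping mark id -> list of (easting, northing)
    positions taken on different days under different constellations.
    (Simplification: divides by the observation count, not degrees of freedom.)
    """
    total_sq, count = 0.0, 0
    for positions in occupations.values():
        mean_e = sum(e for e, _ in positions) / len(positions)
        mean_n = sum(n for _, n in positions) / len(positions)
        for e, n in positions:
            total_sq += (e - mean_e) ** 2 + (n - mean_n) ** 2
            count += 1
    return math.sqrt(total_sq / count)

# hypothetical repeat occupations, in meters
obs = {
    "mark_a": [(100.002, 200.001), (99.993, 199.996), (100.006, 200.004)],
    "mark_b": [(350.498, 512.004), (350.507, 511.993)],
}
print(f"pooled RMS: {pooled_rms_horizontal(obs) * 100:.2f} cm")
```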
I see there has been much discussion about this and as always we enjoy seeing both sides of things.
Speaking from personal experience we have tested the RTN in our area against several different independent static networks and have found the RTN to be less than 0.05' horizontally on almost every check we have done. That's close enough for me for almost everything we do. I'm sold and have been for 2 years now.
QC on Network RTK
> These are comparisons over days, weeks, months, different days, different times of day. I mentioned that earlier as well.
Well, what I read above was:
>"I have, and regularly do on an NGS baseline over 1000' long and have comfortably been under 1cm H over that distance, repeated over years."
It sounded as if you were taking positions on the pair of baseline monuments at some airport or otherwise unobstructed location in rapid succession and computing the distance between the monuments from them. Have you computed the apparent standard errors of each position of a baseline monument from the whole series?
Probably the most realistic test method would be to compute the pooled RMS error from repeats on real world control points. Pairs of positions on different days would be somewhat statistically inefficient, but would be at least realistic.
QC on Network RTK
> Ok, go design the perfect test. Execute it. Run your analysis, then get back to us.
>
> This backseat driving is getting really tiring.
Hey, you're the fellow who is making the claims for the performance of your system. You say you've tested it for years now, so is it that unreasonable to think that you'd be able to quote an RMS value of positional uncertainty for a position delivered by the system that is supported by some test data representative of how the system might be realistically used?
The target is to be able to obtain positions that are sufficiently accurate that the relative uncertainties of those positions will be less than 3cm at 95% confidence level. This is what the ALTA spec requires and Texas standards of practice do, ignoring the part of the allowable uncertainty that is distance dependent.
NTRIP mountpoints
great thread, and timely for me.
new question - when entering a new CORS station to be used in RTN, how exactly does adding the mountpoint along with the IP address and port # affect the results? would we see different raw data without the mountpoint? thanks.
QC on Network RTK
> Or was that countering claims by someone who does not use the tool that it does not do the job?
Well, the analysis of errors isn't a hands-on operation, is it? If you know the standard error of a measurement produced by some unknown process with random, normally distributed errors, you can still draw all sorts of valid inferences about arithmetic combinations of measurements produced by that process. This is a fairly fundamental point. It's why manufacturers specify the accuracy of their equipment in the first place.
You surely aren't suggesting that someone should just buy the equipment and then decide whether it meets some accuracy specification or not, right?
QC on Network RTK
> The target is to be able to obtain positions that are sufficiently accurate that the relative uncertainties of those positions will be less than 3cm at 95% confidence level. This is what the ALTA spec requires ...
But I don't think the ALTA spec requires that you produce an LS report mathematically proving that your positions are within the spec. If you follow a procedure that may reasonably be expected to produce positions that meet spec, that is sufficient. There are ways other than a Star*Net report to assure QC. Redundant RTN positions are one such way.
There are limits to what you should do with an RTN. It isn't the answer for everything. I would use it where I might otherwise use OPUS. If I were contracted to put project control on a state plane so it would fit with some GIS database it is the cat's meow (providing some redundancy). But if I were contracted to set a control point and state its precision I wouldn't use it for that.
For tying a group of adjacent monuments I'd use it to set a pair of control points, do the ties with the TS, and allow the RTN point positions to float a little in my adjustment. For transportation project control points 500-1000 feet apart along a highway I will RTN maybe every 3-5 points, run local vectors or traverse between each monument, and float the RTN positions in my LS adjustments. The final coordinates would probably be within a few hundredths of the RTN position.
NTRIP mountpoints
Liz, when storing vectors you are getting a single-baseline solution; storing positions allows use of a virtual base near you, and thus a shorter baseline. The only reason I see to store vectors and not just the positions is the ability to use the vectors for post processing. I have done this many times when I want to combine total station and level data with RTN data. I use Star*Net to analyze and adjust the data. We validate the positional accuracy by checking into passive monuments near the project site periodically.
Joe, I think the mountpoint determines the type of correction you are receiving; one IP might have several mountpoints. This would affect the positional accuracy, for example if you were using a DGPS mountpoint.
NTRIP mountpoints
Mark -
thanks, I was thinking it had something to do with the format in which the data was being transmitted.
QC on Network RTK
> But I don't think the ALTA spec requires that you produce a LS report mathematically proving that your positions are within the spec. If you follow a procedure that may be reasonably expected to produce positions that meet spec that is sufficient. There are ways other than a Star*Net report to assure QC. Redundant RTN positions is such a way.
Well here are the literal provisions of the current ALTA/ACSM specification :
E. Measurement Standards - The following measurement standards address Relative Positional Precision for the monuments or witnesses marking the corners of the surveyed property.
i. "Relative Positional Precision" means the length of the semi-major axis, expressed in feet or meters, of the error ellipse representing the uncertainty due to random errors in measurements in the location of the monument, or witness, marking any corner of the surveyed property relative to the monument, or witness, marking any corner of the surveyed property at the 95 percent confidence level (two standard deviations). Relative Positional Precision is estimated by the results of a correctly weighted least squares adjustment of the survey.
v. The maximum allowable Relative Positional Precision for an ALTA/ACSM Land Title Survey is 2cm (0.07 feet) plus 50 parts per million (based on the direct distance between the two corners being tested).
Note that the 3cm I quoted is actually the Texas minimum technical standard exclusive of distance-dependent allowable error component. The ALTA spec is 2cm.
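For reference, the allowance works out like this at a given distance (a minimal sketch; the function name is mine):

```python
def alta_allowance_cm(distance_m):
    """Maximum allowable Relative Positional Precision under the
    ALTA/ACSM spec quoted above: 2 cm plus 50 ppm of the direct
    distance between the two corners tested."""
    return 2.0 + 50e-6 * distance_m * 100.0   # ppm term converted from m to cm

# e.g. two corners 100 ft (30.48 m) apart:
print(f"{alta_allowance_cm(30.48):.2f} cm")
```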
As I understand the ALTA/ACSM spec, if you're using network RTK to make a survey, you just add a note explaining that you couldn't comply with the specified relative positional precision because ________.
NTRIP mountpoints
I don't know of any operators that do it, but you could have different mountpoints for different datums, i.e. one for 2011, one for 1992, and even one for NAD27, although that would be difficult for a large area due to distortions. At least it is theoretically possible; I don't know if the RTN software would allow that.
Another use would be to connect directly to a certain base rather than use the VRS concept of a virtual base.
I'm sure Gavin could add a lot more to that, if he hasn't gone and committed hara-kiri after his brush with Kent.
QC on Network RTK
Well, I really don't know why it should be like pulling teeth to have a discussion of survey accuracy. Some of us care about it and think it actually matters.