Two quick questions from a non-surveyor.
1. When you do a GNSS measurement halfway between two points that are 1,000 meters apart as a check, aren't you using a lower-order measurement to check a higher-order one? With a 0.007 meter error, 1000/0.007 = 1:142,000, while 500/0.007 = 1:71,000.
2. You can settle a disagreement between Kent and me. Is the 0.007 error absolute or is there variability in that number as well? Is there an absolutely guaranteed error of, say, 7 mm every time you set up the receiver or can that vary maybe from 0 mm to 50 mm? Or maybe some other range?
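To make question 1 concrete, here is a small sketch of the arithmetic behind those ratios, together with the usual constant-plus-ppm error model (the 0.007 m and 1 ppm figures are the ones from the question, not any particular spec sheet):

```python
# Precision-ratio and constant-plus-ppm arithmetic from the question.
# All numbers are illustrative, taken from the post above.

def precision_ratio(distance_m: float, error_m: float) -> float:
    """Return the denominator N of a 1:N precision ratio."""
    return distance_m / error_m

def gnss_error(distance_m: float, constant_m: float = 0.007, ppm: float = 1.0) -> float:
    """Constant-plus-ppm error model: error = a + b * d * 1e-6."""
    return constant_m + ppm * distance_m * 1e-6

print(round(precision_ratio(1000, 0.007)))  # 142857, i.e. about 1:142,000
print(round(precision_ratio(500, 0.007)))   # 71429, i.e. about 1:71,000
print(round(gnss_error(1000), 6))           # 0.008 m at 1 km under this model
```

Note the ppm term barely matters at these distances; the constant part dominates, which is why the ratio halves when the distance halves.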
Jt50 has a point: networking GPS data for control and filling in with robots has made LS mostly inconsequential, at least for me. Clearly many LS adjustments happen behind the curtain, and they are often incorporated into the workflow so it's seamless.
jt50 is doing what we have done for decades: running in GPS control for highways and filling in small traverses to pick up detail between major control. Generally, with modern equipment there are very tiny misclosures. Usually it's not worth any time spent adjusting out 0.01' on a temporary point. The last time I used StarNet was for an ALTA with 4 control points; I think I was adjusting out 0.03' for a robot traverse. Of course, levels get adjusted all the time as a push-this-button task.
LS is adjusting GPS control, even putting out large reports with graphs, closure data, etc. Those reports fill up pages and are generated within seconds of pushing the adjust button. After that, GPS and terrestrial observations are merged together without any special manipulation beyond correct settings in the DC and instrument. This has been the process for decades, just as jt50 reports; even my TOPCON 301 is set up to collect correctly.
I used to spend hours of time, usually during lunch in the field or down time at the office, adjusting long traverses or reducing solar observations. Today, I do neither.
Time spent adjusting is... pfffft.
Today I will be running out a mile of line up a mountain, in and out of the timber, and I don't expect to get a gun out. We already started the line in deep pine timber and the R10 handled it without even trying, no adjustments needed.
Cross checking and cross tying existing points has become so tight it raises the question of why adjust: is that 0.02' across the mile of section line important to adjust out? Is it even realistic?
I am not sure why everyone is upset about my opinion that LSA is obsolete. The purpose of LSA is to determine the most accurate position of a certain point from several observations. If you are able to determine a certain point's position using the best method available, then why not use it? GPS results offer that high-accuracy output, faster and more accurate than your traverse on any given day. Why say that LSA would give you better network precision when a network of GPS points already delivers the precision you are after from LSA?
Quality control, data validation and network analysis are not outdated now just because GNSS manufacturers claim high precision. Did we stop adjusting traverses by compass rule when sub-1-second total stations hit the market?
Least squares analysis is not simply about finding the most likely values of a point. Least squares analysis returns the precision of computed values in relation to other points in the local (and global) network, which is every bit as important as the location itself. Moreover, it gives us a procedure by which we can statistically prove that our methodology and data are sound and our results reproducible.
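To illustrate the point that LSA returns precisions as well as positions, here is a minimal 1D leveling sketch with invented numbers: a fixed benchmark A at 100.000 m, unknowns B and C, and three observed height differences. The single redundant observation is enough for least squares to produce both the most probable elevations and their standard deviations.

```python
from math import sqrt

# Invented example: fixed benchmark A = 100.000 m, unknowns B and C,
# observed differences A->B, B->C, A->C, all with equal weights.
A_elev = 100.000
dAB, dBC, dAC = 1.234, 2.345, 3.582   # observed differences, metres
# the loop misclosure is dAB + dBC - dAC = -0.003 m

# Normal equations N x = b for x = [B, C], built from design rows
# [1,0], [-1,1], [0,1], so N = [[2,-1],[-1,2]] and inv(N) = (1/3)[[2,1],[1,2]]
b1 = (A_elev + dAB) - dBC
b2 = dBC + (A_elev + dAC)
B = (2 * b1 + b2) / 3
C = (b1 + 2 * b2) / 3

# Residuals, a posteriori sigma of unit weight (1 degree of freedom),
# and the standard deviations of B and C from the covariance diagonal
v = [B - (A_elev + dAB), (C - B) - dBC, C - (A_elev + dAC)]
s0 = sqrt(sum(r * r for r in v) / 1)
sd_B = s0 * sqrt(2 / 3)   # cov = s0^2 * inv(N); diagonal terms are (2/3) s0^2
sd_C = s0 * sqrt(2 / 3)
print(round(B, 4), round(C, 4))      # most probable elevations
print(round(sd_B * 1000, 2), "mm")   # their precision
```

The adjusted values move each observation by about a millimetre, and the ~1.4 mm standard deviations are exactly the "precision in relation to other points" the paragraph above describes.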
If you are doing ALTA surveys, for instance, and not performing LSA on your data, you are not meeting the standards of care outlined in the 2016 Minimum Standard Detail Requirements. In other words, you are in violation of your contract.
Those standards require LSA because it fulfills the critical task of relating the various found monuments to each other in a quantifiable way. A single control point or boundary monument does not exist in a vacuum, just as no parcel stands apart from the adjoining parcels.
Sometimes the results of LSA may not change the pre-adjustment (a priori) values very much at all.
Does this mean LSA is worthless? Not at all.
Does it mean that you have to perform it for every dataset? Not necessarily.
But is it best practices? Yes. Until someone else comes up with a better way. And asserting that it is obsolete is ignorant in the extreme.
jt50. I warned you about what you were getting into... There are certain things one just doesn't say or admit to on this site.
Of all the surveyors I know, I have only talked to one who even bothers coming to this site. And, his advice to me was to be careful what you admit to because, in his words, "This is a forum where people who consider themselves elite, and far superior to Joe Schmoe types, live and stand at the ready to humiliate those they judge to not be up to their level".
My practice has taken me from the mountains of North Georgia as far south as Miami, and many burgs in between, as well as dipping into Alabama from time to time. I was first exposed to surveying in 1959 when I had my first college-level course in Elementary Surveying. I have been around, and I think I have a fair handle on how a large number of surveyors work in this area of the country.
As for Least Squares, of all the surveyors I personally know the only ones that use LS are the very few who accept ALTA work. For the last 20 years about the only traversing any of them would do could be called a "neighborhood" traverse. Once or twice a year a few of them may traverse around a section, where after balancing angles they will then adjust the half foot of error using compass rule.
Twenty or thirty years ago when I first started doing ALTA surveys, I was hoping to find someone to help me get up to speed with Least Squares. I ended up having to learn it on my own. Today, the only time I ever use LS is when I have an ALTA survey, and that's because its use is mandated. It's not that no one has an LS program; nearly all of them have SurvNet because they use Carlson. The vast majority of all the surveying in the areas I work is done with GNSS. And I'm talking about my own work as well as the competition's.
Sorry...not trying to rain on anyone's parade. Honest.
I think the 0.007 m + 1 ppm is a manufacturer's best-case scenario that is repeatable under average usage. In my experience, GPS gives the best results from longer and repeated observations. For example, the following have SDs that are better than the instrument's stated 0.007 m accuracy:
% elev mask : 15.0 deg
% dynamics : off
% tidecorr : off
% ionos opt : broadcast
% tropo opt : off
% ephemeris : precise
% amb res : continuous
% val thres : 3.0
% ref pos : 13.711204140 120.741076942 49.2510
%
% (lat/lon/height=WGS84/ellipsoidal,Q=1:fix,2:float,3:sbas,4:dgps,5:single,6:ppp,ns=# of satellites)
% GPST latitude(deg) longitude(deg) height(m) Q ns sdn(m) sde(m) sdu(m) sdne(m) sdeu(m) sdun(m) age(s) ratio
2019/09/02 01:23:29.000 13.714253401 120.744904757 52.7444 1 5 0.0004 0.0005 0.0011 0.0003 0.0005 0.0003 -0.00 3.0
Having standard deviations less than 1 mm is based on statistical results, with the vertical showing the larger deviations, as is also stated in the manufacturer's specifications. No surveying instrument could re-check it on the ground, and no surveyor would do the fieldwork to verify whether the deviations are really less than 0.007 m. But these given deviations are based on actual computations. Remember that your GPS receiver is taking readings every 1 second or shorter, depending on your logging settings. It's similar to your total station taking repeated distance and angle readings every 1 second for use in your LSA.
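Here is a small simulation of why averaging many 1 Hz epochs produces sub-millimetre formal SDs like the ones in the output above. The scenario is assumed, not from the post: 5 mm per-epoch noise logged at 1 Hz for 30 minutes. Real epochs are correlated, so this flatters the true uncertainty.

```python
import random
import statistics
from math import sqrt

# Assumed scenario: 1800 epochs (30 min at 1 Hz), 5 mm per-epoch noise,
# modeled as independent Gaussian scatter around the true position.
random.seed(0)
epochs = [random.gauss(0.0, 0.005) for _ in range(1800)]  # metres

sd_single = statistics.stdev(epochs)        # scatter of one epoch
sd_mean = sd_single / sqrt(len(epochs))     # formal SD of the average
print(f"single-epoch SD: {sd_single * 1000:.1f} mm")
print(f"SD of the mean:  {sd_mean * 1000:.2f} mm")
```

A ~5 mm per-epoch scatter collapses to roughly 0.1 mm for the mean, which is the kind of number the processing report prints — and also why such a number says nothing about centering, multipath, or other systematic errors.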
LSA does NOT give you the most accurate or best solution for any point.
It gives you the most probable one, and that is based on your input variables.
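For a single quantity, that "most probable value" is just the weighted mean with weights 1/sigma^2, so the answer depends entirely on the sigmas you assign. A minimal sketch with invented observations:

```python
# Invented example: three measurements of the same elevation with
# different assigned standard deviations. The LSA "most probable
# value" is the weighted mean, weights = 1 / sigma^2.
obs    = [100.012, 100.018, 100.015]   # metres
sigmas = [0.003,   0.010,   0.005]     # assigned standard deviations

weights = [1 / s**2 for s in sigmas]
mpv = sum(w * o for w, o in zip(weights, obs)) / sum(weights)
print(round(mpv, 4))  # 100.0131 -- pulled toward the tightest observation
```

Change the sigmas and the result moves, which is exactly the point: the output is only as "best" as the input variables.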
And that 0.007 mm + 1 ppm is actually ±(0.007 mm + 1 ppm), not 0.007 mm ± 1 ppm.
I just learned from this here thread that I are ELITE!
where do I pick up my badge?
Really. It is documented in Standards and Specifications for Geodetic Control Networks, published in 1984. It specifies the procedures to achieve various orders of accuracy when triangulating, traversing, and levelling. But it doesn't mention GPS, because GPS was barely a thing then. That specification has been replaced a couple of times over, most recently by the Federal Geographic Data Committee's Geospatial Data Accuracy Standards in various parts. Those standards, themselves 20 years old, dispense with references to orders of survey accuracy entirely. So, yes. That nomenclature is obsolete.
Furthermore, referring to accuracy in terms of loop closure is inappropriate when measuring anything other than a closed loop, and who does that with GPS?
Time spent adjusting is... pfffft.
But time spent trapping and correcting blunders is time well spent. The act of running the data through an LS adjustment shows you where the blunders probably are, thus facilitating their correction. By the time you have the blunders weeded out, you have an adjustment. Many times the data has no blunders. In that case I have spent next to no time and I have mathematical proof that my data is (probably) blunder free.
If you are spending a lot of time analysing data that is blunder free, you are doing it wrong. If you are accepting data as blunder free without analysis, you are letting a lot of bolluxed data out of your office.
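The blunder-trapping idea can be sketched in a few lines. With redundant observations of the same quantity and equal weights, the LS estimate is the mean, and a blundered observation stands out in the residuals (data and the 1.5-sigma threshold below are invented; production software uses more rigorous statistical tests):

```python
import statistics

# Invented data: five measurements of the same distance, the third
# carrying a ~0.05 m blunder. Redundancy lets the residuals point
# straight at the bad observation.
obs = [125.432, 125.429, 125.481, 125.430, 125.433]  # metres

mpv = statistics.mean(obs)              # LS estimate for equal weights
residuals = [o - mpv for o in obs]
s = statistics.stdev(obs)
flags = [abs(r) / s > 1.5 for r in residuals]  # crude outlier test
print(flags)  # [False, False, True, False, False]
```

With no blunders in the data, nothing gets flagged and the check costs essentially nothing — which is the "next to no time, mathematical proof" argument above.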
And that 0.007 mm + 1 ppm is actually ±(0.007 mm + 1 ppm), not 0.007 mm ± 1 ppm.
Substitute m for mm in the above.
In the time that we were arguing over better method, the accuracy has increased to 0.003 m + 0.5 ppm for static accuracy.
In the time that we were arguing over better method, the accuracy has increased to 0.003 m + 0.5 ppm for static accuracy.
That's a 1-sigma value. The 2-sigma equivalent (e.g. per ALTA specs) is about 0.007 m.
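For the curious, here is how that conversion works out, assuming the spec is a per-axis 1-sigma value and the horizontal error is circular: the 95% scale factor for a 2D position is the square root of the chi-square 95% value with 2 degrees of freedom, about 2.4477.

```python
# Assumed interpretation: 0.003 m is a per-axis 1-sigma spec and the
# horizontal error is circular, so the 95% (2D) factor is ~2.4477
# (sqrt of the chi-square 0.95 quantile with 2 degrees of freedom).
sigma_1d = 0.003
k95_2d = 2.4477
print(round(sigma_1d * k95_2d, 4))  # 0.0073 m, i.e. about 0.007 m
```

So a 0.003 m 1-sigma spec and a ~0.007 m 95% figure are two statements of the same accuracy.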
Well, if your story is correct I am glad to hear that the GRX1's specifications finally caught up to the industry's old norm.
I have some old Leica SR530 receivers. In 1999, Leica published the specifications on those GPS-only receivers to be:
Accuracy, baseline rms: Accuracy in position = baseline rms. Accuracy in height = 2 x accuracy in position
Baseline rms with post processing
Static, long lines, long observations, choke-ring antenna: 3mm + 0.5ppm
Static and rapid static with standard antenna: 5mm + 0.5ppm
Yes, I see the stats, but I'm not sure about their interpretation. And it's certainly not essential that I understand, but some other folks might have similar questions.
For example, consider the attached snippet from an NGS datasheet:
Note that the network standard deviations are similar to the ones on your report, but the network accuracy is significantly larger, so we see a difference between positional accuracy and standard deviations.
Going deeper into the network by clicking on the link in the data sheet gives us this for points in the network:
Again, positional accuracy and standard deviation differ by an order of magnitude.
So, what positional accuracy are you getting? Is it better than what NGS gets? How do you know?
I learned a lot from Larry as well. I have a copy of his class on SurvNet. It was good! I don't think anyone took up the reins of the book company after he got sick.
But unfortunately LS will not find a blunder in a GNSS position alone. You would need a traverse run through the point as well in order for LS to indicate you might have had a "bad fix". In the GNSS RTK era we need statistics on each individual occupation showing the likelihood of a "good fix" or not, before running through LS, if the LS report is to have any meaning. Unless of course we run a traverse through every point, which defeats the purpose in many RTK instances.
NY is proposing MTS that will require same reporting as ALTA for every survey. So LS not likely to go away soon here.
But unfortunately LS will not find a blunder in a GNSS position alone. You would need a traverse run through the point as well in order for LS to indicate you might have had a "bad fix".....
You need redundant measurement but it doesn't have to come by traversing. A 2nd RTK vector to the point taken with some time offset would be minimally sufficient.
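A minimal version of that two-occupation check might look like this (coordinates, sigma, and the 2 * sqrt(2) * sigma tolerance are all invented; the sqrt(2) reflects that the difference of two equally uncertain positions has a combined sigma of sqrt(2) times one occupation's sigma):

```python
from math import hypot, sqrt

def check_occupations(p1, p2, sigma=0.01, k=2.0):
    """Compare two RTK occupations of the same point.

    p1, p2: (northing, easting) in metres; sigma: assumed horizontal
    sigma of one occupation; k: rejection multiplier. Returns the
    averaged position, or None when the two fixes disagree.
    """
    d = hypot(p1[0] - p2[0], p1[1] - p2[1])
    tol = k * sqrt(2) * sigma       # scaled sigma of the difference
    if d > tol:
        return None                 # disagreement -> suspect a bad fix
    return ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)

good = check_occupations((5000.012, 2000.340), (5000.018, 2000.335))
bad = check_occupations((5000.012, 2000.340), (5000.095, 2000.335))
print(good)  # averaged position -- the fixes agree
print(bad)   # None -- 83 mm apart, flagged for investigation
```

The second occupation needs a genuine time offset (different satellite geometry) for this to be a meaningful check rather than the same fix twice.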
Always keep in mind that these specifications are for factory-fresh receivers working on the test bench under laboratory conditions. Real-world results vary, always for the worse. And they cover errors caused by the receivers only; they say nothing about centering errors, multipath, etc.