I think it is a given that he does not know how good his data is, considering his disdain for data quality control and analysis, and his complete misunderstanding of how least squares works and what it is used for.
From a previous post:
"I remember our LSA (least square adjustments) professor in college who said LSA is a form of legal data doctoring. It's like all your loops & spur lines don't agree with each other and you don't know which is correct so you use a difficult math process to justify your errors and select a final coordinates for points with errors."
If his professor did indeed say such a thing, the ignorance starts to make more sense. And I would question the professor's competence to instruct at an institution of higher learning...
But isn't that just indicating one of the two is a "bad fix" outside the LS software analysis? My understanding is that LS software would still not be able to tell you which is good and which is bad, and you already would know one is bad. In a noisy area the software could just as easily indicate the "bad fix" is better statistically when in reality it's 2 feet off.
In a noisy area the software could just as easily indicate the "bad fix" is better statistically when in reality it's 2 feet off.
That's where experienced judgment comes in. Knowing under what conditions *not* to use the tool is just as important as knowing how to use the tool. The terrestrial analogy is taking total station shots through window glass. Sure, you can do it, but what do you know about the resulting measurement?
The folks who say they use GNSS for all measurements are either working in areas without trees or buildings, or spend a lot of time taking redundant measurements, or don't know what they're doing. The first two groups include some informed and responsible surveyors; the last group, not so much.
Yes absolutely, and that's all I'm trying to point out. Because many folks think least squares is a check for blunders in GNSS RTK, when it isn't. What it will do is usually give a more conservative estimate of error (and the format ALTA requires for reporting error) than the RMS reported from the Kalman filter solution used in GNSS receivers. But you have to somehow prove a decent solution for each point first.
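For anyone curious what that 95% reporting looks like in practice, here's a minimal sketch with my own toy numbers (not from any particular adjustment package): it just scales a point's 1-sigma error ellipse up to the 95% confidence level that ALTA-style relative positional precision is quoted at.

```python
# Minimal sketch (toy numbers, not from any particular adjustment package):
# scaling a point's 1-sigma error ellipse from a least squares adjustment
# up to the 95% confidence level used for ALTA-style relative precision.
from math import log, sqrt

# 1-sigma semi-major / semi-minor axes of the error ellipse, in feet
# (hypothetical values standing in for one line of an adjustment report)
semi_major = 0.021
semi_minor = 0.015

# For a 2D normal error ellipse, P(inside the k-sigma ellipse) = 1 - exp(-k^2/2),
# so the 95% expansion factor is sqrt(-2 * ln(0.05)) ~= 2.45
k95 = sqrt(-2.0 * log(1.0 - 0.95))

print(f"95% ellipse: {k95 * semi_major:.3f} ft x {k95 * semi_minor:.3f} ft")
```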
Never said I didn't use it; I explained that it's doing it all the time in the regular workflow: GPS, networks, levels, etc. My point is that it isn't consuming much time these days.
A whole network gets adjusted with a button push and a printout on the computer. There isn't much to look at if all was done correctly in the field.
These days everything is so tight, but of course error trapping is done all the time; it's not an exercise exclusive to LS.
A tip of the hat to those that triple cross tie everything, building nets and processing through Least Squares, doggedly seeking to eliminate every last bit of doubt and slop, regardless of whether the survey is small or large. And I'm being dead serious. Assuming you always back up the talk with results, as far as I'm concerned you will have earned the "expert measurer" title. For sure, that is an important part of being a surveyor. I'm trusting, however, that the confidence and pride in those measurements don't contribute to the "my measurement is better than your measurement" attitude that oftentimes leads to pin cushions.
Anyways, long live LS, and long live the elites.
I've always found the math (not necessarily the computations) to be difficult. At the very least, the professor should explain himself. If he means that knowingly violating the underlying assumptions gives doctored data, then provisionally he's ok. But as a general statement, it's worth less than nothing.
Here's a short paper that gets to the heart of why someone should use LSA. Go straight to the first example on page 3 and look at the situation described. Different ways of measuring the three segments produce different results. So how do we determine the most probable value? We use a statistical process that's based on rigorous, but reasonable, assumptions. The results are, under those assumptions, the best values that the data will support.
https://www.ijsr.net/archive/v3i8/MDIwMTU0ODU=.pdf
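For the skimmers, here's a rough numeric sketch of the same idea as the paper's first example, using my own made-up distances (not the paper's numbers): three measurements along a line that don't quite agree, adjusted by ordinary least squares.

```python
# Rough sketch of the idea (made-up numbers, not the paper's): distances
# A--B, B--C, and the overall A--C are all measured, and they disagree by
# 0.04 ft.  Least squares distributes that misclosure.
import numpy as np

# Observations (hypothetical, in feet): AB, BC, AC
l = np.array([100.02, 200.03, 300.01])

# Unknowns: the adjusted AB and BC.  AC is modeled as AB + BC.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Equal-weight least squares solution of the overdetermined system
x, *_ = np.linalg.lstsq(A, l, rcond=None)
v = A @ x - l   # residuals

print("adjusted AB, BC:", x.round(4))
print("adjusted AC:   ", round(x.sum(), 4))
print("residuals:     ", v.round(4))
```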
Here's one from Trimble that's more in line with the current discussion. It's the NMEA-0183 error ellipse calculation output. That, I think, is what we need to consider when we're looking at the positional accuracy of a single point instead of standard deviations.
https://www.trimble.com/OEM_ReceiverHelp/V4.44/en/NMEA-0183messages_GST.html
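If anyone wants to poke at those GST fields, here's a quick sketch of pulling the error-ellipse values out of a sentence. The sample string and checksum are made up for illustration; the field layout follows the usual NMEA-0183 GST definition (1-sigma values in meters).

```python
# Quick sketch: extract the error-ellipse fields from a GST sentence.
# Sample sentence is made up for illustration (checksum is a placeholder);
# field order follows the standard NMEA-0183 GST layout.
def parse_gst(sentence: str) -> dict:
    body = sentence.split("*")[0]   # drop the checksum
    f = body.split(",")
    return {
        "utc": f[1],
        "rms_range_m": float(f[2]),            # RMS reported for the range inputs
        "ellipse_semi_major_m": float(f[3]),   # 1-sigma semi-major axis
        "ellipse_semi_minor_m": float(f[4]),   # 1-sigma semi-minor axis
        "ellipse_orientation_deg": float(f[5]),# orientation from true north
        "sigma_lat_m": float(f[6]),
        "sigma_lon_m": float(f[7]),
        "sigma_height_m": float(f[8]),
    }

sample = "$GPGST,172814.0,0.006,0.023,0.020,273.6,0.023,0.020,0.031*6A"
print(parse_gst(sample))
```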
I really don't know the answers to the questions I asked, which is why I asked them. Answers would be greatly appreciated.
that LS software would still not be able to tell you which is good and which is bad
True, but 999 times out of 1000 it is going to tell you that the multiple vectors substantially agree, and are therefore, presumably, correct.
I could be wrong, but my understanding is that's not true with RTK. Static on a known point is different. A receiver fixes in RTK by getting repeated vectors that are in agreement. In a "bad fix" the vectors repeat in the same manner as in a "good fix," and least squares cannot tell the difference because there is none. The covariance matrix looks the same in both situations. Hence all the collection features and procedures from different manufacturers trying to keep the "bad fix" from happening.
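To put some toy numbers on that point (purely hypothetical, not real receiver output): a fix carrying a constant bias repeats just as tightly as a good one, so its scatter, and therefore the covariance that least squares sees, looks identical.

```python
# Toy illustration (made-up numbers, not receiver data): a "bad fix" with a
# constant ambiguity-type bias repeats just as tightly as a good fix, so the
# standard deviations / covariance that least squares sees are the same.
import numpy as np

rng = np.random.default_rng(0)
truth = np.array([1000.000, 2000.000])          # true N, E (feet)
noise = rng.normal(scale=0.03, size=(60, 2))    # per-epoch scatter of the fix

good_fix = truth + noise                        # centered on the truth
bad_fix = truth + np.array([0.0, 2.0]) + noise  # biased 2 ft east, same scatter

for name, epochs in (("good fix", good_fix), ("bad fix", bad_fix)):
    print(name,
          "mean:", epochs.mean(axis=0).round(3),
          "std dev:", epochs.std(axis=0).round(3))

# Both print essentially the same standard deviations; only an independent
# check (second base, total station tie, re-observing later) exposes the bias.
```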
Interesting article by a Calgary professor that some of the elites might enjoy:)
I have tested this myself. Accepted what I was confident was a "bad fix" and ran it through least squares. Report said it passed just fine. Shot with total station and was over a foot off.
I could be wrong, but my understanding is that's not true with RTK.
These days when I'm using GPS I'm using RTK exclusively, except for when I'm resolving positions for my RTK base with OPUS. RTK, with the addition of GLONASS and the other constellations, has become very reliable even in the marginal conditions we have in the PNW. Some positions can be of lower quality, but outright "bad fixes" are very rare. Of much greater concern to me are errors from centering, measure-ups, point numbering, etc. These things can be fixed in LS.
A tip of the hat to those that triple cross tie everything...
Let's not get silly. There is a lot of running room between "triple cross tie everything" and having some redundancy in your measurements. And there is an appropriate space in every project to spend some time on establishing good control, then there is a space to get some production done based on that good control. Getting some redundancy into the appropriate places does not need to take a lot of time; neither does running an LS on such data - when it is blunder free.
Trapping blunders can sometimes suck up quite a bit of time, but the alternative is turning your back on F'd up data. "I don't do LS because it takes a lot of time" is equivalent to saying "I send out bolluxed data and I don't care", because the only way LS takes a lot of time is if your data is bolluxed.
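For what it's worth, here's a bare-bones sketch of the kind of blunder trap being talked about, with made-up repeated measurements of a single distance: the adjusted value is just the mean, and a crude standardized residual makes the bad observation stand out.

```python
# Bare-bones blunder trap (made-up numbers): nine measurements of one
# distance, one of them carrying a 0.50 ft blunder.  For a single repeatedly
# observed quantity the least squares estimate is the mean; dividing the
# residuals by their sample standard deviation (a rough stand-in for proper
# standardized residuals) flags the bad one.
import numpy as np

obs = np.array([250.012, 250.008, 250.015, 250.010, 250.006,
                250.013, 250.009, 250.011, 250.512])   # feet
adjusted = obs.mean()
residuals = obs - adjusted
std_res = residuals / residuals.std(ddof=1)

print(f"adjusted distance: {adjusted:.3f} ft")
for o, r, s in zip(obs, residuals, std_res):
    flag = "  <-- check this one" if abs(s) > 2.0 else ""
    print(f"{o:9.3f}  v = {r:+.3f}  std = {s:+.2f}{flag}")
```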