So the task for tomorrow is to mark certain lines and corners of a couple of tracts of land in downtown Austin. I had made a survey of the area sixteen years ago and had done what I thought was a thorough job of retrieving key boundary evidence. Since then, many of the more important survey markers have been lost to the general destruction of redevelopment. The early 20th-century sidewalks, with marks along offset lines that the City Engineer ran in the 1930s parallel with the established street centerlines, are mostly gone.
Fortunately, in 1999 I had the sense to set lots of my own survey markers on the theory that some would survive. Some did. I was surprised to see that the hard brass discs were not holding up as well under foot traffic as just the 3/8-inch steel spikes with stamped aluminum washers were. The spikes and washers looked good. They would look better if some parade of idiots hadn't given them blasts of fluorescent spray paint to make the stampings harder to read, but as long as fluorescent spray paint is sold without regard to common sense or intelligence, that will happen.
I found my old Spikes and Washers 6, 678, 679, and 680 in place, the first in a concrete curb and the rest in concrete sidewalk, probably soon to be replaced with something jazzier looking.
The task was to (a) check the old control points, (b) establish some new ones (250 and 251) to use for setting out boundary line markers and corner monuments, and (c) connect them into the existing network so that their positions could be determined in relation to all of the other boundary evidence found in place in 1999. There were several ways to do that, but the easiest was simply to add the new conventional measurements to the old network and rerun the network adjustment in Star*Net.
Since the 1999 work was adjusted in Star*Net and the data format remains compatible, all that was needed was to add the new measurements. As it turned out, the marks were more than reasonably stable horizontally. There had probably been some vertical movement over the last sixteen years, but not enough to be a problem.
After adding today's measurements to the network, there were some small shifts in the horizontal coordinates of the spike and washer control points as the following lists show:
[pre]
1999:
Point North East Elev Description
6 10071003.471 3112840.795 481.948 SPIKE.WASHER
678 10071211.433 3112636.092 476.166 SPIKE.WASHER
679 10071198.246 3112680.156 476.522 SPIKE.WASHER
680 10071157.831 3112815.143 479.352 SPIKE.WASHER
2015:
Point North East Elev Description
6 10071003.471 3112840.797 481.943 SPIKE.WASHER
678 10071211.432 3112636.093 476.136 SPIKE.WASHER
679 10071198.250 3112680.156 476.495 SPIKE.WASHER
680 10071157.829 3112815.142 479.363 SPIKE.WASHER
[/pre]
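If you'd rather tabulate those shifts than eyeball them, a quick Python sketch (just a rough illustration, using the coordinate values listed above in North, East, Elev order) does the arithmetic:
[pre]
# Rough sketch: tabulate the 1999 -> 2015 shifts for the spike-and-washer points.
# Coordinates are North, East, Elev in US survey feet, copied from the lists above.
coords_1999 = {
    "6":   (10071003.471, 3112840.795, 481.948),
    "678": (10071211.433, 3112636.092, 476.166),
    "679": (10071198.246, 3112680.156, 476.522),
    "680": (10071157.831, 3112815.143, 479.352),
}
coords_2015 = {
    "6":   (10071003.471, 3112840.797, 481.943),
    "678": (10071211.432, 3112636.093, 476.136),
    "679": (10071198.250, 3112680.156, 476.495),
    "680": (10071157.829, 3112815.142, 479.363),
}

print(f"{'Pt':>4} {'dN':>7} {'dE':>7} {'Horiz':>7} {'dElev':>7}")
for pt, (n0, e0, z0) in coords_1999.items():
    n1, e1, z1 = coords_2015[pt]
    dn, de, dz = n1 - n0, e1 - e0, z1 - z0
    horiz = (dn ** 2 + de ** 2) ** 0.5
    print(f"{pt:>4} {dn:7.3f} {de:7.3f} {horiz:7.3f} {dz:7.3f}")
[/pre]
The horizontal shifts work out to a few thousandths of a foot, while the elevation differences run as large as 0.03 ft, which is consistent with the marks being stable horizontally but having moved a little vertically.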
The residuals of the new measurements showed that adding them to the network didn't turn up any blunders. That is, you couldn't tell just from the residuals that today's work hadn't been done sixteen years ago and adjusted with everything else in 1999.
[pre]
Adjusted Measured Geodetic Angle Observations (DMS)
From At To Angle Residual StdErr StdRes
680 679 678 179-59-07.85 -0-00-04.10 8.76 0.5
680 679 678 179-59-07.85 -0-00-04.65 8.76 0.5
680 679 250 89-37-13.01 0-00-01.51 4.14 0.4
680 679 250 89-37-13.01 0-00-01.51 4.14 0.4
680 678 251 90-15-29.44 0-00-00.94 4.06 0.2
680 678 251 90-15-29.44 -0-00-00.81 4.06 0.2
679 250 251 288-34-37.47 0-00-05.22 7.60 0.7
679 250 6 92-30-21.69 0-00-02.44 3.74 0.7
679 680 6 243-53-35.91 0-00-03.66 4.03 0.9
679 680 6 243-53-35.91 0-00-01.41 4.03 0.3
Adjusted Measured Distance Observations (FeetUS)
From To Distance Residual StdErr StdRes
679 680 140.9316 0.0004 0.0065 0.1
679 678 46.0080 0.0010 0.0063 0.2
679 680 140.9316 -0.0007 0.0065 0.1
679 678 46.0080 0.0005 0.0063 0.1
679 680 140.9316 -0.0014 0.0065 0.2
679 250 132.7859 -0.0009 0.0065 0.1
679 680 140.9316 0.0003 0.0065 0.0
679 250 132.7859 -0.0008 0.0065 0.1
678 680 186.9278 0.0026 0.0066 0.4
678 251 117.2215 0.0060 0.0065 0.9
678 680 186.9278 0.0018 0.0066 0.3
678 251 117.2215 0.0102 0.0065 1.6
250 679 132.7740 -0.0023 0.0065 0.4
250 251 49.9184 0.0042 0.0063 0.7
250 679 132.7740 -0.0048 0.0065 0.7
250 6 209.1520 0.0055 0.0066 0.8
680 679 140.9614 0.0001 0.0065 0.0
680 6 156.5102 -0.0041 0.0065 0.6
680 679 140.9614 -0.0028 0.0065 0.4
680 6 156.5102 -0.0041 0.0065 0.6
Adjusted Zenith Observations (DMS)
From To Zenith Residual StdErr StdRes
679 680 89-07-34.00 0-00-11.60 15.94 0.7
679 678 91-20-12.54 -0-00-17.96 39.32 0.5
679 680 89-07-34.00 0-00-14.25 15.94 0.9
679 678 91-20-12.54 -0-00-18.96 39.32 0.5
679 680 89-07-34.00 0-00-12.00 15.94 0.8
679 250 90-58-58.90 0-00-19.15 16.54 1.2
679 680 89-07-34.00 0-00-16.75 15.94 1.1
679 250 90-58-58.90 0-00-17.65 16.54 1.1
678 680 89-13-19.99 0-00-19.49 13.70 1.4
678 251 91-39-34.98 0-00-02.48 17.97 0.1
678 680 89-13-19.99 0-00-22.74 13.70 1.7
678 251 91-39-34.98 0-00-01.73 17.97 0.1
250 679 89-23-10.28 0-00-19.03 16.55 1.2
250 251 91-53-20.40 -0-00-12.10 36.43 0.3
250 679 89-23-10.28 0-00-18.78 16.55 1.1
250 6 88-07-08.53 0-00-05.28 13.15 0.4
680 679 91-28-03.36 0-00-07.86 15.94 0.5
680 6 88-57-46.42 -0-00-10.33 15.00 0.7
680 679 91-28-03.36 0-00-08.86 15.94 0.6
680 6 88-57-46.42 -0-00-09.83 15.00 0.7
[/pre]
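One nice thing about having the adjustment listing in plain text is that it's easy to scan for anything suspicious. As a rough sketch (the filename is just a placeholder for wherever the listing above gets saved), this flags any observation whose standardized residual runs over 2.0:
[pre]
# Rough sketch: scan a plain-text adjustment listing for observations whose
# standardized residual (the last column on each observation line) is large.
THRESHOLD = 2.0  # rule of thumb: anything much over 2 deserves a second look

with open("listing.txt") as f:  # placeholder filename
    for line in f:
        fields = line.split()
        if len(fields) < 5:
            continue
        try:
            std_res = abs(float(fields[-1]))
        except ValueError:
            continue  # header or title line, not an observation
        if std_res > THRESHOLD:
            print("CHECK:", line.rstrip())
[/pre]
Run against the listing above, nothing gets flagged; the largest standardized residual is the 1.7 on one of the zenith angles.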
I probably should have underscored the point that the value of having all of the surveys I've made in that vicinity over the years in one network adjustment is that it makes it possible to give reliable estimates of the relative uncertainties between any two things located by those surveys, including in particular the phantoms: the survey monuments that no longer exist, but that are perpetuated by the ties to them in the network adjustment.
The other thing that I'd underscore is the power of using high-quality third-party software such as Star*Net to keep the network adjustment viable. Sadly, there are not that many survey application programs that have kept backwards compatibility the way that Star*Net has. Happily, Star*Net has done it, which means that the input data format to a project adjustment run sixteen years ago remains viable and pretty much ready to run with more stuff added.
> The other thing that I'd underscore is the power of using high-quality third-party software such as Star*Net to keep the network adjustment viable. Sadly, there are not that many survey application programs that have kept backwards compatibility the way that Star*Net has. Happily, Star*Net has done it, which means that the input data format to a project adjustment run sixteen years ago remains viable and pretty much ready to run with more stuff added.
Amen.
> I probably should have underscored the point that the value of having all of the surveys I've made in that vicinity over the years in one network adjustment is that it makes it possible to give reliable estimates of the relative uncertainties between any two things located by those surveys, including in particular the phantoms: the survey monuments that no longer exist, but that are perpetuated by the ties to them in the network adjustment.
>
So, are you saying that if one is going back to a project to take additional measurements, partially because some of the earlier measurements were not as good as they could have been (in my case, not yours), it's better to add the previous data to the "soup" rather than fishing out the bad onions?
If the earlier data doesn't contain blunders but just high residuals, is there a cutoff point as to what to keep and what to toss?
"...probably soon to be replaced with something jazzier looking."
The Great Streets program is absolutely eviscerating control downtown. Unfortunately, there is a lack of some combination of concern, power, and/or time at the city to do anything about it. I've begun submitting reports to the GIS department every time I find a missing city control monument, if for no other reason than to hopefully eliminate the goose chases that often result from dialing up a data sheet from the website.
>I was surprised to see that the hard brass discs were not holding up as well under foot traffic as just the 3/8-inch steel spikes with stamped aluminum washers were.
>
off topic 0.04' 🙂
... were those brass discs set with the tops below the surface of the sidewalk?
(shovels eat things)
> So, are you saying that... it's better to add the previous data to the "soup" rather than fishing out the bad onions?
I think that the point is to have a tool for identifying and evaluating the "bad onions" in the first place. Then to make judgments about which of them to keep.
> ... were those brass discs set with the tops below the surface of the sidewalk?
No, they were set flush. It's apparently shoe leather that is wearing them down. I'll have a look again and take photos.
I've noticed a few "newer" brass caps over the years that have worn significantly by foot and vehicle traffic. If my memory serves me correctly, most of the monument manufacturers quit using lead in their red brass (85-5-5) in the late 1980s because of lead's "anti-environmental" nature.
God knows what they use nowadays. I think some use silicon instead of lead in their recipes. I bet none of the manufacturers have ever performed any "wear" tests on their products.
> So, are you saying that if one is going back to a project to take additional measurements, partially because some of the earlier measurements were not as good as they could have been (in my case, not yours), it's better to add the previous data to the "soup" rather than fishing out the bad onions?
>
> If the earlier data doesn't contain blunders but just high residuals, is there a cutoff point as to what to keep and what to toss?
Neither is the case here. This is an area where the challenge is to maintain survey control in known relationships to the now-vanished marks with uncertainties that are as small as possible. The work described yesterday is just what was necessary to integrate two new control points into the existing network.
That integration had two elements: (a) verifying the integrity of the existing marks upon which the positions of the new marks were based, and (b) positioning the new marks by methods that keep the relative positional uncertainties acceptably low.
A more careful approach would have been to adjust yesterday's work as a separate, minimally constrained adjustment to verify that it passed the chi-square test and was free of blunders. Only after comparing the coordinates of the common network stations to verify that none showed obvious mark movement would a careful person have dropped the new work into the overall network adjustment.
I did something similar by just comparing the coordinates generated by the data collector to prior values.
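For anyone who hasn't run the numbers by hand, the global chi-square test just checks that the sum of the squared standardized residuals is consistent with the network's degrees of freedom. Here is a rough sketch of the textbook test, not anything out of Star*Net, with made-up numbers at the bottom:
[pre]
# Rough sketch of the global chi-square test on an adjustment (two-tailed, 5% level).
# chi2_stat = sum of (residual / a-priori std error)^2 over all observations;
# dof = number of observations minus number of unknowns.
from scipy.stats import chi2

def chi_square_test(residuals, std_errors, dof, alpha=0.05):
    chi2_stat = sum((v / s) ** 2 for v, s in zip(residuals, std_errors))
    lower = chi2.ppf(alpha / 2, dof)
    upper = chi2.ppf(1 - alpha / 2, dof)
    return chi2_stat, lower, upper, lower <= chi2_stat <= upper

# Made-up example: 20 degrees of freedom, residuals of 0.005 ft against
# a-priori standard errors of 0.0065 ft.
stat, lo, hi, passed = chi_square_test([0.005] * 20, [0.0065] * 20, dof=20)
print(f"chi-square = {stat:.2f}, bounds = ({lo:.2f}, {hi:.2f}), pass = {passed}")
[/pre]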
> God knows what they use nowadays. I think some use silicon instead of lead in their recipes. I bet none of the manufacturers have ever performed any "wear" tests on their products.
It's pretty much an argument for bronze or maybe even cast iron. I decided to set brass discs flush in the sidewalk so that they weren't a hazard to pedestrians, but didn't realize how heavy the foot traffic would become. Even the old chiseled marks in sidewalks are wearing away.
> A more careful approach would have been to adjust yesterday's work as a separate, minimally constrained adjustment to verify that it passed the chi-square test and was free of blunders. Only after comparing the coordinates of the common network stations to verify that none showed obvious mark movement would a careful person have dropped the new work into the overall network adjustment.
>
> I did something similar by just comparing the coordinates generated by the data collector to prior values.
This is similar to what I do with an ongoing network in my city's downtown area. I started it over 20 years ago and add to it regularly; it now encompasses over 50 city blocks. When I perform work in an area, I append the new data to the input file -- identifying it by job number and date -- and run the adjustment. Then I look at the old and new coordinates in the area of the new work to be sure nothing has changed significantly. If I only see changes on the order of 0.02' or less, I accept the new adjustment and move on. So far that's been the case.
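That before-and-after comparison is simple enough to script once the old and new coordinates have been exported. A rough sketch (the point names and coordinates are made up; the tolerance is the 0.02 ft figure mentioned above):
[pre]
# Rough sketch: flag any station whose horizontal position shifted more than a
# tolerance between the previous adjustment and the new one.
TOL = 0.02  # US survey feet

old = {"101": (5000.000, 5000.000), "102": (5123.456, 4987.654)}  # name: (N, E)
new = {"101": (5000.004, 4999.998), "102": (5123.431, 4987.660)}

for pt, (n0, e0) in old.items():
    if pt not in new:
        continue
    n1, e1 = new[pt]
    shift = ((n1 - n0) ** 2 + (e1 - e0) ** 2) ** 0.5
    print(f"{pt}: shift = {shift:.3f} ft {'OK' if shift <= TOL else 'CHECK'}")
[/pre]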
One mistake I made in the early years was to start it as a 2D adjustment. Back then I would run levels if I needed vertical, and I didn't think I'd ever need 3D in the adjustment, so I didn't bother. I've been slowly converting the whole thing over to 3D, but it takes a lot of effort to dig through old field notes and raw data files to upgrade the network to reliable 3D. I've also been adding some RTK vectors here and there, but so much of the area is obstructed by buildings and trees that I haven't put a lot of time into that yet.
> I've also been adding some RTK vectors here and there, but so much of the area is obstructed by buildings and trees that I haven't put a lot of time into that yet.
When I got survey-grade GPS back in the '90s, one of the first things I did was to tie various points in the downtown network by GPS vectors to check the orientation of the network, which had previously relied upon near-geodetic azimuths from solar observations reduced to grid azimuths. I don't envision ever needing to improve the network orientation now, but I would rely upon GPS to control any new extension of the network.
> I think that the point is to have a tool for identifying and evaluating the "bad onions" in the first place. Then to make judgments about which of them to keep.
Well, that's exactly the question: what goes into the judgement? Is it better to have, say, a dozen redundant measurements, half of which have residuals roughly twice those of the other half, or to just dump the ones with the highest residuals? Is there a mathematical way to determine where to draw the line? I may not be asking the question the right way. What constitutes a "bad onion"? Is it relative to the other "onions"? Or is it just "anything higher than xx" in angle, or 0.00X' in distance?
> Well, that's exactly the question: what goes into the judgement? Is it better to have, say, a dozen redundant measurements, half of which have residuals roughly twice those of the other half, or to just dump the ones with the highest residuals?
Where a network is built over time, one will usually be dealing with measurements of differing qualities. As long as each installment is free of blunders and is properly weighted, a surveyor would want it all in the adjustment.
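To put a number on why the noisier-but-honest measurements are still worth keeping, here's a toy illustration with made-up values: combining two independent measurements of the same distance with inverse-variance weights gives a result with less uncertainty than either measurement alone.
[pre]
# Toy illustration: inverse-variance (least squares) combination of two
# independent measurements of the same quantity. The combined standard error
# is smaller than either one, which is why blunder-free, properly weighted
# data is worth keeping even when it is noisier.
import math

def weighted_mean(values, sigmas):
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return mean, math.sqrt(1.0 / sum(weights))

# A good distance (0.005 ft std error) and a mediocre one (0.010 ft), made up:
mean, sigma = weighted_mean([132.784, 132.790], [0.005, 0.010])
print(f"combined: {mean:.4f} ft +/- {sigma:.4f} ft")  # about +/- 0.0045 ft
[/pre]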
> Is there a mathematical way to determine where to draw the line? I may not be asking the question the right way. What constitutes a "bad onion"?
As a rule, any measurement with a standardized residual (residual / standard error of the measurement) over 2.0 is suspect. A measurement with a standardized residual over 3.0 is presumed to be a blunder. Obviously, this test requires realistic values of the standard errors to work well.
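In code form, the rule of thumb is about as simple as it sounds (a rough sketch, not anything out of Star*Net):
[pre]
# Rough sketch of the standardized-residual rule of thumb.
def classify(residual, std_error):
    std_res = abs(residual / std_error)
    if std_res > 3.0:
        return "presumed blunder"
    if std_res > 2.0:
        return "suspect"
    return "acceptable"

# The 678-251 distance from the listing above: residual 0.0102 ft, std error 0.0065 ft.
print(classify(0.0102, 0.0065))  # standardized residual of about 1.6 -> "acceptable"
[/pre]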
Over time, movements in marks supposed to be stable can introduce what appear to be errors in measurements. So it's important to devise ways to detect mark movement.
Star*Net is some good stuff.
It handles Trimble conventional data better than TBC does, unless the data is very simple. I generally don't have azimuth pairs; I'm usually lucky to find a hole in the forest canopy good enough for one point. I'm trying to make TBC work... so far it can't converge on one big project that Star*Net solves on the fourth iteration. I think the problem is that TBC tries to precalculate the data, which looks very wrong, so it can't converge.
Performing an LSA in TBC leaves a lot to be desired. Definitely not user friendly or flexible.
Another advantage to StarNet is the ability to jump multiple versions and still be immediately productive. It is still the awesome Ron Sawyer product. We went from Version 5 to 8 and saved time over fighting Magnet. That includes installation, registration and picking the brains of fellow beer leggers to get the first job cleaned up. Try that with any other survey related software product.
Switching gears: I have long advocated the development of an observation database in a consistent format, as opposed to any form of coordinate files. Your point can't be stated strongly enough. A StarNet DAT file can be selected for inclusion in my new project without reinventing the wheel. The 'sorting of the onions' has already been done and the weighting determined. If you did it right the first time, it will integrate effortlessly now.
> Performing an LSA in TBC leaves a lot to be desired. Definitely not user friendly or flexible.
The same can be said of LGO.
> One mistake I made in the early years was to start it as a 2D adjustment. Back then I would run levels if I needed vertical, and I didn't think I'd ever need 3D in the adjustment, so I didn't bother. I've been slowly converting the whole thing over to 3D, but it takes a lot of effort to dig through old field notes and raw data files to upgrade the network to reliable 3D. I've also been adding some RTK vectors here and there, but so much of the area is obstructed by buildings and trees that I haven't put a lot of time into that yet.
Why is 3D more reliable? Does the third dimension add to the number of measurements that are independently "least squared"?
Also, if using a total station for measurement, would it just be a matter of adding the vertical distance delta from one station to another to get that information into Starnet?