@bushaxe
Well in that case I owe you a great big thank-you, because when I switched from just being the guy on the ground executing the observation plan to actually planning the project myself, I relied most heavily on that manual.
I also have Precision Surveying by Ogundare, Geodetic Network Analysis by Kuang, and of course Ghilani's Adjustment Computations for reference, but the USACE manual was far and away the best resource.
On the first project that I ran myself, using the techniques outlined in there helped us detect a tiny amount of sloughing in a roadway behind a soldier pile wall shortly before the edge collapsed. It turned out to have minimal safety implications, but we were continually asked whether our data was correct, and because we did our due diligence, it was.
On that one, the client also contested our fees for monitoring (well over six figures of T&M in addition to our scoped construction work), but since we had backed up our procedures with a detailed plan (referencing the manual) that clearly stated what it would take to achieve their goals, as well as how much time it would take, we took the wind out of their sails almost immediately and got paid in full.
Not only does shoddy work incur a great deal of liability, but it also leaves money on the table and incorrectly convinces clients that monitoring is a trivial task...
I don't have a problem with bucking in for certain applications, and it can be a great tool like you mentioned upthread. It just didn't seem right for monitoring, though, and it was really throwing me when my network adjustments returned incredibly strange results in the statistics. It's a good tool to have in the toolbox but I just can't work with the data in the way that the monitoring analysis requires.
Yeah, they have additional problems with how they are sighting (or rather, not sighting) the monitoring points themselves. I can't get a straight answer from anyone about exactly what they are doing, but an office tech mentioned that they are just staking out without sighting the points (turning to record and hitting measure), and the raw data and their raw checks indicate way, way better results than I have ever seen before. Something's fishy, but again I'm backing off. I've made my concerns known in emails saved and backed up on my personal PC...
Would love to see that "bucking in" stuff input to least squares. They are monitoring "something" but it sounds like a bunch of one-third-giraffe scale monkey business.
I've yet to do a monitoring project but it looks to me like there would be zero difference between sliding the instrument around to fit the reference points and not having any reference points at all.
Would love to see that "bucking in" stuff input to least squares. They are monitoring "something" but it sounds like a bunch of one-third-giraffe scale monkey business.
I was getting reference factors of 0.1 to 0.2 when using standard errors that I know for a fact are correct for our instrumentation.
For comparison, a tight network run by a crack crew with recently-adjusted instrumentation typically runs in the 0.75-0.90 region. Usually right on the edge of failing the chi-squared test. Our less-precise work might be in the 1.2-1.5 range.
Neither would raise red flags, but 0.2? I don't buy it.
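For anyone who hasn't run these numbers, here's a minimal sketch of what a reference factor of ~0.2 implies. The residuals and a priori standard errors below are invented for illustration; the point is that a reference factor well under 1 means the residuals are far smaller than the stated instrument precision should allow, which is exactly what makes a 0.2 suspicious:

```python
import math

# Hypothetical observation residuals (ft) and their a priori standard
# errors (ft). These values are made up to produce a suspiciously low
# reference factor like the one described above.
residuals = [0.0007, -0.0005, 0.0008, -0.0006, 0.0004, -0.0007]
sigmas    = [0.005,   0.005,  0.005,   0.005,  0.005,   0.005]
dof = 3  # redundancy: observations minus unknowns (assumed here)

# Weighted sum of squared residuals, v'Wv, with W = diag(1/sigma^2)
vtwv = sum((v / s) ** 2 for v, s in zip(residuals, sigmas))

# Reference factor (a posteriori standard error of unit weight)
s0 = math.sqrt(vtwv / dof)
print(f"reference factor S0 = {s0:.2f}")  # ~0.18 for these values
```

An S0 near 1.0 says the a priori sigmas match what the data actually shows; 0.18 says the residuals are roughly five times tighter than the instrumentation can honestly deliver, so either the stated sigmas are wildly pessimistic or the "measurements" aren't independent observations at all.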
It just smells. And it's really hard to explain that to folks who have no idea what I am talking about.
The part about "staking to" the monitoring points without recording anything, combined with sliding the instrument around until it shows zero residuals from the record control, sounds like a method to report minimal movement every time, rather than actually monitor anything.
The procedure described in the original post sounds like a lot of work that fails to achieve the desired goal. I'd hate to have to defend it in court.
I have a small monitoring project that's going on 4 years now. Nothing real critical -- it's a storm drain pump station -- so I'm happy with 2-sigma vertical errors less than 0.02' (they mostly run between 0.005' and 0.015'). Monitoring frequency varies; sometimes my client wants me out there every month, sometimes every 3 months.
Everything is shot from one setup, but there was no convenient way to set a stable point there, so I treat it as a new point every monitoring event, checking to 3 offsite stable points at the start and end of each event. Residuals to the stable points run between zero and 0.006'.
I tape the HI of the setup each time just to get in the ballpark, but the coordinates for the setup point stay remarkably consistent despite its unreliability (it's a cup tack in a pavement headerboard) and the none-too-rigorous nature of the HI measurement. The point coordinates have changed horizontally over the years by less than 0.02'; the vertical is sloppier as might be expected, showing as much as 0.03' difference. It's hard to say how much of that is point stability, and how much is setup error, but it doesn't affect the monitoring either way.
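The start-and-end check against offsite stable points can be reduced to a simple displacement test each epoch. A minimal sketch, with made-up point names, coordinates, and a tolerance based on the ~0.006' residual range mentioned above:

```python
import math

# Hypothetical baseline vs. current-epoch coordinates (ft) for three
# offsite stable points; all names and values are invented for illustration.
baseline = {"STB1": (1000.000, 2000.000),
            "STB2": (1100.000, 2050.000),
            "STB3": (1050.000, 1900.000)}
current  = {"STB1": (1000.002, 1999.997),
            "STB2": (1100.001, 2050.004),
            "STB3": (1049.996, 1900.002)}

TOLERANCE = 0.006  # ft, assumed from the residual range noted above

results = {}
for name, (x0, y0) in baseline.items():
    x1, y1 = current[name]
    d = math.hypot(x1 - x0, y1 - y0)  # horizontal displacement this epoch
    results[name] = d
    status = "OK" if d <= TOLERANCE else "CHECK -- possible movement"
    print(f"{name}: moved {d:.3f} ft  {status}")
```

If any stable point fails the check, the whole epoch's monitoring shots are suspect, since the setup itself can't be trusted; that's the safeguard that "sliding the instrument to zero residuals" throws away.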
I agree that the monitoring procedure is not ideal or what I'd do, but that's the way it was set up. Changing the procedure midstream is not necessarily a better option.
I would have done as asked and simply written in my journal that I did not agree with the process but felt that reinventing the procedure would cause greater harm.?ÿ If you're seriously concerned about professional liability, take a picture of your written objection next to a newspaper with a prominent date and there will be little chance of you getting popped in court for gross negligence.
I agree that the monitoring procedure is not ideal or what I'd do, but that's the way it was set up. Changing the procedure midstream is not necessarily a better option.
I would have done as asked and simply written in my journal that I did not agree with the process but felt that reinventing the procedure would cause greater harm. If you're seriously concerned about professional liability, take a picture of your written objection next to a newspaper with a prominent date and there will be little chance of you getting popped in court for gross negligence.
A sad reflection on the current legal atmosphere--almost sounds like you are a hostage.
I agree that the monitoring procedure is not ideal or what I'd do, but that's the way it was set up. Changing the procedure midstream is not necessarily a better option.
I should have made it clear in my earlier posts. This was brought to me literally the day after they started their "baseline" (using that term very loosely) "measurements" (also using that term loosely).
I would have done as asked and simply written in my journal that I did not agree with the process but felt that reinventing the procedure would cause greater harm.
I'm also not the crew chief on this - I'm a licensee who was asked to retroactively design processing and analysis procedures to fit their dubious measurement scheme, because the PM and associated leadership failed to do their jobs in the first place.
Reinventing the procedure could have been done. There was ample time to do it right, which is why I brought it up in the first place. And even if they were already well into the campaign, it would be one day's work to come up with a measurement scheme, properly observe the network and lock down both reference and monitoring points for future analysis.
Starting from scratch and doing it right would still be better from a public safety, liability, and best practices standpoint. I doubt that anything will go drastically wrong, but the small chance that it might is why we are getting paid good money to develop rock-solid procedures and take on a lot of liability.
@jaccen
Well, I'm not likely to stick around after this incident, that's for sure. We blow a lot of money on lip service to our "health & safety culture" so we can put the sales pitch to our clients, but when it comes to actual public safety suddenly there's no time or money to do it right.
This was brought to me literally the day after they started their "baseline measurements".
I worked for several years for one of the largest Portland area outfits, which had this management style:
1. Send crew to job with minimal instructions to set up program.
2. Allow them to flounder around for several weeks until the budget is used up, or nearly so.
3. Assign job to office LS to prepare map/report and sign off.
4. Criticize LS product for not following policy.
5. Invent policy du jour.
6. Bitch to LS about missed budget.
I've worked in several offices since then and find that this is fairly common. Even guys who railed against that management style end up doing the same thing once they go into business for themselves.