
Accuracy and Precision Opinions Requested (Especially Indiana)

24 Posts
10 Users
0 Reactions
76 Views
(@scott-in-indianapolis)
Posts: 223
Member
Topic starter
 

In Indiana, the code regarding precision and accuracy prescribes a classification for each type (urban, suburban, or rural) of boundary survey and the acceptable "relative positional accuracy" for each type. For example, an urban survey has a relative positional accuracy of 0.07 feet plus 50 ppm.
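As a quick illustration of how that tolerance scales with distance (my own arithmetic and naming here, not language from the code):

```python
def allowable_rpa_ft(distance_ft, constant_ft=0.07, ppm=50.0):
    """Allowable relative positional accuracy for a line of the given length,
    e.g. the urban class of 0.07 ft plus 50 ppm of the distance."""
    return constant_ft + distance_ft * ppm / 1_000_000

# Example: two corners a quarter mile (1,320 ft) apart
print(f"{allowable_rpa_ft(1320.0):.3f} ft")  # 0.070 + 0.066 = 0.136 ft
```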

In Indiana, relative positional accuracy is defined as the value expressed in feet or meters that represents the uncertainty due to random errors in measurements in the location of any point on a survey relative to any other point on the same survey at the ninety-five percent (95%) confidence level. The code states that relative positional accuracy may be tested by: (1) comparing the relative location of points in a survey as measured by an independent survey of higher accuracy; or (2) the results of a minimally constrained, correctly weighted least squares adjustment of the survey. I find it interesting that the code doesn't require testing. It just states how it can be tested.

I realize that standards will vary state to state, but the concept will be similar.

In practice, the norm tends to be to GPS the section(s), subdivision control, and/or site control. Then use the total station to measure up the site and set corners.

So here is the question: What are people really doing to justify and/or adjust precision and accuracy? Any Indiana people?

-Scott

 
Posted : March 22, 2017 5:33 pm
(@scott-in-indianapolis)
Posts: 223
Member
Topic starter
 

No bites? I know, this is a boring subject. I'll try one last time.

The focus of my question is this:
1. If we don't run closed traverses, or traverses that run from/to known points, to what do we constrain networks?
2. If we are using real time GPS solutions (RTK, VRS) and not base or OPUS methods, to what do we constrain networks?
3. For 1 & 2 above, how do we justify, analyze, prove compliance with codes dictated for professional practice (see description of Indiana code in original post)?

 
Posted : March 23, 2017 4:13 am
(@paul-in-pa)
Posts: 6044
Member
 

Static GPS gives you 0.02 m = 0.07' to start with. You need static OPUS and a static GPS network to be able to prove your field traverse is better than spec, so that your side shots to survey markers meet spec.

To prove it yourself, take several projects and, after you have completed your work and analysis, run a network between your constraint points.

What, you do not want to do a project twice? Sorry, that is how you "prove" your precision.
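As a rough sketch of that comparison (hypothetical point names and coordinates; the check here compares inversed distances only, which is a simplification of "relative location"):

```python
import math
from itertools import combinations

# Hypothetical coordinates (northing, easting in feet): the original project
# values vs. an independent network run afterward over the same constraint points.
original = {"CP1": (5000.000, 5000.000), "CP2": (5487.321, 6210.455), "CP3": (4390.112, 6024.890)}
recheck  = {"CP1": (5000.006, 4999.996), "CP2": (5487.314, 6210.462), "CP3": (4390.104, 6024.901)}

def inverse(points, a, b):
    """Horizontal distance between two named points."""
    return math.hypot(points[b][0] - points[a][0], points[b][1] - points[a][1])

for a, b in combinations(original, 2):
    diff = abs(inverse(original, a, b) - inverse(recheck, a, b))
    allowable = 0.07 + inverse(original, a, b) * 50 / 1_000_000
    print(f"{a}-{b}: differs by {diff:.3f} ft (allowable {allowable:.3f} ft)")
```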

Paul in PA

 
Posted : March 23, 2017 4:30 am
half-bubble
(@half-bubble)
Posts: 944
Supporter
 

It's not so boring, it's the elephant in the room.

### 1. If we don't run closed traverses, or traverses that run from/to known points, to what do we constrain networks?

Winds up being a little iffy ... it's possible to constrain a network to itself if there is enough redundancy. That is, nothing is held fixed, everything has an error ellipse, and the RPA (which ALTA now calls "relative positional precision" rather than "relative positional accuracy") can be calculated between any two points, even in a TenThousandLand coordinate system.

### 2. If we are using real time GPS solutions (RTK, VRS) and not base or OPUS methods, to what do we constrain networks?

The a priori one-sigma error ellipses of the GPS shots, connected by the GPS vectors or total station work in least squares. If there are only bare naked coordinates with a priori error ellipses and no connecting vectors or TS observations, one can guesstimate by adding the two error ellipses and dividing by two, but that's not rigorous, or proof, just a guesstimate.

### 3. For 1 & 2 above, how do we justify, analyze, prove compliance with codes dictated for professional practice (see description of Indiana code in original post)?

A network least squares package of some sort, Star*Net or Columbus or some of the other vendor offerings.
You don't have to survey it twice if you get enough redundancy the first time.
There is always that mythical "independent survey of higher accuracy" which is like punting to the sky gods. No such thing. Use the least squares, use redundant cross ties, print the least squares output including the RPA report for the file.
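For what it's worth, the number those packages report between any two adjusted points is essentially the 95% semi-major axis of the relative error ellipse. A bare-bones sketch of that computation, assuming you can pull the 2x2 covariance blocks out of the adjustment:

```python
import numpy as np

def rpa_95(cov_a, cov_b, cov_ab=None):
    """95% relative positional precision between two adjusted 2D points:
    semi-major axis of the relative error ellipse, scaled from 1-sigma to 95%."""
    if cov_ab is None:
        cov_ab = np.zeros((2, 2))                 # treat the points as uncorrelated
    cov_rel = cov_a + cov_b - cov_ab - cov_ab.T   # covariance of the vector A -> B
    semi_major = np.sqrt(np.linalg.eigvalsh(cov_rel).max())
    return 2.4477 * semi_major                    # 2D chi-square factor for 95%

# Example: two points, each with 0.01 ft standard errors, no correlation
cov = np.diag([0.01**2, 0.01**2])
print(f"{rpa_95(cov, cov):.3f} ft")               # about 0.035 ft
```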

 
Posted : March 23, 2017 5:05 am
(@scott-in-indianapolis)
Posts: 223
Member
Topic starter
 

Paul, thanks for the answer!

So, not to be argumentative, but I'll play devil's advocate here. I do agree with your methods of assessing error. In fact, we have several methods to "prove our error". I think watching your backsight distance and hitting a check shot on a foresight are still great techniques (i.e., check within 2-4 hundredths, great!). BUT, if we are going to follow the code, we have to run closed traverses (or run from/to known points) and use static GPS, followed by error correction (traverse adjustment, compass rule, least squares, etc.).

What I am interested in here is not whether we are checking ourselves and how, but how/if others are addressing error analysis/correction as dictated by code. It seems most (not all) aren't. What are your thoughts?

 
Posted : March 23, 2017 5:18 am

(@scott-in-indianapolis)
Posts: 223
Member
Topic starter
 

Half Bubble - elephant in the room? Exactly! Thanks for the input.

So I agree with your conclusions. The practical takeaway here is that most of the techniques we seem to be using do not have the redundancy to allow for the analysis dictated by code. Can we have good "guesstimates" as you indicate? I think so - in fact, I think they are probably satisfactory.

I just want to know if people are rigidly applying/following the code. How and why or why not?

 
Posted : March 23, 2017 5:25 am
nate-the-surveyor
(@nate-the-surveyor)
Posts: 10522
Member
 

We approached it from a small sampling of redundancy.
Most surveys, we ran closed traverses. Most closures were 1:20,000 or better.
We occasionally ran additional cross ties.
The cross ties showed that thoughtful adjustment had removed about 1/2 of the raw closure error.
So, run a traverse around a section, a 4-mile loop. Closure error would be less than a foot. Angular error, around 50".
We'd honestly analyze it, feather in the angular error, then prorate the little that was left.
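That "feather the angles, then prorate" routine is basically equal distribution of the angular misclosure followed by a compass-rule (Bowditch) proration. A rough sketch of the proration step, with my own variable names rather than anyone's actual workflow:

```python
import math

def compass_rule(legs):
    """Distribute a loop misclosure by the compass (Bowditch) rule.
    legs: list of (delta_n, delta_e) per traverse leg, after the angular
    misclosure has already been spread equally among the angles."""
    mis_n = sum(dn for dn, _ in legs)              # a perfect loop would sum to zero
    mis_e = sum(de for _, de in legs)
    total = sum(math.hypot(dn, de) for dn, de in legs)
    return [(dn - mis_n * math.hypot(dn, de) / total,
             de - mis_e * math.hypot(dn, de) / total) for dn, de in legs]
```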
When we moved to GPS,
Oops... Gotta go...

 
Posted : March 23, 2017 6:01 am
(@mark-mayer)
Posts: 3370
Member
 

Scott Bordenet, post: 419766, member: 10097 wrote: What are people really doing to justify and/or adjust precision and accuracy?

If you do a least squares adjustment of your data using StarNet you will get a report on relative positional accuracy. Boom.

If you follow procedures that normally yield relative positional accuracies within the specified tolerances, you will have met the spec in spirit, at least. Check out Bill Henning's document on RTK positioning.

BTW, that language is identical to that used in the ALTA spec.

 
Posted : March 23, 2017 6:03 am
(@mark-mayer)
Posts: 3370
Member
 

Scott Bordenet, post: 419834, member: 10097 wrote: What I am interested here is ... how/if others are addressing error analysis/correction as dictated by code. It seems most (not all) aren't. What are your thoughts?

My thoughts are that you are quite correct. It isn't being done. But did you ever hear of anybody who lost their license, or even was disciplined by the board, because their relative positional accuracy was 0.12'? Are your board members capable of comprehending the meaning of relative positional accuracy?

 
Posted : March 23, 2017 6:09 am
(@jakethebuilder)
Posts: 16
Member
 

Mark Mayer, post: 419844, member: 424 wrote: If you do a least squares adjustment of your data using StarNet you will get a report on relative positional accuracy. Boom.

If you follow procedures that normally yield relative positional accuracies within the specified tolerances, you will have met the spec in spirit, at least. Check out Bill Henning's document on RTK positioning.

BTW, that language is identical to that used in the ALTA spec.

I agree; following procedures that normally yield the required accuracies should be the approach, whether you actually test it or not. It follows the guidelines in the NSSDA standard: "This data was compiled to meet" a certain accuracy.

 
Posted : March 23, 2017 6:23 am

(@scott-in-indianapolis)
Posts: 223
Member
Topic starter
 

Mark, I like your input about whether our board has ever disciplined anyone on this issue. I have often thought this myself - and the answer here is no, not that I am aware of. So why have they put this section regarding RPA (as we call it here) in the code? It has been adjusted over the last few iterations of the code, so they are watching it... but nobody does it.

I am trying to develop a company SOP that goes beyond "take a check shot at each set up - should be less than 3-4 hundredths h & v".

 
Posted : March 23, 2017 6:27 am
GaryG
(@gary_g)
Posts: 620
Supporter
 

Maryland:

G. Accuracy Standards.
(1) The maximum allowable relative positional precision for boundary surveys shall be 0.07 feet (or 2 centimeters) plus 50 parts per million, based on the direct distance between the two corners being tested.
(2) The surveyor shall ascertain that the positional uncertainties resulting from the survey measurements do not exceed the allowable relative positional precision.
(3) If the size or configuration of the property to be surveyed or the relief, vegetation, or improvements on the property will result in survey measurements for which the relative positional precision will exceed the allowable amount, the surveyor shall add a note to a survey explaining the site conditions that necessitated the deviation from the relative positional precision.
(4) The surveyor shall, to the extent necessary to achieve the standards set forth in §G of this regulation, compensate or correct for systematic errors, including those associated with instrument calibration.
(5) The surveyor shall use appropriate error propagation and other measurement design theory to select the proper instruments, field procedures, geometric layouts, and computational procedures to control and adjust random errors to achieve the allowable relative positional precision tolerance.

If the project is large we do run closed loops. Every project is unique so we adjust and evaluate independently with the code in mind.

Every setup gets a BS check. If we had 0.02 - 0.04' of error on a backsight I would be worried something was wrong. Usually we average 0.004 - 0.01'.

Multiple angle measurements, averaged foresight and backsight distances. Cross ties if possible. Never had an issue yet, knock on wood, with corners not checking within 0.005' - 0.02'.

 
Posted : March 23, 2017 6:55 am
(@scott-in-indianapolis)
Posts: 223
Member
Topic starter
 

Gary_G - so based on your input (esp. #3 & #4 of your G. Accuracy Standards), in Maryland, is it left to the surveyor to determine (1) IF, (2) when and (3) how to adjust the survey?

 
Posted : March 23, 2017 7:03 am
GaryG
(@gary_g)
Posts: 620
Supporter
 

Yep, it really allows for different methodologies to get to the 0.07' + 50 ppm, and keeps from legislating "turn this many angles for this type of survey" type of language.

 
Posted : March 23, 2017 7:27 am
(@scott-in-indianapolis)
Posts: 223
Member
Topic starter
 

So, by your code, you could drop in some control by RTK, set up with a gun, check your backsights and foresights from the control, determine that you have less than 0.07' + 50 ppm (plus make allowance for side shots from the control), and say that there is no need for adjustment. Right?

 
Posted : March 23, 2017 7:31 am

GaryG
(@gary_g)
Posts: 620
Supporter
 

Remember, the code doesn't talk about control; it references "between any 2 corners being tested".

We OPUS two points all the time. Hold one fixed and the other for azimuth, and run our traverse. We locate our corners using the same angle and distance standards we are comfortable with for our control.

 
Posted : March 23, 2017 7:44 am
(@james-fleming)
Posts: 5696
Member
 

gary_g, post: 419861, member: 1026 wrote: If we had 0.02- 0.04 error on a back sight I would be worried something is wrong.

0.02 - 0.04' of error on the backsight check and the equipment's going to the shop. If I'm still getting that after everything is checked and calibrated, then the field crew is updating their resumes 😮

 
Posted : March 23, 2017 7:54 am
(@scott-in-indianapolis)
Posts: 223
Member
Topic starter
 

Gary_G, - good point about control and corners.

As I read our code in its current form, I really don't see any requirement to adjust - or even to test, for that matter. I am not sure how I feel about that. The code just specifies the end-result requirement.

I suppose it could be argued that based on the angular and distance tolerance of the gun and length of shot, measurements will have error less than the max allowed error, so no adjustment is necessary. I definitely know how I feel about that.
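For what that argument is worth, the back-of-the-envelope propagation looks something like this (the instrument specs here are placeholders, not anyone's actual gun):

```python
import math

def shot_sigma_ft(distance_ft, angle_sec=3.0, edm_const_ft=0.007, edm_ppm=2.0,
                  centering_ft=0.005):
    """Rough 1-sigma positional uncertainty of a single total station shot,
    combining angle, EDM, and centering errors in quadrature.
    All spec values are placeholders, not any particular instrument."""
    angle_rad = angle_sec * math.pi / (180 * 3600)
    lateral = distance_ft * angle_rad                      # across-line, from angle error
    radial = edm_const_ft + distance_ft * edm_ppm / 1e6    # along-line, from EDM spec
    return math.sqrt(lateral**2 + radial**2 + centering_ft**2)

# Example: a 500 ft shot comes out around 0.012 ft, well under 0.07' + 50 ppm (0.095 ft)
print(f"{shot_sigma_ft(500.0):.3f} ft")
```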

 
Posted : March 23, 2017 7:57 am
(@scott-in-indianapolis)
Posts: 223
Member
Topic starter
 

James, I like your attitude!

But I guess it depends on what we are doing. Topo while holding an unstabilized rod in a corn field vs. control on legs and a tribrach are very different things.

 
Posted : March 23, 2017 8:00 am
(@mark-mayer)
Posts: 3370
Member
 

Scott Bordenet, post: 419850, member: 10097 wrote: I am trying to develop a company SOP that goes beyond "take a check shot at each set up - should be less than 3-4 hundredths h & v".

Check shots will routinely be less than "3-4 hundredths h & v" when you institute a policy of routinely checking and adjusting tribrachs and rods, redundantly measuring points, and analyzing the results using LS. And then recording and checking the check shots at the office as well as in the field.

In other words, when the boss is willing to focus his time, energy, and resources on it, the people will make it happen. You get what you measure and reward. If you are not getting what you want, review what you are measuring and rewarding.

 
Posted : March 23, 2017 8:46 am
