Could someone point me in the right direction to be able to show our confidence % using RMSE? (Not sure how to calculate RMSE currently from our workflow.)
Do we need to set the controllers to do averaged points on at least 20 shots? For how long?
This is for a base/rover RTK GNSS setup that will sometimes be tied to Trimble RTX and other times burning an OPUS file.
We use Carlson Survey, and I'm hoping this is something Carlson can calculate in a report. Is there an adjustment that takes place after RMSE is calculated? I had a short talk with Carlson tech support, and they said SurvNet should do everything we need, but that we need to be processing the raw files instead of exporting CSVs.
This is kind of new to us, so please don't roast me too hard, just trying to learn and do better.
We are also looking into getting a lidar drone, and the few courses I have taken are all about RMSE and 95% confidence.
It's complicated, and not at all intuitive. Glad you're trying to learn, but without some foundational knowledge of stats, the potential for a blunder is high.
Start by watching some YouTube or reading a bit about the relationship between a given data set's mean and standard deviation. RMSE is roughly equivalent to the 68% confidence level, or 1 sigma, when the errors are centered on zero. When you can explain the difference between 1 sigma, 2 sigma, and 3 sigma, you're on track to being able to assess the quality of your data.
Look up some of my posts from the last six months or so, and you'll find a spreadsheet that allows you to copy and paste coordinates into it to calculate 2 sigma confidence. Or, just ask a top tier AI to create a similar spreadsheet.
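A quick way to make the 1/2/3-sigma idea concrete is a few lines of Python. This is a rough sketch, not that spreadsheet: the residuals below are made-up example values, and RMSE is taken about zero, which is the usual convention for check-shot errors.

```python
import math

def sigma_stats(residuals):
    """RMSE (~1 sigma about zero) plus the 2- and 3-sigma bands
    for a list of check-shot residuals (observed minus accepted)."""
    n = len(residuals)
    rmse = math.sqrt(sum(r * r for r in residuals) / n)
    return rmse, 2 * rmse, 3 * rmse

# Hypothetical vertical residuals (ft) from comparing RTK shots to control
residuals = [0.02, -0.03, 0.01, 0.04, -0.02, 0.00, 0.03, -0.01]
one, two, three = sigma_stats(residuals)
print(f"RMSE (1 sigma): {one:.3f} ft")   # ~68% of errors expected inside
print(f"2 sigma:        {two:.3f} ft")   # ~95%
print(f"3 sigma:        {three:.3f} ft") # ~99.7%
```

Paste real residuals in place of the example list and the three bands fall out directly; a value landing outside 3 sigma is a strong blunder candidate.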
Thank you so much, I will start looking at your old posts.
A scientist buddy of mine once quipped: "95% confidence level is a wild ass guess."
Simply put, it's a range (usually an error ellipse in surveying) where you can expect the point to be inside of 95 times out of 100.
Expand the range and you can approach 100%.
Rather than statistics focus on redundancy.
Let the statistics help "prove" your valid points with redundant data.
After reading up more on this, I have (with the help of AI) come up with a rough process. Could y'all help adjust this process where needed, please?
Field Workflow:
- Set autonomous base on CP1 and start logging for OPUS-RS (1 hour minimum) (NO RTX)
- Start topo work (averaged 20 epochs per shot, ~15 s a shot; averaging over 20 epochs reduces blunders)
- Minimum 20 CPs distributed evenly through the project; each needs a shot tied to the current survey. Without moving the rover, take another shot, but switched over to raw logging and occupied for 15 min minimum. This should create a raw file (.t02) for each CP (this is important for being able to calculate 95% confidence)
- Note here: only 3 CPs have to be physically set, but to meet the FGDC/NSSDA standards, 20 CPs have to be used to calculate 95% confidence.
- When topo work is finished, end survey.
- Submit data to Office for processing.
Office Workflow
- Convert & run OPUS on the base
- Convert the base .t02 to RINEX (Tools ► RINEX Converter).
- Upload the RINEX file to OPUS-RS and save the report (NAD 83(2011) E/N/Z + σ).
- Build the SurvNet job
- Create a new project; set units & projection.
- Import raw RINEX files (not OPUS-processed):
- Base RINEX (raw).
- Twenty rover RINEX files from the 15-min checkpoint sessions (raw).
- Enter CP-1 with the OPUS coordinates and mark it Control (Hold).
- Add the RTK field data
- Import the rover’s RW5/RWX (or LandXML) that contains all 20-epoch RTK shots, including the 20 checkpoints.
- Adjust the network
- Select Process ► Adjust Network.
- Verify the report: global variance factor ≈ 1; no residual exceeds ± 2 sigma.
- Generate 95 % confidence statistics in SurvNet
- Set Confidence Level = 95 % (Settings ► General).
- Run Reports ► Coordinate Differences / Accuracy Stats:
- Choose RTK coordinates as Test Points.
- Choose PPK checkpoint coordinates as Control Points.
- SurvNet outputs Horizontal & Vertical RMSE (1 sigma) and automatically scales to CE95 (× 1.7308) and LE95 (× 1.9600).
- Output
- Save the SurvNet Adjustment Report (ellipses, RMSE, CE95/LE95).
- Export the shifted coordinate file (CRD / LandXML / DWG); all topo points are now on the OPUS-anchored NAD 83(2011) frame.
- Archive the OPUS report, SurvNet report, and final coordinate file.
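If it helps to sanity-check what SurvNet reports, the NSSDA arithmetic in step 6 above (RMSE scaled by 1.7308 horizontally and 1.9600 vertically) can be reproduced in a short Python sketch. The checkpoint differences here are made-up example numbers, not real data, and the 1.7308 factor assumes the easting and northing RMSEs are roughly equal, as NSSDA does:

```python
import math

def nssda_accuracy(checks):
    """NSSDA 95% positional accuracies from checkpoint differences.
    checks: list of (dE, dN, dZ) = RTK value minus independent checkpoint,
    all in the same units.
    Horizontal: CE95 = 1.7308 * RMSE_r (valid when RMSE_E ~= RMSE_N).
    Vertical:   LE95 = 1.9600 * RMSE_z."""
    n = len(checks)
    rmse_e = math.sqrt(sum(de * de for de, _, _ in checks) / n)
    rmse_n = math.sqrt(sum(dn * dn for _, dn, _ in checks) / n)
    rmse_z = math.sqrt(sum(dz * dz for _, _, dz in checks) / n)
    rmse_r = math.sqrt(rmse_e ** 2 + rmse_n ** 2)
    return 1.7308 * rmse_r, 1.9600 * rmse_z

# Hypothetical differences (ft); a real run would use all 20 checkpoints
checks = [(0.02, -0.01, 0.03), (-0.03, 0.02, -0.04), (0.01, 0.01, 0.02),
          (0.00, -0.02, -0.01), (0.02, 0.03, 0.05)]
ce95, le95 = nssda_accuracy(checks)
print(f"CE95 = {ce95:.3f} ft, LE95 = {le95:.3f} ft")
```

If the numbers you compute this way differ materially from the SurvNet report, something upstream (units, which points were held, which file was imported) is worth a second look.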
Other Notes:
If a surveyor doesn't tie the project to OPUS (or any other NSRS control), the only way to make an honest "95% confidence" statement is to:
- run a fully redundant, least-squares–adjusted control network within the project, and
- verify that network with independent check-points whose coordinates come from a method demonstrably more accurate than the data being certified.
If they skip the independent check or use the same RTK stream for both “truth” and “test,” the number they publish is only an internal precision figure, not an FGDC-compliant 95 % positional accuracy.
General Workflow without OPUS:
- Control network: run a closed traverse or dual-session GNSS loop; adjust in SurvNet; hold one monument fixed.
- Topo & detail: use RTK or total station referenced to that adjusted control.
- Independent checkpoints: observe ≥ 20 (FGDC) or ≥ 30 (ASPRS 2023) points with a higher-accuracy or clearly different method.
- Stats: SurvNet: Reports ► Coordinate Differences → RMSE → CE95/LE95 or RMSE classes.
For a dual-session GNSS loop, basically run the same survey (just the control-point loop) 3 hours apart. Three hours because by then you get 35-45° of satellite-sky rotation; roughly one satellite has risen or set. Most DOT specs adopt a 3-hour or next-day rule.
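One way to sketch the dual-session redundancy check in code: compare the two sessions point by point and flag anything that moved more than a tolerance. This is a rough Python illustration; the point IDs, coordinates, and tolerance values are all hypothetical, not from any spec:

```python
import math

def flag_session_diffs(session1, session2, tol_h=0.07, tol_v=0.10):
    """Compare two independent GNSS sessions on the same control points
    and flag any point whose horizontal or vertical difference exceeds
    the given tolerances (project units; values here are made up)."""
    flagged = []
    for pid in session1:
        e1, n1, z1 = session1[pid]
        e2, n2, z2 = session2[pid]
        dh = math.hypot(e2 - e1, n2 - n1)  # horizontal difference
        dv = abs(z2 - z1)                  # vertical difference
        if dh > tol_h or dv > tol_v:
            flagged.append((pid, dh, dv))
    return flagged

# Hypothetical control coordinates (E, N, Z) from two sessions
s1 = {"CP2": (1000.00, 5000.00, 600.00), "CP3": (1200.00, 5100.00, 602.50)}
s2 = {"CP2": (1000.02, 4999.99, 600.03), "CP3": (1200.15, 5100.08, 602.48)}
for pid, dh, dv in flag_session_diffs(s1, s2):
    print(f"{pid}: dH={dh:.3f}, dV={dv:.3f} exceeds tolerance")
```

Anything flagged goes back in the field for a third occupation rather than into the adjustment; that is the redundancy doing its job.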
This is the statement our state is now requiring on all topo surveys and we are just trying to be compliant.
" I, ______________________, certify that this project was completed under my direct and
responsible charge from an actual survey made under my supervision; that this_______________
(insert as appropriate: ground, airborne or spaceborne) survey was performed at the ___ percent
confidence level to meet Federal Geographic Data Committee Standards; that this survey was
performed to meet the requirements for a topographic/planimetric survey meets the Oklahoma
Minimum Technical Standards for the practice of land surveying as adopted by the Oklahoma
State Board of Licensure for Professional Engineers and Land Surveyors. The original data was
obtained on _____(date)__________; that the survey was completed on ___(date)_______; that
contours shown as [broken lines] may not meet the stated standard; and all coordinates are based
on____________________________ 'NAD 83' and realization (date of adjustment of coordinate
system) or 'NAD 27' and all elevations are based on ________________ (NGVD 29, NAVD 88,
or other)."
That control cert is wild.
The state board concocted that thing?
Good grief. What a statement. Meet FGDC yet technical standards, blah blah blah. Maybe I missed something, but for FGDC it takes a lot more to meet those standards. NGS pub 92 aids greatly in guidance on meeting those standards.
Yes, this is what we were provided this year at the conference; we were thinking the same thing. One thing that seems like overkill is the requirement of a minimum of 20 control points and the redundancy of shooting each, from those FGDC standards.
I'm guessing this is what they are after, with the CE95 and LE95 being what the processed data gives:
I, ________________, certify that this project was completed under my direct and responsible charge from an actual survey made under my supervision; that this GROUND survey was performed at the 95% confidence level, achieving a horizontal positional accuracy of 0.10ft (CE95) and a vertical positional accuracy of 0.40ft (LE95) to meet the Federal Geographic Data Committee National Standard for Spatial Data Accuracy; that this survey was performed to meet the requirements for a topographic/planimetric survey contained in the Oklahoma Minimum Standards for the Practice of Land Surveying as adopted by the Oklahoma State Board of Licensure for Professional Engineers and Land Surveyors. The original data was obtained on April 21, 2025; the survey was completed on April 24, 2025; contours shown as depicted in the map legend may not meet the stated standard; and all coordinates are based on the Oklahoma State Plane Coordinate System, North Zone, NAD 83 (2011), derived from Global Navigation Satellite System (GNSS) observations processed through the National Geodetic Survey’s Online Positioning User Service (OPUS-RS), and all elevations are based on NAVD 88.
Seems very excessive....
I don't do this type of work but was always cognizant to not certify myself to death. Could we see some accepted certs that are currently used?
This is a link to our topo/plan minimum standards with some examples:
A metes legal description is OK? Almost no bounds calls?
Chef's kiss is SF to 1/100 trillion.
You should read through the 2023 ASPRS standards. There's some solid advice in between the more technical portions of the document. I would never view the thirty checks as optional; to do so when one is aware they're new to UAS surveying would be the definition of hubris. Verifying accuracy is what separates surveyors from folks who like to fly drones and play on their computers. When taking the thirty or more checks, try to prove the lidar or photo mission right. Meaning, take the checks on the flattest, hardest surface you can find, not in places where the drone data is likely to be poor, like briar-choked ditches, tops and toes, etc. When you first experiment with your drone, it's great to go to a site you've manually surveyed and get a feel for the inaccuracies; just don't do this with your mandatory ASPRS checks.
NC's certs are ridiculous too and somewhat insulting when you consider that many states use a boiler plate, "I certify that this survey conforms to the standards of practice per...." That said, you should also include metadata which is easy to add once you make a template. It should contain the height above ground distance and the sensor or camera make and model.
Thank you everyone and I'll read through the ASPRS. My main concern right now is that we are meeting min standards on our topo surveys.
To meet those standards, be sure your GPS data is tight; you don't need 10 hours per point on different days. You can occupy pt #1, collect static data on it as a base point, RTK pts 2 & 3, then relocate them a second time. It's best to move the base to pt 2 or pt 3, or simply break the base setup, re-level, restart it on pt #1, and check 2 & 3 again. You should achieve 95% easily.
Do your topo using a 4-wheeler, a drone, or walking, whatever works. Check the topo: if you use a drone, ride your 4-wheeler across the site collecting shots; if you use a 4-wheeler, then use the receiver on a pole to collect 20 shots. Check to be sure your base HI is good and your setup is over the point, at least by breaking it. And do multiple locations to the control: locate when you set up and again just before you pick up.
As far as the examples given, not very impressive; the coordinates don't seem to match up with the lat/long using TBC or NCAT. Maybe I'm missing something there. The boundary cert is simplified, and why that won't work for control or topo is a mystery to me; what a convoluted mess each of them is. Frankly, I wouldn't sign them. But I don't practice there, so good luck. This stuff isn't difficult: think of first principles and how to protect yourself in the off chance someone claims negligence.
And a one-sigma bound is a tighter requirement than a two-sigma bound. If you believe that (hint: it's true) and can explain why, then you'll be hailed as an expert at your next gathering.
@mathteacher I will give her a go. Precision: 2 sigma covers a wider range, so in theory more of the data falls within it; at 1 sigma, less data falls within that range, roughly 68% vs 95%. Now, we can be precisely wrong, so even if we went to 99%, the data can repeat and fall within those constraints and still be precisely wrong from a blunder or a systematic error not accounted for. But it all boils down to that old phrase: there are lies, damn lies, and statistics, lol.
We repaired an old steel tape once and forgot to account for the 1 ft difference after it was patched back together with a piece from another steel tape. Man, we could measure all our houses and buildings and they closed well, yet we were around a foot off between our old total station shots and the taped distances. Yeah, pop-riveting tapes together was a good skill to learn back then, but dagnabbit, when you splice and forget to account for that missing foot. We had a lady that used to sew up our old rag cloth tapes with her sewing machine, splicing pieces from old ones: under 50 ft we were off a foot, and past the 50 ft mark, off two feet. Gotta love trying to save a nickel but spending a dollar back then. Lol.
You're pretty much spot on. There are two errors that you can make: 1) reject good data, or 2) accept bad data.
A 95% test lowers the chance of rejecting good data, but it increases the chance of accepting bad data when compared to a 68% test.
Assuming that a surveyor wants a reduced chance of accepting bad data, the 68% test is the more stringent.
Speaking from your experience, have you ever heard someone say that his data failed the Chi-squared test at 95% but passed at 99%? That 99% test increases the acceptance region for data and indicates a preference for accepting bad data over rejecting good data.
There's always, always a trade-off between the two types of errors. Reducing the chance of one increases the chance of the other.
The only way to reduce the chance of both simultaneously is to increase the sample size; i.e., increase the number of observations. In my unwashed opinion, a minimum of 20 observations is a very low standard. Now, a lot, a whole lot, depends on the inherent accuracy of the process, including both machine and human, but I think that the threshold for escaping small sample-size statistics is 30 observations.
Mine is a theoretical world while yours is a practical world, and I wouldn't for a minute question what works. Both worlds are probabilistic, though, so errors are going to occur in both. The trick is to minimize the error type that's most damaging to you.
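The trade-off above is easy to demonstrate on a network variance-factor test. In this Python sketch, the critical values come from a standard chi-squared table for 10 degrees of freedom, and the residual sum is an invented example chosen to fail at 95% but pass at 99%, the exact situation mentioned earlier in the thread:

```python
# One-sided test on the network variance factor: raising the confidence
# level widens the acceptance region, so marginal (possibly bad) data
# passes more easily.
DF = 10  # degrees of freedom for this hypothetical network
CHI2_UPPER = {0.95: 18.307, 0.99: 23.209}  # upper-tail critical values, df = 10

def variance_factor_passes(vtpv, level):
    """Does the weighted sum of squared residuals (v'Pv) stay under the
    chi-squared critical value at the given confidence level?"""
    return vtpv <= CHI2_UPPER[level]

vtpv = 20.0  # hypothetical weighted sum of squared residuals
print(variance_factor_passes(vtpv, 0.95))  # prints False: fails at 95%
print(variance_factor_passes(vtpv, 0.99))  # prints True: passes at 99%
```

The same data set fails the tighter test and passes the looser one; nothing about the measurements changed, only the willingness to accept them.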