
GNSS RTK Accuracy

32 Posts
15 Users
0 Reactions
207 Views
(@bc-surveyor)
Posts: 228
Member
Topic starter
 

I'm trying to get a solid handle on exactly what manufacturer GNSS RTK accuracy specs actually mean, and was hoping we might have an expert here who can help clear things up.

Most manufacturers quote their "accuracy" using RMS, usually with a few asterisks about being free of multipath, satellite geometry, atmospheric conditions, etc., etc.

I reached out to a few of the big guys (Leica, Trimble, etc.), and the ones that responded said they're using ISO 17123-8. Fair enough, but in that set of standards they clearly state that this testing procedure is a means of measuring precision, not accuracy. Which makes much more sense to me, but let's move on from that for now.

The ISO basically sets out a methodology of measuring two nearby points (within 2-20 m of each other), inversing between the measured points to derive a delta Hz and delta Z, and comparing those values with much more precise conventionally measured values (to within 3 mm, to be specific). And of course doing this many times, with ample time in between observations to allow for changes in satellite configuration and variations in ionospheric and tropospheric conditions. As I understand it, the observations are averaged and an RMS is calculated off of that mean (correct me if I'm interpreting the ISO wrong, please).

So let's say we have a value of 1 cm + 1 ppm (horizontally), and let's ignore the ppm for now. That 1 cm should denote that 68% of single-epoch measurements should land within 1 cm of the mean in both the positive and negative directions - so a span of 2 cm. Bump that up to 95% (which is much more appropriate for surveyors to use) and we're looking at a span of 4 cm horizontally. This should just be the span at the receiver head, indicative of the precision. For now we are ignoring all other sources of error, such as how well the HT was measured, how well the rod was centered and levelled, the error at the base position, etc.
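
If I'm reading the procedure right, the arithmetic itself is simple enough to sketch out. Here's a minimal Python sketch of it on entirely hypothetical numbers (the point coordinates, noise level, and reference deltas are all made up):

```python
import numpy as np

# Sketch of the ISO 17123-8 arithmetic as I read it -- every number here
# (coordinates, noise level, reference deltas) is hypothetical.
rng = np.random.default_rng(0)

true_A = np.array([100.0, 200.0, 50.0])        # E, N, H of point A (m)
true_B = true_A + np.array([10.0, 5.0, 0.7])   # point B, within 2-20 m
ref_dHz = np.hypot(10.0, 5.0)                  # conventional tie, good to 3 mm
ref_dZ = 0.7

# Repeated GNSS observation sets of both points (1 cm noise per axis)
obs_A = true_A + rng.normal(0.0, 0.010, (20, 3))
obs_B = true_B + rng.normal(0.0, 0.010, (20, 3))

# Inverse between the measured points for each observation set
dHz = np.hypot(obs_B[:, 0] - obs_A[:, 0], obs_B[:, 1] - obs_A[:, 1])
dZ = obs_B[:, 2] - obs_A[:, 2]

# RMS of the deviations from the conventionally measured deltas
rms_Hz = np.sqrt(np.mean((dHz - ref_dHz) ** 2))
rms_Z = np.sqrt(np.mean((dZ - ref_dZ) ** 2))
print(f"RMS Hz: {rms_Hz * 100:.2f} cm, RMS Z: {rms_Z * 100:.2f} cm")
print(f"~95% per-axis bound: {1.96 * rms_Hz * 100:.2f} cm")
```

The 1.96 factor is the one-dimensional Gaussian scaling for 95%; a two-dimensional horizontal error distribution scales a bit differently, which ties into the next point.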

I believe the issue arises from using RMS, because that would assume deviations in RTK observations are Gaussian, which, as I understand it, they are not. Therefore this is not an accurate way to measure precision. A better way may be to look at the span of single-epoch measurements over an extended period of time.
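
One way to check that assumption directly is to compare empirical percentiles of the deviations against what a Gaussian with the same RMS would predict. A minimal sketch, using synthetic stand-in data (real logged per-epoch deviations would go in `dev`):

```python
import numpy as np

# Check the Gaussian assumption by comparing empirical percentiles of the
# deviations to what a Gaussian with the same RMS predicts. The synthetic
# data here is a stand-in -- real logged per-epoch deviations go in `dev`.
rng = np.random.default_rng(1)
dev = rng.normal(0.0, 0.010, 10_000)   # per-epoch deviations from the mean, m

rms = np.sqrt(np.mean(dev ** 2))
p68, p95 = np.percentile(np.abs(dev), [68, 95])

print(f"RMS:               {rms:.4f} m")
print(f"Gaussian 68 / 95:  {1.00 * rms:.4f} / {1.96 * rms:.4f} m")
print(f"Empirical 68 / 95: {p68:.4f} / {p95:.4f} m")
# If the empirical 95th percentile runs well past 1.96 * RMS, the tails are
# heavier than Gaussian and RMS understates the real spread.
```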

To get an idea of absolute accuracy for that ISO standard, we would need to run it on a control network oriented to grid north and tight to 3 mm, and take a holistic view of all other sources of error. For relative accuracy, we would at least need to compare the vectors between the base and rover points to known (as best as we can) values.

This is mostly just me thinking out loud, and I could very well be wrong in several places here; I'm looking for any opinions on the matter. If at all possible, could you please link literature that supports your argument? I would be extremely thankful, as it's tough to get a verified answer on this stuff. The manufacturers seem to not want anyone digging into this too much, as when I started asking questions I got radio silence.

One other question I had: if I take an RTK observation of, let's say, 30 seconds, is the data collector doing anything more than just averaging those measurements? For example, is it weighting them based on an approximation of their quality? And if so, how is that approximation calculated? I would assume using something along the lines of the geometry of the satellites and the number of satellites, but there have to be other measurables too. Does it know the effects of ionospheric and tropospheric conditions at every moment?
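
To make the weighting question concrete, here is what an inverse-variance weighted average would look like versus a straight average. This is just one plausible scheme on made-up numbers - I have no idea whether any data collector actually does this; the per-epoch sigmas would come from the RTK engine's own quality estimates:

```python
import numpy as np

# One plausible scheme (inverse-variance weighting) on made-up numbers --
# whether any data collector does this or just straight-averages is exactly
# the question. The per-epoch sigmas would come from the RTK engine's own
# quality estimate (satellite count, DOP, residuals, ...).
epochs = np.array([100.002, 100.005, 99.998, 100.010, 100.001])  # northing, m
sigma = np.array([0.008, 0.009, 0.007, 0.020, 0.008])            # per-epoch sigma, m

w = 1.0 / sigma**2
weighted = np.sum(w * epochs) / np.sum(w)   # down-weights the shaky epoch
simple = epochs.mean()
sigma_mean = np.sqrt(1.0 / np.sum(w))       # formal sigma if epochs were independent

print(f"weighted {weighted:.4f}  simple {simple:.4f}  sigma {sigma_mean:.4f}")
```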

 
Posted : December 16, 2023 1:19 am
(@thebionicman)
Posts: 4450
Supporter
 

BC,

Your thoughts are spot on. The routines used to express 'quality' of RTK observations are generally abuses of statistics tainted by business risk ideas and a touch of philosophy.

I have found nearly every major manufacturer is overly optimistic by a factor of about 2.5. Leica even used to admit that in their user manuals.

Enjoy your quest and keep sharing, Tom

 
Posted : December 16, 2023 1:47 am
(@rover83)
Posts: 2346
Member
 

Fair enough, but in that set of standards they clearly state that this testing procedure is a means of measuring precision, not accuracy.

That's how it works with all measurement systems - repeatability (precision) is the only standard by which they can really be judged or certified. Manufacturers don't know what datum or control users will be measuring, nor can they assume anything about processing and adjustment methodology. The manufacturers don't know whether you're using a fixed height with sandbags or a janky aluminum tripod in high wind with an unadjusted tribrach. All they can do is measure how well that receiver can replicate the same position at the APC.

One other question I had: if I take an RTK observation of, let's say, 30 seconds, is the data collector doing anything more than just averaging those measurements? For example, is it weighting them based on an approximation of their quality? And if so, how is that approximation calculated?

I can only speak to Trimble myself, but my guess is that others use similar methods.

When a measurement is initiated, there are two conditions that must be met - time and number of measurements. The first is pretty simple - the receiver must measure for the amount of time designated in the survey style.

The second condition depends on whether the user has specified "auto tolerances" or has manually set the tolerances in the survey style. If auto tolerance is selected and the operator is using a Trimble receiver, Access knows its precision specifications and will enforce those limits on the data points being collected epoch-to-epoch. So if you are set to collect 30 epochs for an observation with an R12i, and after 10 epochs the data points begin to fall outside the default tolerances, it will throw that "poor precisions" or "position compromised" warning, and prompt you to either store (using the "good" in-tolerance positions) or remeasure.

If the user chooses to define tolerances manually, Access will use those rather than the receiver specs.
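
Put as pseudo-logic, my reading of the behavior described above looks roughly like the sketch below (my paraphrase, not Trimble's actual code; the tolerance values are placeholders):

```python
import numpy as np

# Rough paraphrase of the epoch-to-epoch tolerance logic described above --
# not Trimble's actual code. h_tol/v_tol come from the receiver spec sheet
# ("auto tolerances") or from the survey style ("manual").
def collect(epochs_enh, h_tol=0.02, v_tol=0.04):
    good, bad = [], []
    for e in epochs_enh:
        if good:
            mean = np.mean(good, axis=0)
            dh = np.hypot(e[0] - mean[0], e[1] - mean[1])
            dv = abs(e[2] - mean[2])
            (good if dh <= h_tol and dv <= v_tol else bad).append(e)
        else:
            good.append(e)   # first epoch seeds the running solution
    if bad:
        print(f"poor precisions: {len(bad)} of {len(epochs_enh)} epochs "
              "out of tolerance -- store in-tolerance mean, or remeasure?")
    return np.mean(good, axis=0)   # position stored from the "good" epochs

rng = np.random.default_rng(1)
epochs = rng.normal([500.0, 800.0, 60.0], [0.008, 0.008, 0.015], (30, 3))
print(collect(epochs))
```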

The position quality you see at the top of your screen while walking around with the rover is an estimate only - the proof comes when you initiate that measurement and it starts comparing epoch-to-epoch. That's why the Rapid (instantaneous) measurement method can be so dicey - the quality of that observation is an estimate only, because it's a single data point without anything else to compare it to.

Does it know the effects of ionospheric and tropospheric conditions at every moment?

From what I understand, the top manufacturers do incorporate ionospheric modelling into RTK engines, but usually all they can do is use a predefined model. Some budget receivers have no modelling at all.

Tropospheric effects are weather dependent, so practically speaking they are impossible to model...

(Edit to add: when I choose to display position quality at 2-sigma level, I rarely find it to be overly optimistic if I am working in decent GNSS conditions. My experience is that newer receivers are less optimistic than in the past. Better RTK engines with better and faster real-time testing. The biggest problem with RTK observations is that they are rarely separated by enough time to get a different constellation, and that by far is the largest source of correlation and subsequent over-optimistic estimates of positional error.)

 
Posted : December 16, 2023 2:00 am
(@bruce-small)
Posts: 1508
Member
 

Rather than comparing what the manufacturers say, why don't you ask for a demonstration of each at a calibrated baseline? I have taken my Leica to the Tucson baseline and the results were impressive (using a bipod and repeating each shot five times so the unit uses the average, which is my standard procedure for a control point).

In my opinion repeating with a different constellation has not been relevant for years.

 
Posted : December 16, 2023 2:46 am
(@bc-surveyor)
Posts: 228
Member
Topic starter
 

It's hard enough to get a reply let alone a site visit but that would be ideal.

 
Posted : December 16, 2023 10:14 am

(@olemanriver)
Posts: 2454
Member
 

What has been stated by you and others is very well articulated. I am not so sure my mind can appropriately get the words out, but I will give a few bits.

Absolute accuracy for me with GNSS is about absolute positioning - in other words, anywhere on planet Earth, how accurately can I position a point? I guess our accuracy on the land surveying side would be to the datum itself, or to any standard that has been set, like an actual defined distance - say, a calibration baseline that has been certified. If the great almighty leader declared a new unit of measurement - defined something like the US foot or international foot or the meter (there have actually been a few different ones of those as well) - then that would determine the standard we shall call truth, against which we should measure.

Unfortunately, I know a few states whose regs say that for a total station EDM it's fine to just take it in for a calibration, no need to go to a baseline - which goes back to the statements above: we have precision, not accuracy, unless we test against the truth. I can remember having an invar tape and rods that were certified, and as a USMC geodetic surveyor taking our steel tapes in and getting them checked. Which meant at different readings on that tape we had an extra correction to make besides temperature, sag, tension, etc.

I'm someone who has surveyed and also spent time doing GPS orbits, and I have literally watched and monitored the GPS constellation around the clock. If you want accuracy, you must take into account the constellation change period. Yes, we have multiple constellations now, not just one, and our predictions of the GPS constellation have become superb. I also have a decent handle on the other systems, GLONASS etc. Accuracy comes from that constellation change.

I get tickled looking at RTK observations and then traverses. Someone goes out, VRSes a point twice back to back with 30-second or 5-second or 180-second observations, and says "man, my brand X is within x." Then the traverse doesn't close. I go out to the same exact points with brand C, observe 180 epochs, wait 4 hours, observe again - my spread is more. Take and adjust those, apply the same traverse data, and bam, it closes.

Let's say you set one pair of points on one end of a traverse. By the time you get to the other end it's been 4 hours - it could be 2 or 8 - and you position those. So the first pair repeats and looks good, and the second pair looks good, but do they relate to each other? Only through the stations that are in the network RTK operating stations. If you were in different triangles, you have the station errors themselves. NGS CORS used to only update a position once it got about 2 cm outside the published value; they may have tightened that up, I haven't asked in several years.

Time and redundancy are your friends with RTK base and rover, and with VRS or SmartNet, Leica, TopNET, etc. Leica pushes the 30-second thing really hard vs. Trimble having the 180 epochs. Not saying you can't get good results at 30 seconds, but a lot can go wrong in 30 seconds that cannot be caught. I do VRS and base-rover a lot. I use the robot to test, not just between two points but to a third or even 4th point, by various methods. For Trimble, which is what we use (I have used Leica in the past), all my personal tests for control and boundary corners get a minimum of 180 epochs and a moved base location, with a minimum of 1 hour between observations - but that's not really reality, because I am usually pushing 4 hours doing mapping or something else on that job, so it's usually not done in an hour. But a second observation. I truly like 3, but job scope and such doesn't always allow for that.

Troposphere modeling is possible to an extent, but one must have a large enough network coverage area, or the data must be fed into the network RTK system by other means. Trimble did this quite well. With Geo++ I saw not-so-good results as the weather changed, aka a storm moving across the state. Now, Geo++ did some other amazing things that the Trimble couldn't do as well or at all. But that was in the early 2006-2008 time frame, when we operated both systems simultaneously from the same stations - and a third, but it was not ready for prime time.

Now I say all of that, and this past week I was looking for control around some sediment ponds, in a mountainous area. I simply took the R12 out - no base, no VRS, just raw measurements - and was doing navigate-to-point, and hit 12 points at less than 0.3 ft horizontal and about the same vertical on a job. I had wanted to just get within 10 ft to grab the mag locator. But when the tip of the rod hits an iron pin with cap that was a few tenths below the surface, and no mag locator is required, it was blowing me away. And we do not have the subscription for that CenterPoint RTX or whatever, but I wish we did - it's not even checked in our survey style. I think we are closer than we can imagine to not needing corrections, period. But there's a lot of money to be made with subscriptions to networks and having a base and rover. Lol

 
Posted : December 16, 2023 10:21 am
(@bc-surveyor)
Posts: 228
Member
Topic starter
 

So I went out into the field today to start doing a bit of testing. I went out and observed two points simultaneously using a nearby rover and logged positions at 1 second epochs for just over 3 hours.

I'm actually pretty surprised by what I'm seeing so far.

I haven't run conventional ties between the two points yet, so bear with me on that, but I will update when I get back out there very soon.

For those 10,000+ observations at each point, I'm seeing a standard deviation of about 0.006' in either horizontal axis and 0.015'-0.016' vertically, with my largest outliers at about a tenth horizontally and 0.15' vertically - not bad for 10,000 measurements. When I start grouping observations into 5-, 30-, and 60-second observations, those numbers tighten up a lot. And when I plot the coordinate observations, they sure do look like they are following a normal distribution curve. I did a bit of reading, and it looks like my initial assumption that GNSS observations are not Gaussian wasn't quite accurate (source: https://www.gpsworld.com/gpsgnss-accuracy-lies-damn-lies-and-statistics-1134/)
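
For anyone who wants to replicate the grouping step, it's just a reshape-and-mean over the 1 Hz series. A sketch with stand-in noise at roughly my observed sigma:

```python
import numpy as np

# Group a 1 Hz series into n-second windows, average each window, and look
# at the spread of the window means. Stand-in noise, not my actual log.
rng = np.random.default_rng(2)
east = rng.normal(0.0, 0.006, 3 * 3600)   # ~3 h of 1 Hz easting residuals, ft

for n in (1, 5, 30, 60):
    trimmed = east[: len(east) // n * n]
    means = trimmed.reshape(-1, n).mean(axis=1)
    print(f"{n:3d}-s groups: std = {means.std():.4f} ft")
# With truly independent epochs the std of the group means shrinks like
# 1/sqrt(n); correlated real-world RTK epochs shrink more slowly than that.
```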

I was basing my initial thoughts on a video about RTK observations where the surveyor ran a similar test but got very different results (source: embedded YouTube video).

When I graph any of my three axes, they do not look anything like what he is graphing. My graph is centered on the mean and doesn't trail off in any direction, stay there for a period of time, then make its way back. It's very well distributed around the mean with some noise, and my normal curve verifies this.

Any ideas why we are getting such different results?

 
Posted : December 16, 2023 10:27 am
(@bc-surveyor)
Posts: 228
Member
Topic starter
 

Thanks for the reply!

Everything you said has been echoed to me before by experienced surveyors such as yourself so I am not doubting it. Just relaying what I'm seeing so far from this small data set (more data to come)...

Based on these initial findings, I'm seeing a very minimal change in accuracy between 30-second observations and 180-second observations.

And my position wasn't changing significantly in any repeatable direction over a span of 3 hours, even though, when I look at the skyplot for my location via gnssplanning.com, the geometry of the available satellites changes significantly over that time.

Any thoughts on this?

I'm going to repeat this test on a different day with our other brand of receiver, and also repeat it on two rover points, one semi-obstructed from the sky and one fully obstructed. Then two more times when connected to a network, via single baseline and via VRS.

If anyone has any suggestions on additional tests they think I should run, please let me know.

 
Posted : December 16, 2023 10:37 am
(@olemanriver)
Posts: 2454
Member
 

Hey, take that same data set. Grab 1 epoch at each hour, mean it, and compare it to the mean of all 3 hours. Then take 10 epochs at each hour, mean it, and compare it to both the all-data mean and the 1-epoch value. Do 30, 60, and 180 epochs, then repeat at the half hour, etc. Set parameters of PDOP, RDOP, GDOP, and VDOP for the sample epochs. Do this again in the same spot with the time shifted, so at the end of the 3 hours do it again. Combine all the epochs from the first 3 hours and the second 3 hours, and compare all of the above. You could do this for a full 24 hours, and the truth will usually lie around the 2-hour to 4-hour time frames, on average.
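
For what it's worth, that sampling comparison is only a few lines of Python against the logged series (stand-in data below; the DOP masking is left out):

```python
import numpy as np

# The sampling comparison suggested above, on stand-in data. Real logged
# epochs for one axis would replace `series`; DOP masking is left out.
rng = np.random.default_rng(3)
series = rng.normal(0.0, 0.006, 3 * 3600)   # 3 h of 1 Hz epochs, one axis
overall = series.mean()                     # the "all data" mean

for n in (1, 10, 30, 60, 180):              # epochs per sample
    for hour in range(3):
        sample = series[hour * 3600 : hour * 3600 + n].mean()
        print(f"hour {hour}, {n:3d} epochs: "
              f"mean - overall = {sample - overall:+.4f}")
```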

I would have to really dive into your data set. But here are some things that are not often discussed the way multipath is.

Delta-V. This is where the satellite is maneuvered - a thruster burst - back to where it's supposed to be vs. where it actually is. It drifts. Unfortunately, we are all taught there's no gravity in space, but there is just a small amount. At times unknown to us, each satellite at minimum is sent a message to relay to us. In simple terms this says: stop saying "you are here and this is what time it is"; now say "this is where you are, what time it is, and how fast you are moving." All satellites are not given this message at the same exact time; some might get it more than once a day. All get the new ephemeris every day along with other information. This has proven to improve the user range error. So instead of measuring points on the ground with an uncertainty in their position, we measure where the satellites are.

Timing is another. There's clock drift in the satellites themselves and in the monitoring stations that relate them to us, and the clock in the receiver you and I use is not that accurate. Now, there are multiple ways to handle this issue to an extent, for us or for the manufacturers. It can be as simple as canceling it out algebraically in a base-rover setup, or modeling from both base and rover and establishing a constant, etc. - a few more, but they all have pros and cons. If we could solve for time in real time in the field more accurately, we would be golden. Time seems to win in almost all cases throughout history. Rubidium clocks, masers, and other atomic frequency standards all have unique strengths and weaknesses in timing - and of course cost and longevity as well. Satellites, monitoring stations, and the user end: we all have time to deal with.

 
Posted : December 16, 2023 11:13 am
jhframe
(@jim-frame)
Posts: 7282
Member
 

"if I take an RTK observations of lets say 30 seconds, is the data collector doing anything more than just averaging those measurements?"

When I posed this question to the Javad engineers, the response was that the predicted error for each epoch was considered in the result.

Modern RTK results in perfect conditions are pretty reliable. It's when the conditions are less than perfect that you have to be careful.

I'm fond of the Javad Triumph-LS display, which shows where each epoch is landing. In good conditions they group pretty tightly; in challenging conditions the spreads get looser. With the vertical in particular, if the epochs trend up or down instead of varying around the average, you can tell that the result is not going to be very tight even after 240 epochs (which is what I normally do for control).
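
That trend-versus-scatter distinction can be quantified, too. A rough heuristic of my own (not anything Javad publishes): fit a line to the vertical epochs and compare the total drift across the window to the residual scatter around the fit:

```python
import numpy as np

# Rough heuristic for the "trending vs. varying" read on the vertical --
# my own sketch, not anything Javad publishes. Flags the window when the
# fitted drift across all epochs exceeds the scatter around the fit.
def is_trending(heights, k=1.0):
    t = np.arange(len(heights), dtype=float)
    slope, intercept = np.polyfit(t, heights, 1)
    resid = heights - (slope * t + intercept)
    drift = abs(slope) * (len(heights) - 1)   # total drift over the window
    return drift > k * resid.std()

rng = np.random.default_rng(4)
steady = rng.normal(0, 0.01, 240)                  # varies around the mean
drifting = steady + np.linspace(0, 0.05, 240)      # slow vertical drift
print(is_trending(steady), is_trending(drifting))  # expect: False, True
```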

Probably most important of all is to be familiar with what your equipment does under what conditions, and that's just a matter of experience.

 
Posted : December 16, 2023 11:25 am

(@olemanriver)
Posts: 2454
Member
 

Jhframe, I have not yet been able to use a Javad, but I have talked with people who work for Javad and/or distribute it and use it personally. Javad, I have no doubt, probably has some of the most stringent algorithms and methods for testing before accepting a position. When the man created the Ashtech Z models, they proved to be some of the most sophisticated and durable receivers on the market, and I will be completely honest: I love Trimble, but those old receivers derived some of the best data for geodetic absolute work. I think my geek side would love to use the Javad system, but I know I would probably never get anything done work-wise, as I would push it to its limits and get so caught up in tweaking things beyond what's needed for everyday work. I will at some point probably get one and test it for a week or so.

Trimble is great for reliability and accuracy and production in many different ways for many different tasks. I know it, and I know Leica and NovAtel. I know which one is truly more accurate and precise, but that's not all it takes, so I use Trimble because that's what we have, and I can make the field-to-office flow work well with my limited abilities. NovAtel receivers are probably better than Trimble in many situations, but neither of them meets all the requirements listed for use, and understanding that is why I develop field and office procedures to catch that 5% failure under different circumstances. The Javad I have only seen results from years ago, in a true testing environment along with many others. It soared above all the rest, in almost all cases. The man was a genius - a true scientist, not a marketing wiz. His passion was seen through his creations for sure; he sought the most accurate and reliable in all circumstances. That's why I think those solo folks, once they saw it work, grabbed it and held it close. It's a unique system, outside the box, no pun intended. If all I was doing was setting RTK control and rural boundary work, that's all I would own, honestly - no other equipment needed, maybe a steel tape. I honestly might buy a cheap total station is all, but even then I might just rent one when needed.

Now the R12 is catching up in those hostile environments, and I am getting amazing results in situations where I shouldn't be getting results. I set 3 points in a nasty area, all able to be seen between, and two more in the wide open where I could see into two of the 3. Three observations: 2 the first day, 4 hours apart, and 1 the second day at the 2-hour mark between the first day's, all from different base locations. I had 0.03 hz/vt error after comparing to robot results - 4 measured rounds with a traverse kit running through them all, cross-tied where I could. I get better than that a lot of times, but this was flat-out bad GPS area: at a power plant, under main power lines and tall metal buildings, literally next to the power grid, like 50 ft. Now, my base stations were set outside in good areas; just the control for mapping was set in the bad areas. I did some shots to the building corners with the R12i and the robot reflectorless as well; those crept up closer to a tenth when compared independently.

RTK has become the tool of choice for my crews, and they seem more productive in most cases, even when re-observing control and building corners twice. For the mapping we used the robot, as some items were located inside the buildings themselves. I imagine those in the Midwest and such hardly even need a robot or total station with open skies. Here on the East Coast, lots of trees - pine and holly and magnolia are the worst. Oak, hickory, and poplar leaves don't seem to be as bad, except after a rain when the leaves are wet, or from a heavy dew; once the hardwood leaves dry, it tightens down pretty quickly.

I do wish I had more info to look at, like Javad gives. But I can barely get crews to monitor the SNR, azimuth, and elevation of the sats, and the RMS, during the minimum 180 epochs, so getting them to watch everything else Javad gives might be difficult. Lol. If I am observing personally, I watch, and I might remeasure, or stop and wait and then start again. If a satellite drops below the horizon or one comes in in the middle of an observation, I usually will not hold that as gospel until I have more information. Topo is a little different, as I get into production mode.

 
Posted : December 16, 2023 12:35 pm
(@rover83)
Posts: 2346
Member
 

I was basing my initial thoughts on a video about RTK observations where the surveyor ran a similar test but got very different results

I didn't see a histogram in the YouTube video, but looking at the time series plots, I would bet it would have been more or less Gaussian if they had plotted it out that way.

They may have only been observing one or two constellations, which could account for that data wandering a lot more than yours. Their rover was also 4 miles from the base station - how far was yours? They also only gathered about 15 minutes of observations, and who knows what kind of receiver and what RTK engine they had running.

It's tough to compare these sorts of experiments unless you have the exact same setup and under the exact same conditions. Like Jim mentioned, RTK results with newer receivers in open sky are darned reliable these days, but the proof is in how they react under adverse conditions.

Another thing to consider is that for most RTK surveys, the rover is seeing continually different conditions as the operator moves around the site, regularly dropping & adding SVs and frequencies, losing initialization and regaining it. When we set up a receiver in ideal conditions and proceed to log tons of data at one single location, we're not really replicating what happens during a typical RTK survey.

Take a look at this excellent paper from ODOT concerning the incorporation of NRTK observations into control networks:

https://www.oregon.gov/odot/Programs/ResearchDocuments/SPR304-821_UpdatedSurveyStds.pdf

It's focused on RTN observations, but is probably the best and most recent analysis of how long to observe and how long to wait before repeat observations. I think they are a little conservative, but then again a lot of my experience is with base-rover and full-constellation RTN, and they are trying to replicate static-level results. Regardless, there is a lot of good info there...

 
Posted : December 17, 2023 4:40 am
(@bc-surveyor)
Posts: 228
Member
Topic starter
 

"Another thing to consider is that for most RTK surveys, the rover is seeing continually different conditions as the operator moves around the site, regularly dropping & adding SVs and frequencies, losing initialization and regaining it. When we set up a receiver in ideal conditions and proceed to log tons of data at one single location, we’re not really replicating what happens during a typical RTK survey."

I completely agree. I can't think of a straightforward way to incorporate this factor into reliable testing while still observing enough data to make the test as sound as possible, so I'm all ears if you have any suggestions?

Thanks for the link, 108 pages long... I know what I'm doing for the rest of my Sunday.

 
Posted : December 17, 2023 5:48 am
(@olemanriver)
Posts: 2454
Member
 

Well, we can test through redundancy and by building in independent checks, but for topo that goes out the window - no one is going to shoot a bunch of ground shots twice. I do, however, every so often put a dot on the ground or an X with paint. Years ago, when initializations were not as good as they are today, I would carry old golf tees, stick one in the ground flush with the surface, and call it a ground shot - this was before map screens on data collectors. I kept a pocket field book, and whenever I lost initialization I would go back to that point for a check; back then, if you lost initialization, all points in that time frame were suspect. Now I just navigate back to a point after swapping batteries, or when the job is done I zigzag my way back spot-checking.

The reality is it has become so good that 95% of the observations are probably within tolerance and therefore meet any state standards in that regard. I just build checks. I use a vector spreadsheet in the office to identify any suspicious points. I catch a few, and usually can see the RMS or hz/vt precision ramping up to the worst point. I had to send one crew chief back to a job: he was storing a lot of bad data, just was not paying attention at all and overriding the tolerances. He was having a bad day. I looked at him and said, "I don't care how or when, but you have 8 hours left in the budget. Get it done." He did - and he spent more than 8 finishing the job and correcting his blunders. He admitted he didn't pay attention; he'd had some issues going on the week before.

I can remember a Sokkia total station that made a distinct sound and rhythm when the battery was getting low. You could get some weird, crazy distances to a point, like thousands of feet off. One time it literally gave the same exact distance for several shots, no matter the angle or distance. We learned and were in tune with the equipment, so we usually caught that. No difference for GPS: set up redundancy, spot check every so often, and just roll with it. It's probably fine in ideal conditions. In canopy I am more leery, but it's not supposed to be good there. I am not afraid to use it for control in canopy, because those redundant observations with a gap in time might save me hours of traversing and cutting line if it works; if not, well, off to traversing. I have had 3 observations per point in canopy for many points, and I bet less than 5% were bad enough that I needed to disable those vectors.

What I see is that I need 22 satellites minimum and good geometry (PDOP, VDOP, etc.), and then just let her burn. I kept a log early on when testing the R12i myself: 98% of the time, if it got 180 epochs like clockwork in the 3 minutes, it was good enough to pass ALTA specs and good enough for property corners. If it struggled, then it was suspect; same if I dropped below 22 satellites. This was all me, on several projects, done mostly on my own time and compared to a robotic traverse. Now I do build a network and perform a least squares adjustment on all projects, which can expose some problems as well.
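
The office-side vector check can be as simple as scanning the stored precision estimates for anything past tolerance. A toy version, with made-up point names and numbers:

```python
# Toy version of the office-side vector check: scan stored precision
# estimates and flag anything past tolerance. Names and numbers made up.
points = [
    ("CP1", 0.008, 0.012),       # (name, hz precision ft, vt precision ft)
    ("CP2", 0.009, 0.015),
    ("TOPO114", 0.041, 0.083),   # precisions ramping up -- suspect
    ("TOPO115", 0.062, 0.120),
]

HZ_TOL, VT_TOL = 0.03, 0.06
for name, hz, vt in points:
    if hz > HZ_TOL or vt > VT_TOL:
        print(f"suspect: {name} (hz {hz:.3f}, vt {vt:.3f}) -- reobserve?")
```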

 
Posted : December 17, 2023 6:26 am
(@robertusa)
Posts: 372
Member
 

Looking at Oregon DOT's summary, I don't like their 5-minute observation time. It's not static data, so once the precision numbers stop decreasing, a longer observation isn't needed. In fact, too long an RTK observation can degrade if satellites get into worse geometry or you lose them from obstructions.

 
Posted : December 18, 2023 6:30 am

(@steinhoff)
Posts: 132
Member
 

Given that NGS NOS 92 calls for 5-minute observations when using NRTK for secondary and local control, I'd say that Oregon DOT's standards are apt. I generally instruct crews to collect 3-5 minute redundant observations for projects I run.

If anything, just run a RTK-LOGGING survey style. If the crew ends up on a point longer than you think is necessary for RTK, just postprocess baselines and disable the RTK vectors.

 
Posted : December 18, 2023 7:28 am
(@rover83)
Posts: 2346
Member
 

In fact, too long an RTK observation can degrade if satellites get into worse geometry or you lose them from obstructions.

If that were true, static observations would be "worse" than RTK observations because they are "observed for too long". All GNSS observations bounce around as geometry changes. That's the point of setting those time windows long enough to capture that movement, as well as returning under a different constellation. We have no idea when the end users of this control will be observing, after the values are published, and so we need good network accuracy.


I don’t like their 5 minute observation time. It’s not static data,

Indeed, 5 minutes is not a traditional static observation. The entire point of the study was to investigate what it takes for NRTK observations to be comparable with traditional static observations so as to improve efficiency when running control networks. Several other studies (check out the Allahyari and Weaver papers referenced in the appendix) have found that the ideal window is somewhere between 3 and 5 minutes, especially for vertical. Not to mention that not all classes require 5 minutes, just the higher-order ones.

 
Posted : December 18, 2023 8:03 am
(@olemanriver)
Posts: 2454
Member
 

Yep, you nailed it again. Bare minimum 180 epochs / 3 minutes for my control and boundary corners, period - anything less, go back and do it again. I have tested the NGS guidelines for base and rover, and for VRS/NRTK. It freaking works; it falls within what they state most of the time.

I did that static job you helped me with not long ago. I followed the NGS guidelines for GPS-derived heights to a T, even when pressure from the top disagreed. Well, it worked: a minimum of two 4-hour observations, held one 1st-order BM at first, checked to all the DiNi-run levels and to a 2nd 1st-order BM. Yep, we hit the requirements like a champ. Found 1 CORS station with a huge vertical issue and reported it; as I guessed, the antenna had been changed and not updated. What's great about it: the client said they want y'all on the contract now. No one else had met the requirements after they combed through everyone's reports.

The more darts (or epochs) you throw at the dart board (or point), the more data we have to eliminate outliers and arrive at a more confident solution.
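
In code terms, that's essentially iterative outlier clipping before the final mean. A generic sigma-clip sketch below - one common approach, not any vendor's algorithm:

```python
import numpy as np

# Generic iterative 3-sigma clip over the epochs before the final mean --
# one common approach, not any vendor's algorithm.
def clipped_mean(x, k=3.0, iters=3):
    x = np.asarray(x, dtype=float)
    for _ in range(iters):
        m, s = x.mean(), x.std()
        keep = np.abs(x - m) <= k * s
        if keep.all():
            break
        x = x[keep]
    return x.mean(), len(x)

rng = np.random.default_rng(5)
epochs = np.concatenate([rng.normal(0.0, 0.01, 180), [0.15, -0.12]])  # 2 outliers
mean, kept = clipped_mean(epochs)
print(f"clipped mean {mean:.4f} from {kept} of {len(epochs)} epochs")
```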

 
Posted : December 18, 2023 9:55 am
(@olemanriver)
Posts: 2454
Member
 

I like the way you think. In the big picture, 5 minutes is nothing. It takes most people longer than that to set a total station up over the point, not to mention you might have to set up several times to even get to that point from where you started. In that 5 minutes one can do a lot if they don't want to look at the DC: I will flag up, check for other evidence, make a sketch. At one time, light a cigarette - don't do that anymore. Now, go find a tree and water it. Lol.

 
Posted : December 18, 2023 10:16 am
(@picho)
Posts: 2
Member
 

And what, then, is the difference between accuracy and precision? I think they are the same thing.

 
Posted : December 23, 2023 11:25 am
