
CALIBRATION FOR DUMMIES

(@shawn-billings)
Posts: 2689
Registered
 

I like being able to set the scale factor, forcing it to the combined factor for a site and removing observational bias. If the observed difference is used, errors in both the target system coordinates and the source system coordinates will affect the quality of the results. By forcing the scale to the geodesy, those observational errors (as they relate to scale) are removed.

If a calibration is "always [to] be done using static data," then it's useless for transforming coordinates from a precise but arbitrary coordinate system, such as geo-referencing a total station survey (for example, determining the relationship of a survey provided in 5000, 5000 to State Plane).
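As a rough sketch of how a combined factor is usually built (the radius and example numbers below are generic illustrations, not values from this thread): the combined factor is the projection's grid scale factor times an elevation (sea-level) factor of roughly R / (R + h) for ellipsoid height h.

    # Sketch: combined factor for a site (illustrative values only).
    MEAN_EARTH_RADIUS_M = 6_371_000.0  # generic mean radius; a project would use a better value

    def combined_factor(grid_scale_factor: float, ellipsoid_height_m: float) -> float:
        """Combined factor used to force a calibration's scale to the geodesy."""
        elevation_factor = MEAN_EARTH_RADIUS_M / (MEAN_EARTH_RADIUS_M + ellipsoid_height_m)
        return grid_scale_factor * elevation_factor

    # Example: k = 0.99996 on the projection, site at 250 m ellipsoid height
    print(combined_factor(0.99996, 250.0))   # ~0.99992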

 
Posted : April 29, 2016 4:27 am
(@mightymoe)
Posts: 9920
Registered
 

Shawn Billings, post: 369964, member: 6521 wrote: I like being able to set the scale factor, forcing it to the combined factor for a site and removing observational bias. If the observed difference is used, errors in both the target system coordinates and the source system coordinates will affect the quality of the results. By forcing the scale to the geodesy, those observational errors (as they relate to scale) are removed.

If a calibration is "always [to] be done using static data," then it's useless for transforming coordinates from a precise but arbitrary coordinate system, such as geo-referencing a total station survey (for example, determining the relationship of a survey provided in 5000, 5000 to State Plane).

Shawn, I have to respectfully disagree. A calibration is a rotation and translation, comparing the results of a geodetic survey to a plane survey.

There is an arbitrary coordinate system that you have been given or that is "fixed" in place; there is no metadata for it, and you have to "get on" it. The best way is to occupy the control and run static sessions. This will allow you to determine your LLH to a level of confidence not available in RTK sessions. Then you apply the calibration using those LLH and the given NEZ, and you will have the "best" calibrated file you can get using what is probably not all that accurate a control network. You can also compare actual distances and angles between the two "systems".
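A rough sketch of that idea (not any vendor's actual algorithm; the names and structure are made up for illustration): treat the horizontal part of the calibration as a best-fit rotation and translation, with the scale either solved from the points or forced, between coordinates derived from the geodetic survey and the given plane coordinates.

    import numpy as np

    def fit_rotation_translation(src: np.ndarray, dst: np.ndarray, solve_scale: bool = False):
        """Best-fit 2D transform dst ~ s*R(theta)*src + t.

        src: Nx2 coords derived from the geodetic (GNSS) survey
        dst: Nx2 given plane (NE) coords
        Returns (scale, theta_radians, translation).
        """
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)

        # Solve for a = s*cos(theta), b = s*sin(theta) by least squares
        a_num = np.sum(src_c[:, 0] * dst_c[:, 0] + src_c[:, 1] * dst_c[:, 1])
        b_num = np.sum(src_c[:, 0] * dst_c[:, 1] - src_c[:, 1] * dst_c[:, 0])
        denom = np.sum(src_c ** 2)
        a, b = a_num / denom, b_num / denom

        scale = np.hypot(a, b)
        theta = np.arctan2(b, a)
        if not solve_scale:        # force the scale to 1 (or substitute a combined factor)
            a, b = a / scale, b / scale
            scale = 1.0
        R = np.array([[a, -b], [b, a]])
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return scale, theta, t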

A calibration can be done without even having field data. For example, a vertical calibration can be done using record data and the geoid model; you don't have to leave the office to calibrate a site that way.
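A minimal sketch of that office-only idea (the geoid_separation function is a hypothetical stand-in for whatever geoid model is loaded, not a real API): the record orthometric elevation plus the geoid separation gives the ellipsoid height the vertical calibration needs, with no field observations.

    def geoid_separation(lat: float, lon: float) -> float:
        """Hypothetical lookup of geoid undulation N at a point (meters)."""
        raise NotImplementedError("replace with the project's geoid model")

    def ellipsoid_height_from_record(record_elev: float, lat: float, lon: float) -> float:
        """h = H + N: record (orthometric) elevation plus geoid separation."""
        return record_elev + geoid_separation(lat, lon)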

We were actually taught in seminars to calibrate a static survey file for field use. This, of course, was back in the day when good verticals were not possible using GPS and you had to fudge them.

The main reason to use static data to calibrate is that it will be the best calibration possible, and it only needs to be done one time.
I'm not talking about sending it to OPUS. I'm saying you put out two, three, four, five receivers, whatever you can, run the static, reduce it right there in the field, and do the calibration. It doesn't take long; presumably the points are close, so 10-minute sessions are plenty.

 
Posted : April 29, 2016 5:04 am
(@shawn-billings)
Posts: 2689
Registered
 

In some ways, I suppose it depends on perspective. If you are attempting to determine a transformation from assumed>geodetic, it's probably a good idea to use the observed scale factor. This would adjust any linear error from the conventional survey to fit the (likely) more accurate GNSS positions. If however you are attempting to transform from geodetic>assumed then I think it's probably best to use the geodetic factor. This would be for those cases in which a surveyor wants to work in the 5k,5k system he started a few years ago on a project and cannot afford to transform to SPC (et al) now. In this case, if the user allows the observed scale factor to be used, then all of his (likely) more accurate GNSS positions will be degraded by the inaccuracy of the total station derived positions. In other words, a foot will no longer be a foot. It will be a Smith job foot.
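As a rough numeric illustration of that last point (the 30 ppm figure is invented for the example, not taken from the thread): if the observed scale factor soaks up a small total-station bias, every GNSS-derived distance inherits it.

    # Illustration with made-up numbers: an observed scale factor that absorbed
    # a 30 ppm total-station bias, applied to a long GNSS-derived distance.
    observed_scale = 1.000030          # hypothetical scale from the localization fit
    true_distance_ft = 10_000.0        # GNSS-derived distance in the assumed system
    distorted = true_distance_ft * observed_scale
    print(distorted - true_distance_ft)   # 0.3 ft of artificial stretch ("Smith job feet")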

 
Posted : April 29, 2016 6:31 am
(@mightymoe)
Posts: 9920
Registered
 

Shawn Billings, post: 369983, member: 6521 wrote: In some ways, I suppose it depends on perspective. If you are attempting to determine a transformation from assumed>geodetic, it's probably a good idea to use the observed scale factor. This would adjust any linear error from the conventional survey to fit the (likely) more accurate GNSS positions. If however you are attempting to transform from geodetic>assumed then I think it's probably best to use the geodetic factor. This would be for those cases in which a surveyor wants to work in the 5k,5k system he started a few years ago on a project and cannot afford to transform to SPC (et al) now. In this case, if the user allows the observed scale factor to be used, then all of his (likely) more accurate GNSS positions will be degraded by the inaccuracy of the total station derived positions. In other words, a foot will no longer be a foot. It will be a Smith job foot.

Maybe we are discussing two different things. In Trimble, which is what the poster uses, it is totally irrelevant when doing a calibration whether the data is static or RTK, but it's obvious that static will be more precise.
The input data asks for the observed geodetic coordinates and the "fixed" XYZ coordinates. The program then performs the calibration between the two systems and shows you the error it observes; there is no scale or anything like that to input.

Many were trained to go to the field, set on a point, locate a couple of points with RTK, hit a couple of buttons, and after some blinking lights and a few beeps, bingo, the site was "calibrated". You can look at the "errors" and feel warm and fuzzy that you are good to go. What could possibly go wrong with that? :-S

 
Posted : April 29, 2016 6:40 am
(@dan-patterson)
Posts: 1272
Registered
 

Do a lot of you guys calibrate to only 2 points? I usually use at least 4. If I don't have that many, then I won't really calibrate. I will find another way to establish control....

 
Posted : April 29, 2016 6:57 am
(@mightymoe)
Posts: 9920
Registered
 

Dan Patterson, post: 369992, member: 1179 wrote: Do a lot of you guys calibrate to only 2 points? I usually use at least 4. If I don't have that many, then I won't really calibrate. I will find another way to establish control....

I would think that 4 is a minimum. If you aren't worried at all about elevations and are only rotating onto existing control, you can use 2, but it will cause all your data to carry as much error as those two points have between them. Seems dangerous to me, and of course without a geoid model it is a disaster for elevation control.

 
Posted : April 29, 2016 7:06 am
(@shawn-billings)
Posts: 2689
Registered
 

I don't have any experience with Trimble software. I'm just discussing the theory of it all. Scale factors based on observables can mask problems and cause issues with extrapolation, as the OP describes (which is why he doesn't have residuals with two points). This is because the observed (GNSS) distance is being scaled (perfectly) to match the "known" distance. Unless several points are tied in, this can give a bad result. It could still give a bad result if there is a systematic error in the control coordinates (incorrect ppm, incorrect prism constant, etc.). These systematic errors can be masked in a transformation based on observed scale factors. Then the inaccurate scale factor is applied to long GNSS vectors, making the error even worse.
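A small sketch of that masking effect (all numbers invented): control distances carrying a ppm-style bias are matched almost perfectly by a fitted scale, so the residuals look great even though every absolute distance is wrong.

    import numpy as np

    true_dist = np.array([200.0, 450.0, 800.0, 1200.0])   # GNSS-derived, taken as correct
    control   = true_dist * (1 + 25e-6)                    # record distances with a 25 ppm bias

    fitted_scale = np.sum(control * true_dist) / np.sum(true_dist ** 2)  # least-squares scale
    residuals = control - fitted_scale * true_dist
    print(fitted_scale)        # ~1.000025 -- the bias hides in the scale
    print(residuals)           # ~zeros -- nothing flags the problem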

Forcing to 1 is probably the right answer if the calibration is always built on a new projection surface at the ellipsoid height for the project, which is probably what it is doing. Some software, however, performs the localization on a particular known projection, such as State Plane; there, using 1 would be a poor solution, particularly in areas with a CF that is far from 1. The calculations in Carlson and Spectra Precision (TDS), for instance, are based on projection grid coordinates instead of LLH, so the localization (or calibration) will always have a scale factor that is due in part to the difference in the projection scale factors. This can be accounted for by using the CF as the scale factor when scaling from SPC to a ground-based coordinate system.
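A minimal sketch of that last step (values invented for illustration): when the localization works on grid coordinates, dividing by the combined factor is what takes an SPC grid distance back to ground.

    # Invented numbers: State Plane grid distance back to ground distance.
    combined_factor = 0.999650        # hypothetical CF for the project area
    grid_distance_ft = 2_640.00       # SPC grid distance
    ground_distance_ft = grid_distance_ft / combined_factor
    print(ground_distance_ft)         # ~2640.92 ft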

 
Posted : April 29, 2016 7:19 am
 jaro
(@jaro)
Posts: 1721
Registered
 

An issue that I see with calibrating to points based on SPC in Texas with a Trimble data collector is that the data collector uses a Mercator projection when the SPC is based on Lambert. If you calibrate to 4 points creating more or less a box, then the middle will be off if it's a very large box. There are ways of forcing a Lambert projection, but I would have to go through it again to remember how I did it.

James

 
Posted : April 29, 2016 7:34 am
(@mightymoe)
Posts: 9920
Registered
 

We are discussing two different things.

It's been a while since I've calibrated any file, probably 5 years. But it has always been a calculation between actual geodetic numbers and plane coordinates, for Trimble anyway.

Of course a scale factor is generated, but it isn't user-defined; if you defined it yourself, it might really mess up your file.

 
Posted : April 29, 2016 7:42 am