OK
Some of the stuff shown in the TSC2 file I haven't been able to get out of TBC, but that doesn't mean it's not there; I might just not have found it yet. I sure hope I don't have to retype all that stuff.
> OK
>
> Some of the stuff shown in the TSC2 file I haven't been able to get out of TBC, but that doesn't mean it's not there; I might just not have found it yet. I sure hope I don't have to retype all that stuff.
TBC should have the option of exporting the vectors in TGO format. That would work also. Some TBC user may be able to walk you through the process.
> TBC should have the option of exporting the vectors in TGO format. That would work also. Some TBC user may be able to walk you through the process.
In TBC: Select the desired vectors, click File|Export from the menu ribbon, in the Export pane click the Survey tab, select "Trimble Data Exchange Format (TDEF) exporter," specify the file name in the File Name box, then (finally!) click the Export button at the bottom. That'll produce a file that Star*Net can read.
Exporting vectors from TBC
Jim beat me to the punch. But I got pictures, Jim! :-P
In TBC V2.9:
Click on the home tab;
Then click on the Export button (not the little drop-down arrow);
This will open the export dialog on the right of the window (default position?);
From the export dialog, select "Trimble Data Exchange Format (TDEF) exporter";
Click on the options button;
Select "Select All"; this will select all the vectors in the project. Alternatively, you can select only the vectors you wish to export by picking them manually.
Then click on the Windows Explorer button to choose the destination folder and file name;
Then export your vectors;
Here is what you get. I had to put in some carriage returns; the site said the lines were too long.
A couple of RTK vectors; the base is pt 99. I haven't tried to mess with any settings, if there are settings for this export.
[General]
Source=Trimble Business Center
ProjName=Not set yet!
ProjCoordinateSystem=Default
ProjCoordinateZone=Default
ProjGeoidModel=GEOID09 (Conus)
GPSVectors=MarkToMark
CoordinateUnits=meters
ElevationHeightUnits=meters
DistanceUnits=meters
AngularUnits=degrees
AntennaHeights=Raw
PressureUnits=?
TemperatureUnits=?
MissingValue=?
Separator=:
[Stations]
Station=2:?:200:39.323475477N:111.275854151W:2129.3009:4652.7257
:3045.9178:2145.4588:0:0:0:SC USFS SC:?
Station=2:?:99:39.309027622N:111.273346458W:2131.0530:3048.1741
:3262.2734:2147.2431:1:1:0:UDOT:?
Station=2:?:297:39.316266499N:111.275841343W:2138.0151:3852.1082
:3047.0225:2154.1871:0:0:0:STONES:?
[Keyed In Coordinates]
LLCoord=1:?:99:39.309027622N:111.273346458W:2131.0530:?:C:C:U:E:W
LLCoord=1:?:99:39.309027622N:111.273346458W:2131.0530:?:S:C:U:E:W
[Observed Coordinates]
[GPS]
Vector=1:?:99:200:167.7853:1027.1692:1240.2786
:3.9810952300e-006:-1.2756475000e-007
:1.0023109100e-006:2.5353150700e-005
:-4.6295749600e-006:3.9167348400e-006:1.5540:2.0625
:RTK:?:0.011:?:29 07 2014:19 14 24.0:29 07 2014:19 14 39.0:E:RTK
:Topo:Fixed:Unknown
Vector=1:?:99:297:-17.7196:547.7054:626.4198:2.6191271400e-006
:-1.1237550000e-007:4.9562510000e-007:8.7500602600e-006
:-3.9639512600e-006:1.0423722410e-005:1.5560:2.0625
:RTK:?:0.003:?:19 07 2014:23 55 32.0:19 07 2014:23 55 42.0:E
:RTK:Topo:Fixed:Unknown
[Terrestrial]
[Laser]
[Level Run]
[Reduced Observations]
[Azimuths]
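If anyone wants to sanity-check a TDEF file before handing it to Star*Net, here's a rough Python sketch that pulls the vector components out of the [GPS] section. The field positions (from point, to point, then dX/dY/dZ) are inferred from the sample above, not from an official format spec, so treat it as a starting point only:

```python
# Rough TDEF [GPS] vector reader. Field positions are guessed from the
# sample records in this thread, not taken from a published spec.

def read_gps_vectors(text):
    """Return (from_pt, to_pt, dX, dY, dZ) tuples from the [GPS] section."""
    records = []
    in_gps = False
    record = ""
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("["):            # section header, e.g. [GPS]
            in_gps = (line == "[GPS]")
            continue
        if not in_gps or not line:
            continue
        if line.startswith("Vector="):      # start of a new vector record
            if record:
                records.append(record)
            record = line
        elif line.startswith(":"):          # wrapped continuation line
            record += line
    if record:
        records.append(record)

    parsed = []
    for rec in records:
        f = rec.split("=", 1)[1].split(":")
        # f[0]=type, f[1]='?', f[2]=from, f[3]=to, f[4:7]=dX, dY, dZ
        parsed.append((f[2], f[3], float(f[4]), float(f[5]), float(f[6])))
    return parsed
```

This also rejoins the continuation lines (the ones starting with ":") that the site's line-length limit forced into the listing above.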
> [GPS]
> Vector=1:?:99:200:167.7853:1027.1692:1240.2786
> :3.9810952300e-006:-1.2756475000e-007
> :1.0023109100e-006:2.5353150700e-005
> :-4.6295749600e-006:3.9167348400e-006:1.5540:2.0625
> :RTK:?:0.011:?:29 07 2014:19 14 24.0:29 07 2014:19 14 39.0:E:RTK
> :Topo:Fixed:Unknown
> Vector=1:?:99:297:-17.7196:547.7054:626.4198:2.6191271400e-006
> :-1.1237550000e-007:4.9562510000e-007:8.7500602600e-006
> :-3.9639512600e-006:1.0423722410e-005:1.5560:2.0625
> :RTK:?:0.003:?:19 07 2014:23 55 32.0:19 07 2014:23 55 42.0:E
> :RTK:Topo:Fixed:Unknown
That's the stuff that Star*Net will be looking for.
I have used Star*Net to adjust RTK baselines. It's not common, though, because we usually use static GPS or a total station for control. I used TDEF, as others have suggested, which Star*Net can convert to its GPS format.
Also, you can apply a scalar to GNSS baselines in TBC, just like in Star*Net.
> ...it is a mistake to say or imply that one can use least squares adjustment on non-redundant data. Details matter.
:good:
CONSTRUCTION OF THE MATHEMATICAL MODEL:
r = n - n0
c = r + u
WHERE:
n = number of measurements
n0 = minimum number of measurements for a unique solution
r = the number of extra or redundant measurements (degrees of freedom)
u = the number of additional unknown parameters to be solved for
c = the number of least squares condition equations
EXAMPLE:
The situation the debate revolves around: the measurement of a single line, once.
GIVEN:
n = 1
n0 = 1
u = 0
r = n - n0 = 1 - 1 = 0
c = r + u = 0 + 0 = 0
CONCLUSION:
Least squares adjustment is not possible.
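The bookkeeping above is simple enough to put in a few lines of code; this sketch just restates the definitions, with variable names following the post:

```python
def redundancy(n, n0, u=0):
    """Degrees of freedom r and condition equations c, per the model above.

    n  = number of measurements
    n0 = minimum number of measurements for a unique solution
    u  = additional unknown parameters to be solved for
    """
    r = n - n0          # redundant measurements (degrees of freedom)
    c = r + u           # number of least squares condition equations
    return r, c

# The debated case: a single line measured once.
r, c = redundancy(1, 1)   # r = 0, c = 0: no redundancy, nothing to adjust
```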
> CONCLUSION:
> Least squares adjustment is not possible.
Actually, the least squares solution to a minimally constrained network consists of the original input observations. The residuals are all zero and meet the least squares criterion to perfection. That is obviously a trivial case, but it is an adjustment within the least squares model.
As John Hamilton observed, the point that both he and I were making stands. The major least squares adjustment programs consist of at least two functional parts:
(a) the actual adjustment of observations using weights assigned to them and
(b) the estimation of uncertainties propagated through the adjusted network.
Even in a trivial adjustment with df = 0, the estimation of uncertainties propagated through the adjusted network is a significant and valuable result, as is taking care of the numerous computational drudgeries that arise when surveys are computed on projection grids.
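A toy illustration of that point (my own sketch, not anyone's adjustment software): for a distance measured n times with the same a priori standard error, the least squares estimate is simply the mean. With n = 1 the "adjustment" returns the observation unchanged, the residual is zero, and the propagated uncertainty is still reported.

```python
import math

def ls_distance(obs, sigma):
    """Least squares estimate of a distance from repeated measurements,
    all carrying the same a priori standard error sigma."""
    n = len(obs)
    est = sum(obs) / n                    # LS estimate = mean of equal-weight obs
    residuals = [o - est for o in obs]    # all exactly zero when n == 1
    sigma_est = sigma / math.sqrt(n)      # propagated std. error of the estimate
    return est, residuals, sigma_est

# Trivial case: one measurement. The estimate is the observation itself,
# the residual is zero, yet a propagated uncertainty is still produced.
est, res, se = ls_distance([1240.279], 0.010)
```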
Let me revise that.
> CONCLUSION:
> Least squares adjustment is not possible needed.
"...As an example, consider the determination of a distance between two points. The distance may be considered a random variable and, if measured once, would have one estimate, so that no adjustment is needed."
Part 1, Concepts, 2.22 Least Squares Adjustment
Surveying Theory and Practice, Seventh Edition - Anderson & Mikhail
> Let me revise that.
>
> > CONCLUSION:
> > Least squares adjustment is not possible needed.
>
>
> "...As an example, consider the determination of a distance between two points. The distance may be considered a random variable and, if measured once, would have one estimate, so that no adjustment is needed."
>
> Part 1, Concepts, 2.22 Least Squares Adjustment
> Surveying Theory and Practice, Seventh Edition - Anderson & Mikhail
Yes, the single measurement is in fact the estimate that satisfies the least squares criterion. That result is, of course, the object of a least squares adjustment. In this case, though, it is a trivial adjustment with residuals all zero. As with all trivial cases, nomenclature can be problematic, but the result is the least squares estimate.
In your example you are relying entirely on the stochastic model (error estimates) to yield an estimated error analysis (not an actual error analysis) on a line that has been measured once.
If you were pressed to show defensible evidence of your accuracy claims, what would you supply?
Do you feel that your solution would stand unchallenged without actual statistical properties derived from redundant measurements of the line?
One could easily manipulate the stochastic model to make the adjustment (invalid adjustment, in my opinion) yield any accuracy required or desired.
I believe your example oversimplifies the use of LSA software.
If I were pressed to defend my solution I would want to have:
- a detailed description of how I developed the stochastic model used for my estimated error analysis (which you expertly describe on this forum regularly)
- redundant measurements on the solution in question
> In your example you are relying entirely on the stochastic model (error estimates) to yield an estimated error analysis (not an actual error analysis) on a line that has been measured once.
>
> If you were pressed to show defensible evidence of your accuracy claims, what would you supply?
Fairly simple. You'd either describe test results and a long record of experience with the equipment (in the case of conventional measurements) or a consistent record of experience and a statement as to why that is reasonably expected to be applicable (in the case of GPS).
If one is running both GPS and conventional survey measurements through combined adjustments with any regularity, he or she will have an extensively documented record of experience.
> Do you feel that your solution would stand unchallenged without actual statistical properties derived from redundant measurements of the line?
Unchallenged by whom? A reasonable person, or someone who really doesn't care what the correct answer is? John Hamilton touched on this point in his post. If you have an extensive record of experience with particular equipment and methods, then the burden is to show why that record is not also a good predictor of results in instances without redundant observations.
> One could easily manipulate the stochastic model to make the adjustment (invalid adjustment, in my opinion) yield any accuracy required or desired.
That would not be consistent with extensive experience. Perhaps if you were to give an example, I might find some glimmer of reality in that view.
> ... or someone who really doesn't care what the correct answer is...
Yes, I was thinking of an attorney!
> > One could easily manipulate the stochastic model to make the adjustment (invalid adjustment, in my opinion) yield any accuracy required or desired.
>
> That would not be consistent with extensive experience. Perhaps if you were to give an example, I might find some glimmer of reality in that view.
So you are saying, in the case of a single measurement (no redundancy), that one could not impact the magnitude of the error ellipses of positioned points by manipulating the stochastic model?
> So you are saying, in the case of a single measurement (no redundancy), that one could not impact the magnitude of the error ellipses of positioned points by manipulating the stochastic model?
No, what I said was that a surveyor who used the methods that I and other posters use would have a means of justifying the realism of either (a) the standard errors of some conventional measurement or (b) the optimism factor applied to GPS processor estimates of uncertainties.
If a surveyor doesn't adjust RTK vectors and has no quantifiable record of experience, then, yeah, it's all purely arbitrary as far as he or she would be concerned, I'm sure.
> > So you are saying, in the case of a single measurement (no redundancy), that one could not impact the magnitude of the error ellipses of positioned points by manipulating the stochastic model?
>
> No, what I said was that a surveyor who used the methods that I and other posters use would have a means of justifying the realism of either (a) the standard errors of some conventional measurement or (b) the optimism factor applied to GPS processor estimates of uncertainties.
>
> If a surveyor doesn't adjust RTK vectors and has no quantifiable record of experience, then, yeah, it's all purely arbitrary as far as he or she would be concerned, I'm sure.
Will manipulating the stochastic model impact the magnitude of the error ellipses?
> Will manipulating the stochastic model impact the magnitude of the error ellipses?
You mean, more exactly: "Will using unrealistic values of the standard errors of measurements and observations produce different apparent uncertainties?" Sure, of course it will. This is why using realistic values determined by experience and testing is important, and why using optimistic RTK processor estimates is a bad idea.
It's not like it's some incredibly difficult exercise to determine what the standard error of a direction taken with a total station should reasonably be expected to be. I've posted several examples of the test procedures on probably more than one occasion.
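A toy numerical illustration of why this matters (my own example, not from anyone's software): scaling every a priori standard error by a factor k leaves the adjusted value untouched but scales the propagated uncertainty by that same k, which is exactly how an unrealistic stochastic model manufactures unrealistically tight error ellipses.

```python
import math

def adjust(obs, sigmas):
    """Weighted least squares mean and its propagated standard error."""
    w = [1.0 / s**2 for s in sigmas]                      # weights = 1/sigma^2
    est = sum(wi * oi for wi, oi in zip(w, obs)) / sum(w)  # weighted mean
    se = math.sqrt(1.0 / sum(w))                          # propagated std. error
    return est, se

obs = [100.012, 100.018, 100.015]
est1, se1 = adjust(obs, [0.010] * 3)   # realistic a priori sigmas
est2, se2 = adjust(obs, [0.002] * 3)   # same data, sigmas optimistically cut 5x
# est1 == est2, but se2 == se1 / 5: the reported "accuracy" improved fivefold
# with no new measurements at all.
```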
> > Will manipulating the stochastic model impact the magnitude of the error ellipses?
>
> You mean, more exactly: "Will using unrealistic values of the standard errors of measurements and observations produce different apparent uncertainties?" Sure, of course it will. This is why using realistic values determined by experience and testing is important.
>
> It's not like it's some incredibly difficult exercise to determine what the standard error of a direction taken with a total station should be reasonably expected to be.
I think it is important that specifics aren't left out of discussions, especially in what is a very nuanced use of LSA software. Not all posters/readers here have a background education in LSA, or a 20 year history of performing LSAs.
> I think it is important that specifics aren't left out of discussions, especially in what is a very nuanced use of LSA software. Not all posters/readers here have a background education in LSA, or a 20 year history of performing LSAs.
Well, so far we haven't actually seen any posts from an unsophisticated user of least squares adjustments and error propagation software. Be sure to flag it if it ever shows up. :>