FieldGenius .raw to Star*Net .dat convertor and multi-sets.

(@crashbox)
Posts: 542
Honorable Member Registered
Topic starter
 

I've run both MicroSurvey's FieldGenius and Star*Net for a while now, and while I certainly like both programs, the supplied convertor (at least the last time I ran it!) only exports a single averaged or weighted-average 'DV' and 'M' record for Star*Net, no matter how many sets you wrap. This has been an irritation to me for some time, as Star*Net cannot make use of the repeat measurements done in the field. Last year, however, I wrote my own convertor in Python which produces DV and M records similar to what MicroSurvey's Carlson .rw5 convertor does. That is, it averages each individual direct/reverse set and stores it accordingly in the .dat file.
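The averaging step itself is nothing exotic. Here's a minimal sketch of the idea in Python (the dictionary fields and decimal-degree units are simplified placeholders, not the actual FieldGenius .raw layout):

# Minimal sketch of averaging one direct/reverse (Face 1 / Face 2) pair.
# The dictionary fields and decimal-degree units are placeholders, not the
# actual FieldGenius .raw layout.

def mean_face_pair(f1, f2):
    """Average a Face 1 / Face 2 observation pair to the same target.

    f1, f2: dicts with 'hz' and 'zn' in decimal degrees and 'sd' (slope distance).
    Returns the set mean reduced to Face 1.
    """
    # Reduce the Face 2 horizontal circle reading by 180 degrees.
    hz2 = (f2["hz"] - 180.0) % 360.0
    # If the two readings straddle 0/360, shift one before averaging.
    if abs(hz2 - f1["hz"]) > 180.0:
        hz2 += 360.0 if hz2 < f1["hz"] else -360.0
    hz_mean = ((f1["hz"] + hz2) / 2.0) % 360.0

    # Face 2 zenith is the explement of Face 1 (z1 + z2 is close to 360).
    zn_mean = (f1["zn"] + 360.0 - f2["zn"]) / 2.0

    sd_mean = (f1["sd"] + f2["sd"]) / 2.0
    return {"hz": hz_mean, "zn": zn_mean, "sd": sd_mean}

# Example pair to one foresight:
face1 = {"hz": 45.00010, "zn": 91.00020, "sd": 152.431}
face2 = {"hz": 225.00030, "zn": 268.99990, "sd": 152.433}
print(mean_face_pair(face1, face2))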

However, one remaining issue was setups where multiple foresights are observed. FieldGenius copies the DV records to each successive foresight in the .raw file, which results in duplicate records in the .dat file and would obviously produce incorrect results in Star*Net. So I finally wrote a second program which comments out the duplicate DV measurements in the .dat file when matching DV records exist and the compared date/time stamps are less than five minutes apart.
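For the curious, that duplicate screen boils down to something like the sketch below (how the point ID and time stamp are pulled off each DV line is simplified here, not the actual record parsing):

# Sketch of the duplicate-DV screen: comment out a DV line when an earlier
# DV to the same point was kept less than five minutes before it.
from datetime import timedelta

WINDOW = timedelta(minutes=5)

def screen_duplicates(records):
    """records: list of (dat_line, point_id, timestamp) tuples in file order.
    Returns the lines with later duplicates commented out."""
    last_kept = {}              # point_id -> timestamp of last kept DV
    out = []
    for line, point, stamp in records:
        prev = last_kept.get(point)
        if prev is not None and stamp - prev < WINDOW:
            out.append("# " + line)     # duplicate within the window
        else:
            out.append(line)
            last_kept[point] = stamp
    return out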

Anyway, the two programs need some refinement at the user-interface level and I'm sure they could use more error trapping, but they are functional as they stand, assuming the observation order is (Face 1 backsight / Face 1 foresight / Face 2 foresight / Face 2 backsight). I'm pondering publishing the two programs on GitHub if there's any interest. I can see where this may benefit others, and as far as I'm concerned, it does no good if I'm the only one benefiting from it.
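If it helps anyone adapting them, that order assumption amounts to a simple check like this ('role' and 'face' are hypothetical tags, not FieldGenius field names; substitute whatever your parser hangs on each observation):

# Sanity check for the assumed set order:
# Face 1 backsight, Face 1 foresight, Face 2 foresight, Face 2 backsight.
EXPECTED_ORDER = [("BS", 1), ("FS", 1), ("FS", 2), ("BS", 2)]

def set_in_expected_order(obs_set):
    """obs_set: list of four observations carrying 'role' and 'face' keys."""
    return [(o["role"], o["face"]) for o in obs_set] == EXPECTED_ORDER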

I've attached a PDF of a sample .dat file generated from the two programs in case anyone is curious.

 
Posted : 28/07/2021 2:29 pm
(@john-hamilton)
Posts: 3347
Famed Member Registered
 

I believe that adjustments should be done on the mean of the sets, using (possibly) the standard deviation of the sets as the weight, or alternatively the manufacturer's SD for one observation divided by sqrt(n).
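In rough Python terms, those two weighting options look something like this (a sketch only; the 1" one-shot spec is just a placeholder value):

# Two ways to assign a standard error to the mean of n repeated pointings:
# (a) the sample standard deviation of the set itself, or
# (b) the manufacturer's one-shot SD divided by sqrt(n).
import math
from statistics import mean, stdev

def set_mean_and_sd_options(pointings_sec, one_shot_sd_sec=1.0):
    """pointings_sec: repeated angle values in arc seconds (about any common origin).
    one_shot_sd_sec: placeholder manufacturer spec for a single observation."""
    n = len(pointings_sec)
    set_mean = mean(pointings_sec)
    sd_from_set = stdev(pointings_sec)              # option (a)
    sd_from_spec = one_shot_sd_sec / math.sqrt(n)   # option (b)
    return set_mean, sd_from_set, sd_from_spec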

Why do you think it is better to use every individual observation?

One interesting phenomenon I have noted with Star*Net in adjustments of deformation networks (lots of observations, lots of redundancy) is that the error factor for angles and zenith distances is usually close to 1, but the error factor for EDM is usually around 0.6 when using a weight of 1 mm + 1 ppm (manufacturer's spec). The end client has tried to tell me that I should use smaller weights, but I believe that the limiting factor in EDM is the measurement of temperature (1°C ≈ 1 ppm), and dropping the 1 ppm would imply that I can measure the temperature better than that, which is not the case unless I am being super careful.
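As a back-of-the-envelope check on that point (using the usual 1 ppm per °C rule of thumb, not an instrument-specific constant):

# Rule-of-thumb effect of an atmospheric temperature error on an EDM distance:
# roughly 1 ppm per degree C. Not an instrument-specific constant.
PPM_PER_DEG_C = 1.0

def edm_error_mm(distance_m, temp_error_deg_c):
    """Approximate distance error in millimetres from a temperature error."""
    ppm = PPM_PER_DEG_C * temp_error_deg_c
    return distance_m * ppm * 1e-3      # 1 ppm of 1 m is 0.001 mm

# A 1 degree C error on a 1000 m line is about 1 mm, so dropping the
# 1 ppm term amounts to claiming better temperature measurement than that.
print(edm_error_mm(1000.0, 1.0))        # -> 1.0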

 
Posted : 29/07/2021 4:32 am
(@rover83)
Posts: 2346
Noble Member Registered
 
Posted by: @john-hamilton

the error factor for EDM is usually around 0.6 when using a weight of 1 mm + 1 ppm (manufacturer's spec)

I use TBC for about 80% of my adjustments (Starnet for the rest), and that is about what I see as well, somewhere in the 0.6-0.8 range, for networks with high redundancy.

We upgraded all our total stations not too long ago and apparently got a batch of overachievers. The S7s are nominally 1" instruments, but consistently outperform that spec when running adjustments. It's not unusual for us to see an error factor of 0.5 for the horizontal angles. Haven't really seen that happen before.

I too would be cautious about treating the individual observations in a set as independent measurements during adjustment.

 
Posted : 29/07/2021 5:31 am
(@crashbox)
Posts: 542
Honorable Member Registered
Topic starter
 

Back in the days when we booked observations, my PC taught me to take the average of each individual D/R set and throw out the high and low ones; we did a minimum of 4D/4R and I still do for most control today. The MicroSurvey Carlson .rw5 convertor essentially replicates this aside from deleting the high/low ones, and the one I wrote does this for FG .raw files.
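That routine is essentially a trimmed mean of the per-set values; a rough sketch, assuming at least four set means so something survives the trim:

# Sketch of the old booking routine: average each D/R set first, then throw
# out the highest and lowest set means and average what is left.
def trimmed_set_mean(set_means):
    """set_means: per-set averaged values (e.g. seconds about a common value)."""
    if len(set_means) < 4:
        raise ValueError("need at least four set means to trim high and low")
    kept = sorted(set_means)[1:-1]      # drop the low and the high
    return sum(kept) / len(kept)

# Four sets booked 4D/4R style:
print(trimmed_set_mean([12.0, 14.5, 13.2, 13.0]))   # averages 13.0 and 13.2 -> 13.1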

Of course, we used the compass rule back then for traverse adjustment.

I should add that since Star*Net now has the "Normalize" function enabling reverse-angle "M" records, there may be better ways of converting and supplying the data than a D/R average (they came out with this function when I was about two-thirds done with mine... sigh).

In sum, the convertor basically reproduces how I used to book them. In fact, I booked my observations for private/hobby work until I got a decent DC not all that long ago.

 
Posted : 29/07/2021 9:04 am
(@james-w-johnston)
Posts: 17
Active Member Registered
 

Hey, there is a "Python Slinging for Surveyors" webinar tomorrow, free for CLSA members and $50 for the public:

https://www.californiasurveyors.org/events.aspx

Regarding FieldGenius to STAR*NET conversion options, I'll suggest some alternatives:

Simple Geo Spatial Solutions has an online one that the owner has customized to suit his needs:

https://sgss.ca/convert/to_starnet.html

And if you have MicroSurvey CAD it is possible to import the raw file into a traverse and then export to Starnet. F1/F2 measurements WILL be included in that output.

I've been a Python hobbyist for years; message me or contact MicroSurvey support if you'd like me to confer. You've taken the first step, which is to pick a project.

 
Posted : 05/08/2021 3:48 pm