
Using Star*net (or other LSA program) for reducing solar observations

13 Posts
4 Users
0 Reactions
1 Views
 rfc
(@rfc)
Posts: 1901
Famed Member Registered
Topic starter
 

In the Sokkia document linked in the other recent thread about LHA vs. Altitude methods of deriving Azimuth, it discusses two methods of reducing the data. In the first, it looks like the AZ is computed for each of 6 pointings (3D, 3R); then, the same number of outliers are rejected from each of the D and R pointings, and the rest are "averaged".

The other method (which Charles Dowdell used, according to the sample field notes he sent) is described in the Sokkia document:


I understand how to use Star*net (or other LSA program) to come up with the most certain angle between two stations, given the fixed and variable uncertainties involved, but the addition of time to the party leaves me a bit confused. Calculating the AZ for each pointing, then taking many pointings (both D and R), then running LSA using the standard errors for the instrument seems to make most sense, but it doesn't seem that it would separate out the uncertainties related to the sightings, vs. those related to time; they'd all be mushed together every time you do the calculation.

So the question is:
If one intends to use "modern" LSA techniques, which method will result in the "best" adjustment?
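For context, the per-pointing reduction in the first method is the standard hour-angle azimuth formula. A minimal Python sketch of that one step (the latitude, declination, and LHA values below are placeholders, and small corrections like parallax are ignored):

```python
import math

def solar_azimuth(lat_deg, dec_deg, lha_deg):
    """Hour-angle method: astronomic azimuth of the sun, clockwise from
    north, given observer latitude, sun declination, and local hour angle."""
    lat, dec, lha = map(math.radians, (lat_deg, dec_deg, lha_deg))
    az = math.atan2(-math.sin(lha),
                    math.cos(lat) * math.tan(dec) - math.sin(lat) * math.cos(lha))
    return math.degrees(az) % 360.0

# sun on the meridian (LHA = 0) at a mid-latitude site bears due south
print(solar_azimuth(40.0, 20.0, 0.0))   # due south, 180 deg
# an hour-angle of -60 deg (morning sun) bears a little south of east
print(solar_azimuth(40.0, 20.0, -60.0))
```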

 
Posted : 04/02/2016 6:15 am
(@moe-shetty)
Posts: 1426
Noble Member Registered
 

ngs (dave lehman and others) taught us what they found is the best method; decide on leading edge or trailing edge and stick with it for that block of observations, bs target direct-fs sun direct, bs target indirect-fs sun indirect. they did not endorse mixing leading and trailing edges, nor did they endorse a single terrestrial back sight with multiple astro fore sights, claiming that method skews the weights of the observations

 
Posted : 04/02/2016 7:28 am
(@moe-shetty)
Posts: 1426
Noble Member Registered
 

let me dig up some papers they gave us. i will scan them and get them to you

 
Posted : 04/02/2016 7:29 am
(@weighted-mean)
Posts: 23
Eminent Member Registered
 

I applaud the thought and the potential application. As you've mentioned, during a solar bearing set, all the shots to the sun are to different points, so no particular way to mean them all together. Basically, the solar shot is its own black box and where star*net would come into play would be if you had multiple solar bearings at different points in your survey. Star*net would compare the bearings via their connecting measurements and give you a residual for each bearing, giving more weight to the better bearings in the overall adjustment.

 
Posted : 04/02/2016 7:32 am
(@moe-shetty)
Posts: 1426
Noble Member Registered
 

i would suggest making a series of astro measurements and reduce your azimuth observations from occupy station to back sight station first. you may then set up your pocket calculator in a statistics mode, convert each azimuth from dms to degrees decimal, enter each and check your standard deviations. this seems good, as you will be using a back bearing in star net and floating the observation appropriately.
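moe's pocket-calculator routine is easy to sketch in code. A minimal Python version of the DMS conversion and statistics step (the azimuth values are invented for illustration):

```python
from statistics import mean, stdev

def dms_to_deg(d, m, s):
    """Degrees-minutes-seconds to decimal degrees (positive azimuths)."""
    return d + m / 60 + s / 3600

# hypothetical reduced azimuths from four pointings (2 direct, 2 reverse)
pointings = [(123, 14, 52.0), (123, 14, 47.0), (123, 14, 55.0), (123, 14, 50.0)]
azimuths = [dms_to_deg(*p) for p in pointings]

az_mean = mean(azimuths)   # mean azimuth, decimal degrees
az_sd = stdev(azimuths)    # sample standard deviation, degrees
print(f"mean = {az_mean:.6f} deg, sd = {az_sd * 3600:.1f} arcsec")
```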

 
Posted : 04/02/2016 7:54 am
 rfc
(@rfc)
Posts: 1901
Famed Member Registered
Topic starter
 

Moe Shetty, post: 356530, member: 138 wrote: i would suggest making a series of astro measurements and reduce your azimuth observations from occupy station to back sight station first. you may then set up your pocket calculator in a statistics mode, convert each azimuth from dms to degrees decimal, enter each and check your standard deviations. this seems good, as you will be using a back bearing in star net and floating the observation appropriately.

If I understand correctly, this method would take every single observation, be it D or R....Record the AZL...Calculate the true AZ for that moment in time, and repeat numerous times.
That would produce a data set consisting of "Delta Angles", or the differences between measured and computed directions, that could then be meaned.
For example, let's say you observe an angle to the Sun at approximately one degree increments, as follows:
90, 91, 92, 93, 94 etc., and at the instant of those observations, the formulae say the sun should be at:
89.9, 90.9, 91.8, 93.1, 94 (Say some were under, some over and some the same angle). That would produce a data set of:

-.10, -.10, -.20, +.10, 0.0

I know I can find the mean and standard deviation of those (-.06?), but what would I add/subtract that correction to?
This might be using statistical analysis to come up with the right answer, but I'm confused as to how LSA would be involved.
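Just to put numbers on that example set, a quick sketch of the mean and spread of the five deltas (this only does the statistics, not the question of what to apply the correction to):

```python
from statistics import mean, stdev

# differences (computed - observed), in degrees, from the example set
deltas = [-0.10, -0.10, -0.20, 0.10, 0.0]

correction = mean(deltas)   # about -0.06 deg, matching the post
spread = stdev(deltas)      # sample standard deviation of the set, degrees
print(f"mean delta = {correction:+.3f} deg, sd = {spread:.3f} deg")
```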

 
Posted : 04/02/2016 9:24 am
(@moe-shetty)
Posts: 1426
Noble Member Registered
 

no, reduce to simpler terms. use a bearing command in your data file. when you have your astro msmt's reduced (hopefully 2 direct 2 indirect or more). drop them in your pocket calculator and compute the statistics. now you will likely want to use the mean azimuth, but you can float the azimuth x number of arc seconds based on the statistical quality of the data set

as far as star net is concerned, use one data line only
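For what it's worth, that single data line would be a Star*net bearing record. If memory serves, the inline format looks something like the made-up example below (station names, the bearing value, and the standard error are all invented; check the Star*net manual for the exact B record syntax):

```
# one meaned astronomic bearing, floated at 5 arc seconds
B  OCC-BS  N45-30-20.5E  5.0
```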

 
Posted : 04/02/2016 9:38 am
 rfc
(@rfc)
Posts: 1901
Famed Member Registered
Topic starter
 

Well, isn't it true that if you're going to mean Direct and Reverse pointings, you must mean the times too? Taken in its simplest form (1D, 1R), does it introduce any error at all to assume that the mean of those two readings was correct at the midway point in time between them, or is the error so small it can just be ignored?

 
Posted : 04/02/2016 11:29 am
(@weighted-mean)
Posts: 23
Eminent Member Registered
 

each of the times is unique to each shot. no averaging times.
let's tinkertoy this one apart:
each shot in the bearing set, however many direct and reverse, is its own bearing, calculated from the ephemeris and the unique angle from sun edge to backsight AT the occupied point. Can't average each direct and reverse together because they are different angles at different targets (same target different place & time, right?) Where the averaging comes in is in averaging the resulting bearings. EDIT: here is where we mean together the observations into a measurement. They are one measurement because it's one setup.

The subtle thing here is that some "measurements" are not really measurements until you have lots of observations to mean out.
It's part philosophical and part statistical. For example, an angle set shot with forced centering on tripods would be reduced to one angle then entered as one line. Why? A false redundancy otherwise. All those shots are to the same fixed sights and contain the same centering errors. To enter each observation as its own line would give the overall simultaneous equation (your network) too many degrees of freedom. Starnet would start wiggling the centering errors differently for each shot while we know that they were all fixed for that set of observations. An interesting experiment would be to calculate the bearing for each shot and let star*net mean them.

Same thing for the bearing set. We do enough observations to get a good-enough measurement. Singular. For redundancy on the same two points, break setup (raise or lower tripods, turn tribrachs, re-center, re-level, drink coffee, wait for the sun to move, etc.), do another set. When you're all done with six or eight more observations, you've got a second measurement and two lines in star*net.

An observation isn't always a complete measurement.

Hope this helps with your growing awareness of how "it depends"

 
Posted : 04/02/2016 12:41 pm
 rfc
(@rfc)
Posts: 1901
Famed Member Registered
Topic starter
 

weighted mean, post: 356593, member: 9599 wrote:
Hope this helps with your growing awareness of how "it depends"

Sure does. Thanks! Now I just need to automate it somewhat. I have Timestamp on my iPhone, which gives me UTC according to multiple NTP servers, and reports error correction if needed. I have the DUT by date...I can get those all into an Excel Spreadsheet. Now if I can only figure out some way to parse the text data out of MICA for declination and LHA and get it into Excel, I could reduce massively redundant observations in no time at all.
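The MICA parsing step could be scripted rather than pasted into Excel by hand. A minimal sketch, assuming a tabular text export with UT, declination, and GHA columns; the sample text and the column layout below are made up, so the pattern would need adjusting to the real MICA output:

```python
import re

# Made-up sample resembling a MICA tabular text export (UT, apparent
# declination, GHA).  The real column layout may differ.
mica_text = """\
2016 Feb 04 15:00:00.0   -16 12 34.5   45 10 20.1
2016 Feb 04 15:05:00.0   -16 12 30.2   46 25 18.7
"""

ROW = re.compile(
    r"(\d{4} \w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d)\s+"   # UT timestamp
    r"(-?\d+) (\d+) ([\d.]+)\s+"                      # declination d m s
    r"(\d+) (\d+) ([\d.]+)"                           # GHA d m s
)

def dms_to_deg(d, m, s):
    """Signed degrees-minutes-seconds to decimal degrees."""
    d, m, s = float(d), float(m), float(s)
    sign = -1.0 if d < 0 else 1.0
    return sign * (abs(d) + m / 60 + s / 3600)

records = []  # (timestamp, declination, GHA) rows, ready for a spreadsheet
for hit in ROW.finditer(mica_text):
    dec = dms_to_deg(hit.group(2), hit.group(3), hit.group(4))
    gha = dms_to_deg(hit.group(5), hit.group(6), hit.group(7))
    records.append((hit.group(1), dec, gha))
```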

 
Posted : 04/02/2016 1:19 pm
(@weighted-mean)
Posts: 23
Eminent Member Registered
 

ABBYY FineReader would scan the MICA output and give you Excel ...

 
Posted : 04/02/2016 1:31 pm
 rfc
(@rfc)
Posts: 1901
Famed Member Registered
Topic starter
 

weighted mean, post: 356524, member: 9599 wrote: I applaud the thought and the potential application. As you've mentioned, during a solar bearing set, all the shots to the sun are to different points, so no particular way to mean them all together. Basically, the solar shot is its own black box and where star*net would come into play would be if you had multiple solar bearings at different points in your survey. Star*net would compare the bearings via their connecting measurements and give you a residual for each bearing, giving more weight to the better bearings in the overall adjustment.

I'd think "multiple solar bearings at THE SAME point in the survey" would also be useful. The next challenge (I'm not there yet), is how to characterize the errors associated with solar bearings. I'll cross that bridge when I get to it, but think I have a good plan. Assuming I can get my Excel spreadsheet down, I could throw in a solar bearing from just about all of the stations in my control network (except for those in the woods), and add them to the mix.

 
Posted : 04/02/2016 2:07 pm
(@scott-zelenak)
Posts: 600
Noble Member Registered
 

See Adjustment Computations for a discussion of this problem and an example.
It doesn't deal with multiple angles, but it covers a lot of things you're not considering.

 
Posted : 04/02/2016 4:54 pm