Weighted Mean

(@field-dog)
Posts: 1372
Registered
Topic starter
 

Setting up control for a topo using RTK. Office wants each control point shot 3 times, 3 minutes each session. I'm used to only 2 sessions each point. The second session I use a weighted mean. My question is, if I use the weighted mean after each consecutive session of 3 sessions, isn't that like taking the weighted mean of the weighted mean? Or do I throw out the outlier of the 3 and use the weighted mean of the remaining 2? I think I'm forced to use the weighted mean after the first 2 sessions if I want to reuse the same point number. I have to see how the software works. On the other crew, I would call the same point 1A and 1B, for example, average the coordinates in a calculator, then store the average as point 1. We're using Topcon MAGNET Field.

 
Posted : May 20, 2019 9:06 am
(@bill93)
Posts: 9834
 
Posted by: Field Dog

Office wants each control point shot 3 times, 3 minutes each session.

1. They should probably have also specified the interval between sessions, long enough to let the satellites move and give you different multipath effects, so you avoid blindly accepting short-term measurements that will move around over time.

2. How are the weights determined? The proper method is to have some measure of the quality of each session and determine weights from that measure. Then combining two sessions would improve the quality of that average, and give you a stronger weight for combining that average with the third session.
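
To illustrate the idea (a minimal sketch, not MAGNET's documented method): if each session's weight is taken as 1/sigma^2 from whatever quality measure is available, then averaging sessions 1 and 2 first and carrying the combined sigma forward gives the same result as weighting all three sessions at once, so a "weighted mean of a weighted mean" is not a problem as long as the combined uncertainty is carried along. The coordinates and sigmas below are made up.

    # Inverse-variance weighting sketch; session values and sigmas are hypothetical.
    def weighted_mean(values, sigmas):
        """Return (mean, combined sigma) using weights of 1/sigma^2."""
        weights = [1.0 / s**2 for s in sigmas]
        mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
        return mean, (1.0 / sum(weights)) ** 0.5

    northings = [5000.012, 5000.031, 5000.019]   # three sessions, feet
    sigmas = [0.020, 0.030, 0.025]               # reported precisions, feet

    all_at_once = weighted_mean(northings, sigmas)

    n12, s12 = weighted_mean(northings[:2], sigmas[:2])               # sessions 1 and 2
    sequential = weighted_mean([n12, northings[2]], [s12, sigmas[2]]) # then session 3

    print(all_at_once)   # same mean and sigma (to rounding) as...
    print(sequential)    # ...the sequential "mean of a mean"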

 
Posted : May 20, 2019 9:39 am
(@field-dog)
Posts: 1372
Registered
Topic starter
 

1. Interval between sessions 3 to 4 hours.

2. I'm on point 1 for the second session. Storing as check point. Selecting "Use in weighted average."

 
Posted : May 20, 2019 9:46 am
(@steven-metelsky)
Posts: 277
Registered
 

How far apart are the points? I'd rather run through them with a gun and prisms on tripods to close out.

 
Posted : May 20, 2019 10:29 am
(@field-dog)
Posts: 1372
Registered
Topic starter
 

2. I don't know how the weights are determined. The only measures I have of the quality of each session are the PDOP, H residuals, and V residuals.

 
Posted : May 20, 2019 1:11 pm
(@field-dog)
Posts: 1372
Registered
Topic starter
 

We set 5 control points spaced 500 to 700 feet apart. My party chief has no plans to traverse through them. We will run through them with a level.

 
Posted : May 20, 2019 1:16 pm
(@rover83)
Posts: 2346
Registered
 

If it works like other field software, the horizontal and vertical precisions, RMSE and associated covariance matrices are stored with the vector XYZ deltas in the controller. As you add observations and check the weighted average box, it will recompute the weighted average using all the observations you have checked, weighting them appropriately with the metadata. Sort of a "running average".

Also, during post-processing and QA/QC, you should be able to throw out/disable any vectors that did not meet specs or appear to be outliers. All that info should come into the post processing software for evaluation.
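
A rough sketch of that "running average" behavior, assuming inverse-variance weights from the stored precisions and a simple enabled flag for vectors toggled out during QA/QC; the class, field names, and numbers are illustrative, not MAGNET's actual data model.

    # Illustrative only: recompute the weighted average from whichever
    # observations are currently checked/enabled.
    from dataclasses import dataclass

    @dataclass
    class Obs:
        north: float     # feet
        east: float      # feet
        h_prec: float    # horizontal precision (sigma), feet
        enabled: bool = True

    def running_weighted_average(observations):
        used = [o for o in observations if o.enabled]
        w = [1.0 / o.h_prec**2 for o in used]           # assumed 1/sigma^2 weights
        north = sum(wi * o.north for wi, o in zip(w, used)) / sum(w)
        east = sum(wi * o.east for wi, o in zip(w, used)) / sum(w)
        return north, east

    shots = [Obs(5000.012, 2000.048, 0.02),
             Obs(5000.031, 2000.055, 0.03),
             Obs(5000.019, 2000.051, 0.02)]
    print(running_weighted_average(shots))   # all three checked
    shots[1].enabled = False                 # disable a suspect vector
    print(running_weighted_average(shots))   # recomputed without it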

 
Posted : May 20, 2019 1:46 pm
(@thebionicman)
Posts: 4438
Customer
 

With RTK you hit diminishing returns around 20 seconds, 30 if the environment is marginal. You would do better to do a 20 by 20, rotating the pole 180° between. Separate two sessions like that by at least 20-plus minutes.

I have found policies of triple RTK sessions several minutes long to be all but a complete waste of time. PP static sessions of 15 minutes or more give you better results in less time. And by PP, I don't mean OPUS.

 
Posted : May 20, 2019 2:00 pm
(@norman-oklahoma)
Posts: 7610
Registered
 

I agree. There isn't much to be gained after 20-30 seconds. But I've only relatively recently come to that conclusion. Up to 2 or 3 years ago I was a 3 minute disciple. So I don't really fault Field Dog's office on that too much.  

 
Posted : May 20, 2019 2:17 pm
(@dougie)
Posts: 7889
Registered
 
Posted by: Field Dog

Setting up control for a topo using RTK. Office wants each control point shot 3 times, 3 minutes each session. I'm used to only 2 sessions each point. The second session I use a weighted mean...

It sounds like you haven't worked here very long...

Have you asked the "office" what the next step will be to establish the final coordinate that you will use?

You can ask for all the advice you want here, but the bottom line will be "what the boss wants to do".

For what it's worth, The Bionic Man has given the best response so far.

My 2 cents: you're setting up control for a topo? Is a couple of hundredths going to make a difference?

Dougie

 
Posted : May 20, 2019 2:21 pm
(@norman-oklahoma)
Posts: 7610
Registered
 

"....if I use the weighted mean after each consecutive session of 3 sessions, isn't that like taking the weighted mean of the weighted mean?"

I think that you may be confusing coordinates with data. Coordinates are not data. They are computed from data.

 
Posted : May 20, 2019 2:24 pm
(@norman-oklahoma)
Posts: 7610
Registered
 

"Office wants each control point shot 3 times, 3 minutes each session. I'm used to only 2 sessions each point. "

If you take 2 shots and one is an outlier you have no way to know which one is the wayward son. A return to the field is needed, and time is lost. If you take 3 then the outlier is immediately apparent. So three shots is OK in my book, especially with less experienced field crews. 3 shots each separated by 4 hours obviously stretches the program into a 2nd day, which may not be in the budget. I recommend rotating the rod 120° between each shot, partly to compensate for rod plumbness (or lack thereof), but mostly to prove that such compensation was unnecessary.

After the office sees the vector quality they are getting on a few jobs they may relax and allow shorter and fewer occupations. Work with them.
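
A tiny sketch of that two-versus-three logic, using a made-up 0.05 ft tolerance: with two shots you can only see that they disagree, while with three the lone shot sitting apart from the other two identifies itself.

    # Hypothetical tolerance; flags the shot that disagrees with *both* others.
    from math import hypot

    def flag_outlier(shots, tol=0.05):
        """shots: list of (north, east) in feet; return index of the wayward shot, or None."""
        if len(shots) < 3:
            return None                  # two shots: a bust is visible but not attributable
        for i, (n, e) in enumerate(shots):
            others = [s for j, s in enumerate(shots) if j != i]
            if all(hypot(n - on, e - oe) > tol for on, oe in others):
                return i
        return None

    print(flag_outlier([(5000.01, 2000.05), (5000.02, 2000.05), (5000.13, 2000.11)]))  # 2
    print(flag_outlier([(5000.01, 2000.05), (5000.13, 2000.11)]))                      # None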

 
Posted : May 20, 2019 2:34 pm
(@mightymoe)
Posts: 9920
Registered
 

I don't use the Topcon software, so I don't know how it's treating the points. The more important question would be how the RTK shots are being controlled: one point base near the project, or a VRS network?

I would do as others have suggested and fast-static it: 10 minutes for the points, two observations per point, more than one receiver to get vectors between the control points, and like thebionicman says, this isn't an OPUS job.

Static will allow direct connections between the control points, which RTK can't do. Then a least squares adjustment. It's basically foolproof. Of course you need at least three receivers; the more the better.

But for your first question, I can't see how the program adjusts the points beyond a mean: add all three northings and divide by 3. Maybe it's looking at something else, but RTK is a vector from a base or a network, so it's tough to do much adjustment between those control points using that process; each point is standalone.

In clear sky with newer receivers, a couple of hundredths horizontal is SOP for RTK these days, so...

Just look at each point and make sure there isn't an outlier; if I saw something over 0.05' horizontally I would look closer at it. The third shot can be a sanity check; not a bad idea, I can see why they do it.
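
For what it's worth, a minimal version of that check: a straight mean of the three sessions (add and divide by 3) plus a spread test, with 0.05 ft used only because it is the rule of thumb mentioned above. Coordinates are made up.

    # Straight (unweighted) mean plus a horizontal spread check.
    from math import hypot

    sessions = [(5000.012, 2000.048), (5000.019, 2000.051), (5000.031, 2000.055)]  # (N, E), feet

    mean_n = sum(n for n, _ in sessions) / len(sessions)
    mean_e = sum(e for _, e in sessions) / len(sessions)
    worst = max(hypot(n - mean_n, e - mean_e) for n, e in sessions)

    print(round(mean_n, 3), round(mean_e, 3), round(worst, 3))
    if worst > 0.05:
        print("look closer at this point before accepting it")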

 
Posted : May 20, 2019 2:38 pm
(@norman-oklahoma)
Posts: 7610
Registered
 

"But for your first question, I can't see how the program adjusts the points beyond a mean, add all three northings and divide by 3, maybe it's looking at something else, but RTK is a vector from a base or a network, tough to do much adjustment between those control points using that process, each point is standalone."

Every RTK/controller combination I've used, save for one, not only records the vector but also vector quality data. And that one was a bug admitted to by the vendor. That vector quality data (readily apparent in the StarNet-formatted GPS vector data) is the basis for the weighting.

 
Posted : May 20, 2019 2:51 pm
(@dmyhill)
Posts: 3082
Registered
 

The first problem is that you are using Topcon Magnet, and you are not using Carlson SurvPC. I too have that issue.

If I had a choice, I would shoot three or four different points, then sit in the middle and run a resection routine in my DC for my topo control. If you added a fourth point, your DC software should show whether any of the points should be thrown out. It often happens that where you can get good sky is not where you want to set up your total station.

Waiting between shots on a point is best practice, but I would love to see some of those studies performed with the current constellations available. I am guessing that the 20 minutes needed for GPS only is compressed quite a bit when you combine ours with the Communists' sats.


 
Posted : May 20, 2019 3:39 pm
 Norm
(@norm)
Posts: 1290
Registered
 

You can still do it the old way with 1A, 1B, etc. in Magnet Field. You can also choose to use or not use one or more observations in edit raw data on re-observations of the same weighted mean mark. Turn use on and off and you can see by the results what is happening. Three observations are always better than two for the warm fuzzy.

 
Posted : May 21, 2019 1:20 pm
(@a-harris)
Posts: 8761
 

Weighted mean is when you apply your personal judgment of how reliable the information is for each particular observation.

You look at the raw data: the amount of reliable data, the noise ratio, local objects that may interfere, and how the observation compares to every other observation around the job and on that particular point, and then rate the importance of that set of data.

IMVHO, when it comes to RTK, any and all data should be held as close as the best available unless something with one particular setup is way out of whack with the other readings.

I have an application where many differing locations for the same point can be input and an averaged location will be solved without weighted means.

I've used those solutions many times and truthfully, I cannot say that is the best way, and I am talking about positions that fall within 0.04 ft of one another.

One of the sightings could have been the best of them all, because we all work with an uncertainty of exactness on every observation we make and cannot truly declare any single one, or the weighted mean, to be closer to the exact location.

0.02

 
Posted : May 21, 2019 3:20 pm
(@leegreen)
Posts: 2195
Customer
 

The problem is you clearly don't know how to use Topcon Magnet Field.

In Magnet, record the point with the same point ID each time. The software will prompt you to rename, overwrite, or use as a weighted average. You can save the same point hundreds of times if you wish. Go to Edit Points and you can toggle on or off any individual shot.

Magnet 5.x can also compare the same point shot from a different base. This is much better redundancy than taking the same shot from the same base.

If you wish to learn the correct way to use Magnet, with fewer complaints, feel free to contact me.

 
Posted : May 21, 2019 3:25 pm
(@norman-oklahoma)
Posts: 7610
Registered
 

Probably the 20-minute figure is for use with GLONASS included. No GLONASS, longer separation.

 
Posted : May 21, 2019 4:13 pm
(@norman-oklahoma)
Posts: 7610
Registered
 

"In Magnet record the point with the same point ID each time. The software will prompt you to rename, overwrite or use as a weighted average. You can save the same point hundreds of time if you wish. Go to Edits Points and you can toggle on or off any individual shot."

SP Survey Pro (i.e. TDS) does the same thing, with the additional option of storing the observation without recomputing the coordinate. So does Access, if I recall correctly. I believe that VIVA did that as well.

 
Posted : May 21, 2019 4:17 pm