All answers have to be guesses, but let's test our statistical intuition.
With 4:30 left in the Cotton Bowl game, Southern Cal kicks a field goal to make the score 45 - 30. With what confidence (68%, 95%, etc.) would you say that USC will win the game?
With 4:07 left in the game, Tulane scores a touchdown and kicks the PAT to make the score 45 - 37. How does that change your confidence that USC will win the game?
USC mishandles the ensuing kick-off and starts inside its own 5 yard line. Confidence level?
On the second play, USC is tackled for a safety, making the score 45 - 39 with 3:20 left. Confidence level?
9 seconds left, Tulane scores a touchdown, tying the score at 45. Confidence level?
Tulane kicks the PAT, taking the lead at 46 - 45. Confidence level?
USC attempts to lateral its way down the field but fails.
Lesson: There are events in the tails of statistical distributions, and they can and do occur. A 95% confidence level is not a guarantee of correctness.
95% confidence means you will be wrong 1 in 20 times. Certainly not good enough if it is a critical situation. Like perhaps the precast beams fitting? Or any situation involving safety. I can't understand how that came to be a commonly accepted standard.
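To put that 1-in-20 in perspective, here is a minimal Python sketch (assuming independent checks, which real measurements often aren't) of how quickly the odds of at least one miss pile up:

```python
# Chance of at least one miss across n independent 95% checks.
for n in (1, 5, 20, 100):
    p_all_hold = 0.95 ** n
    print(f"{n:>3} checks: P(at least one miss) = {1 - p_all_hold:.1%}")
```

One check misses 5% of the time; by 20 independent checks you are more likely than not to have at least one miss.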
If I were signing that something measured within a tolerance I would probably pick 3 sigma for my own use but deceitfully advertise it as 2 sigma, to lessen the chance of being proved wrong.
The standard for experimentally confirming a theoretical result in particle physics is 5 sigma.
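For a sense of how fast the tails thin out at each sigma level, here is a short sketch using nothing but the standard library's math.erf:

```python
import math

# Two-sided coverage of a normal distribution at k sigma,
# and the long odds of landing outside that band.
for k in (1, 2, 3, 5):
    inside = math.erf(k / math.sqrt(2))   # P(|x - mu| < k * sigma)
    print(f"{k} sigma: {inside:.5%} inside, about 1 in {1 / (1 - inside):,.0f} outside")
```

Which is why 2 sigma is the roughly 1-in-20 standard, 3 sigma is about 1 in 370, and particle physics holds out for about 1 in 1.7 million.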
deceitfully advertise it as 2 sigma, to lessen the chance of being proved wrong
I'm missing something: how does your choice of which sigma to declare change the probability of your measurement being proven wrong? Or, rephrased: I measured 64.47 within a tenth at a 3 sigma confidence level. I'm just as likely to be proven wrong whether I state it was measured with a 5 sigma confidence level or a 2 sigma confidence level. The tolerance doesn't change based on the confidence level I advertise. Nor does my measurement. The likelihood of being proven wrong remains the same.
how does your choice of which sigma to declare change the probability of your measurement being proven wrong?
It doesn't change the probability, it changes the expectation. Promise a little, deliver a lot makes for a satisfied customer.
@lurker
If I can believe my statistics that say I have 3-sigma confidence of being within the stated tolerance, then the chance of being outside that range is much smaller than if my statistics say I have 2-sigma confidence. What I tell someone doesn't change that, but if I say it was 2-sigma, they will put less reliance on it than if I say 3-sigma.
Under-promise, over-deliver.
I can't disagree with the final lesson statement, but the events as outlined appear to me to describe 7 distinct events, each of which should carry its own expectation for the outcomes that follow.
So whatever the statisticians make of the first scenario (most likely a USC win) is only applicable at that point in time, with the situation as defined with 4:30 left in the game. IF it were an expectation of a USC win with 95% confidence in that answer, then the ensuing events do not make the original expectation incorrect. All it means is that something in the potential outside the 95% happened. Exactly why there is a confidence interval reported. Each subsequent second off the clock, score, or turnover redefines the situation.
From a very old data set collected with Topcon Hiper+ units set up a couple of feet from each other in a completely clear sky view area, way back about 2005 (too long ago; I don't recall the time intervals, but it may have been 60-second observations every few minutes over the day), a full day of satellite movement provides the following:
Example of difference between 95% vs 99% applied to 341 observations of elevation using base/rover RTK measurements (standard dev. multiplied by value from distribution table):
95% = ±0.077'
99% = ±0.101'
Observed range of the measured values (highest minus lowest): 0.316'.
There is an additional ±0.057' that actually did occur beyond the 99% error. That extra ±0.057' is almost as large as the entire 95% bound. Those values beyond 99% occurred in the data set very rarely, but they did happen. To me, 95% vs 99% seems a less important discussion in terms of surveying measurements than an understanding that the answer is only based on the data at hand. Using only the first 21 observations from that data set halves the statistically expected error, but would that reflect the truth of the measurement system? Are the first 21 observations always better than the remaining 320 observations?
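For anyone who wants to reproduce the arithmetic, here is a sketch of the "standard dev. multiplied by value from distribution table" step. The 0.0393' standard deviation is an assumption back-computed from the posted ±0.077' and ±0.101' figures; the raw observations aren't posted.

```python
from statistics import NormalDist

# Assumed sample standard deviation in feet, back-computed from the
# posted bounds (0.077 / 1.960 ~= 0.101 / 2.576 ~= 0.0393).
s = 0.0393

for conf in (0.95, 0.99):
    z = NormalDist().inv_cdf((1 + conf) / 2)   # two-sided z-value
    print(f"{conf:.0%} = ±{z * s:.3f}'")       # ~±0.077' and ~±0.101'
```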
I know that within the ALTA discussion, we are talking about relation to other points and coordinate values instead of elevations. But, in the example above, any one (or small group) of those observed values in my experiment could easily have been taken as being "good", and been very close to the reported location or three tenths from the reported location. Redundancy is needed to confirm the most likely value. A static session collected at the same time indicates that the average of all RTK elevation observations is only 0.012' higher than the static observation.
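That convergence is just the standard error of the mean shrinking with the square root of the observation count; a rough sketch under the same assumed 0.0393' standard deviation:

```python
import math

# Standard error of the mean for n repeated observations: s / sqrt(n).
# s is the assumed ~0.0393' standard deviation from the sketch above.
s, n = 0.0393, 341
print(f"standard error of the mean: ±{s / math.sqrt(n):.4f}'")   # ~±0.0021'
```

Under these assumptions, averaging 341 shots pins the mean down to a couple of thousandths, which is why the average can sit so close to the static value even though individual shots scatter by tenths.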
Contrary to how I often come across, I love seeing statistics and various forms of number crunching. But square one should be: is the data set appropriate for the type of analysis performed, such that the answers I get appropriately answer the question at hand within whatever accuracy/precision is required?
You get it, Jon! Statistical solutions reserve room for outliers, but sometimes we're tempted to throw those out.