
What IS the difference between a 1", 2", 5", 10" TS

(@larry-p)
Posts: 1124
Registered
Topic starter
 

In an interesting thread below, someone asked what makes the difference between a 1", 2", 5", or 10" version of a Total Station when the TS is from the same manufacturer and the same series.

I am not 100% certain on this (but pretty close).

My understanding is that the intent in manufacturing is to build every single TS to the highest standard. In the case described, every piece of hardware starts out intended to be a 1" gun.

Once a TS is complete (or very nearly so), it undergoes fairly rigorous tests. Only then does the manufacturer know how well the instrument will actually measure. It is the testing process that tells them which model was made. Manufacturing technology has reached the point where imperceptible imperfections cause enough variation to affect the outcome.

I once had the pleasure of watching the testing process. Inside the plant there was a whole series of permanently mounted targets. Each instrument was set up on a fixed bench and programmed to spend 24 hours repeatedly measuring the targets. Very cool process to watch.
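
The test-then-grade process described above can be sketched in a few lines of Python. This is a toy model, not the actual factory procedure: the target angles, the 1.7" error figure, and the grading cutoffs are all made up for illustration.

```python
import random
import statistics

def bench_test(true_angles_deg, sigma_arcsec, rounds=100):
    """Toy bench test: re-measure a set of fixed targets many times and
    report the spread (standard deviation) of the errors in arc-seconds."""
    errors_arcsec = []
    for _ in range(rounds):
        for true_deg in true_angles_deg:
            reading_deg = true_deg + random.gauss(0.0, sigma_arcsec) / 3600.0
            errors_arcsec.append((reading_deg - true_deg) * 3600.0)
    return statistics.stdev(errors_arcsec)

def grade(tested_arcsec):
    """Bin the tested spread into a nominal model class (hypothetical cutoffs)."""
    for spec in (1, 2, 5, 10):
        if tested_arcsec <= spec:
            return f'{spec}"'
    return "reject"

random.seed(1)
# Built to be a 1" gun, but tiny imperfections left it with a ~1.7" spread,
# so the test bins it one class down:
tested = bench_test([10.0, 95.0, 200.0, 310.0], sigma_arcsec=1.7)
print(grade(tested))
```

The point of the sketch is simply that the model designation is an *output* of the test, not an input to the build.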

Larry P

 
Posted : 14/12/2014 4:31 pm
(@wayne-g)
Posts: 969
Registered
 

I'll bite a bit on this never ending topic.

The only common component is the operator of the instrument. All the rest is mechanical in one way, shape, or form, plus many procedural issues that are also operator-driven.

 
Posted : 14/12/2014 7:35 pm
(@conrad)
Posts: 515
Registered
 

Hello Larry,

Thanks for the info.

From which manufacturer or manufacturers does your information come, and which manufacturer was running these robotic(?) tests you mention? Before servo-driven instruments, how do you imagine this test was done? Was it a test to punish the servos and bearings, or was it meant to directly determine the specifications of the instrument?

Cheers.

 
Posted : 14/12/2014 8:35 pm
(@dave-ingram)
Posts: 2142
 

I'm going from memory a few years back, but when I bought my Topcon 3003 I seem to remember an explanation given to me about the differences among the various 3000 series instruments; it involved some of the internal sensors and the reading of the circles.

Something like: the "low end" unit read one side of each circle. The next one up read both sides of the horizontal circle and one side of the vertical. The next better one read both sides of both - or something like that. I do believe that with Topcon (at that time) there was more of a difference than just what final testing proved.

 
Posted : 15/12/2014 3:53 am
(@conrad)
Posts: 515
Registered
 

Hello Dave,

The more I think about circles being graded and sorted according to how good they are, with the difference between 1", 5" and 10" coming down to some moment-to-moment mood of the circle etcher, the more unlikely the idea seems. It seems plausible that some circles may be reserved for the lowest-spec instruments, but I won't be surprised to learn there's more to it, and that raw circle accuracy may not be the biggest part of it. I'd love to really know, even if it means my speculation turns out to be dead wrong.

 
Posted : 15/12/2014 4:05 am
(@dave-ingram)
Posts: 2142
 

Another point of reference would be the Wild 2000 & 3000 series instruments from the 1980s that I've recently gotten interested in.

Going over the specs of these two instruments, there are distinct spec differences between their circles, yet the shells of the instruments are identical.

Also, the optics of an instrument contribute to its overall accuracy. For instance, you couldn't take the telescope off an old transit and expect it to compare to the panfocal scope on a T3000.

 
Posted : 15/12/2014 4:44 am
(@paul-in-pa)
Posts: 6044
Registered
 

Assume for a minute that there is only one scribed circle with 1" marks. A single sensor could report a reading to the nearest second. That would be your 5" gun.

Assume a second sensor is set opposite the first, at 180°00'00.5". That would be a more precise gun. To be a true 1" instrument on that single circle would require 3 sensors, set to read to 0.3".

To get 1,296,000 discrete marks on a single circle is a bit of a task, so more than one circle can be used. More circles and more sensors are required. That being said, putting more sensors on less-than-perfect circles is a waste.
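
One reason opposed sensors help can be shown with a toy model. If the circle's center is slightly off the rotation axis (eccentricity), each sensor sees an error that varies sinusoidally around the circle, and a pair of sensors 180° apart sees equal and opposite errors that cancel on averaging. The 5" error magnitude below is an assumed figure, and real encoder error budgets have more terms than this.

```python
import math

ECC_ARCSEC = 5.0  # assumed peak eccentricity error, in arc-seconds

def sensor_reading(true_deg, sensor_deg):
    """One sensor's reading: the true angle plus an eccentricity error that
    varies sinusoidally with position around the circle (simplified model)."""
    err_arcsec = ECC_ARCSEC * math.sin(math.radians(true_deg - sensor_deg))
    return true_deg + err_arcsec / 3600.0

def mean_reading(true_deg, sensor_positions_deg):
    """Average the readings from several sensors spaced around the circle."""
    readings = [sensor_reading(true_deg, s) for s in sensor_positions_deg]
    return sum(readings) / len(readings)

single = sensor_reading(37.0, 0.0)
paired = mean_reading(37.0, [0.0, 180.0])
print((single - 37.0) * 3600.0)  # single sensor: a few arc-seconds of error
print((paired - 37.0) * 3600.0)  # opposed pair: the sinusoidal term cancels
```

The cancellation is exact for the eccentricity term because sin(θ) and sin(θ − 180°) are equal and opposite; graduation and interpolation errors do not cancel this neatly, which is presumably where the extra sensors and circles come in.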

Whatever they do, it is pretty amazing.

Paul in PA

 
Posted : 15/12/2014 4:47 am
(@mneuder)
Posts: 79
Registered
 

I can't speak for other manufacturers, but I do know the process you describe is exactly how Leica does it.

 
Posted : 15/12/2014 5:22 am
(@plumb-bill)
Posts: 1597
Registered
 

They are graded and sorted based on how they perform after the instrument is assembled.

If a particular series of instruments has a top-of-the-line two-second model, they are, in effect, trying to make them all two-second models. The actual two-second instruments sit at one end of the bell curve, the 3-second models make up most of the middle, and the 7-second models are on the other end.

The 2", 3", and 7" figures are off the Leica TS12 datasheet.
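
The bell-curve picture is easy to simulate. This is a sketch, not Leica's actual yield data: the cutoffs, the 2.5" production mean, and the 0.6" spread are invented purely to show how most units land in the middle bin with smaller tails at either end.

```python
import random
from collections import Counter

def bin_spec(tested_arcsec):
    """Hypothetical cutoffs loosely matching the 2"/3"/7" model classes."""
    if tested_arcsec <= 2.0:
        return '2"'
    if tested_arcsec <= 3.0:
        return '3"'
    if tested_arcsec <= 7.0:
        return '7"'
    return "reject"

random.seed(0)
# Suppose the line aims near the top spec but lands around 2.5" on average;
# most units then fall in the middle (3") bin, with tails at 2" and 7".
population = [random.gauss(2.5, 0.6) for _ in range(10_000)]
counts = Counter(bin_spec(t) for t in population)
print(counts.most_common())
```

Shift the production mean down (better process control) and the 2" bin grows at the expense of the others, which matches the intuition that the manufacturer tries to make them all top-spec units.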

 
Posted : 15/12/2014 6:03 am
(@larry-p)
Posts: 1124
Registered
Topic starter
 

> I can't speak for other manufacturers, but I do know the process you describe is exactly how Leica does it.

I was indeed at a Leica plant when the process was demonstrated and observed.

Larry P

 
Posted : 15/12/2014 8:38 am
(@richard-davidson)
Posts: 452
Registered
 

I would suggest that the angular encoders are different.

I would also suggest that the idea of "permanent" targets is antiquated.

 
Posted : 15/12/2014 8:39 am
(@mike-marks)
Posts: 1125
Registered
 

> In an interesting thread below, someone asked what makes the difference between a 1", 2", 5", or 10" version of a Total Station when the TS is from the same manufacturer and the same series.
>
> I am not 100% certain on this (but pretty close).
>
> My understanding is that the intent in manufacturing is to build every single TS to the highest standard. In the case described, every piece of hardware starts out intended to be a 1" gun.
>
> Once a TS is complete (or very nearly so), it undergoes fairly rigorous tests. Only then does the manufacturer know how well the instrument will actually measure. It is the testing process that tells them which model was made. Manufacturing technology has reached the point where imperceptible imperfections cause enough variation to affect the outcome.
>
> I once had the pleasure of watching the testing process. Inside the plant there was a whole series of permanently mounted targets. Each instrument was set up on a fixed bench and programmed to spend 24 hours repeatedly measuring the targets. Very cool process to watch.
>
> Larry P

I've also heard that's how it's done.

CPU manufacturers do something similar: all chips of a given class are fabbed using the same etch masks, then stress-tested. The ones that can handle it are rated at the higher MHz specification (and priced accordingly, since they're relatively rare). The ones that can't are rated at a lower MHz specification and sold at a discount. Interestingly, the highest-quality chips usually come from nearer the center of a wafer, with the duds and slower parts nearer the edges.

 
Posted : 15/12/2014 10:56 am
(@a-harris)
Posts: 8761
 

[sarcasm]Up close it would be about a gnat's hair[/sarcasm]

 
Posted : 16/12/2014 12:27 pm