Put this in the dumb question category if you wish, but...
A robotic TS tracks the prism, but what algorithm does it use to find the CENTER of the prism? If a prism is, say, 0.2' wide, I know the EDM in a standard TS can get a distance shot across just about the full width of the prism. The distance measured is unlikely to be far off regardless of where on the prism it sights.
But what about the direction? If it just gets within that 0.2', at 100' that could be several minutes of arc off. There's got to be some way the station finds the center of the prism, within some stated limits.
What is it?
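For scale, the pointing error described in the question can be put in numbers. A quick sketch (any consistent units for width and distance):

```python
import math

ARCSEC_PER_RAD = 3600 * 180 / math.pi   # ~206265

def subtended_arcsec(width, distance):
    """Angle in arcseconds subtended by a target of the given width
    at the given distance (same units for both)."""
    return math.atan2(width, distance) * ARCSEC_PER_RAD

full = subtended_arcsec(0.2, 100)   # whole prism face: ~413" (~6.9')
half = subtended_arcsec(0.1, 100)   # worst-case pointing error: ~206" (~3.4')
```

So a pointing that merely lands somewhere on the face could be a few minutes of arc off the true center, which is why some centering mechanism is needed.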
It actually casts a broad beam, determines the area of highest reflectivity, and then centers on that area.
Non-robotic Topcons (newer ones at least) have this functionality as well, but it is usually buried quite deep in the menus.
The non-robots can't dial in to the center, of course; they just return a percentage of reflection.
I have only used this once. Set up on a highway overpass, we backsighted another overpass two kilometers away. Due to the exhaust and heat waves, there was no way I could visually see the backsight, so I kept adjusting the dials until it returned a distance, then slowly adjusted for the highest reflection.
Pay heed, this was not for any high accuracy work. If memory serves, we were tying edge of asphalt for a road widening.
It doesn't do anything beyond that. Once the target is acquired, corrections are made to the displayed vertical and horizontal angles based on how far off the line of sight is from the target center. Acquire a target at 100 meters and look where the crosshairs are fixed. Reacquire the target; often you can see a difference in where the crosshairs have settled.
Hello rfc,
Apparently the reflected laser beam is projected onto a CCD array, and from the 'image' on the array a calculation is performed to determine the centre of mass of the return. From there it would be a simple calculation to determine the offsets to the crosshairs.
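That centre-of-mass step can be sketched in a few lines. A toy detector frame, purely for illustration (not any manufacturer's actual processing):

```python
def spot_centroid(frame):
    """Intensity-weighted centre of mass (row, col) of a detector frame."""
    total = sum(sum(row) for row in frame)
    r = sum(i * sum(row) for i, row in enumerate(frame)) / total
    c = sum(j * v for row in frame for j, v in enumerate(row)) / total
    return r, c

# Toy 5x5 frame with a bright return centred near row 1, column 3.
frame = [[0.0] * 5 for _ in range(5)]
frame[1][3] = 10.0
frame[1][2] = frame[1][4] = frame[0][3] = frame[2][3] = 2.0
r, c = spot_centroid(frame)
# The offset of (r, c) from the array centre (2, 2) maps, through the
# optics, to the horizontal and vertical angle corrections.
offset = (r - 2.0, c - 2.0)
```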
Unless other manufacturers have started doing this too, this used to be a Leica-only feature. I suspect others have copied it as well, since it reduces wear and tear on the servos and lengthens battery life. My old Topcon perpetually moves to follow the image of the prism.
Exactly. This is also how the manufacturers have improved searching to ignore reflected signals that aren't prisms. The shape of the "image" of the returned signal is studied by the robot's processors to determine whether it looks like a prism or not. If not, it is disregarded.
The "image", I am guessing, is infrared and not visible light spectrum. And as others have said, the robot centers on that image (not a returned signal strength. Because it turns to the center of the image, the prism can be any size and be any distance from the robot. Also, if two images are returned (as is the case with a 360 prism that is perfectly pointed to the robot) the robot settles on an average between the two images.
> Exactly. This is also how the manufacturers have improved searching to ignore reflected signals that aren't prisms. The shape of the "image" of the returned signal is studied by the robot's processors to determine whether it looks like a prism or not. If not, it is disregarded.
>
> The "image", I am guessing, is infrared and not visible light spectrum. And as others have said, the robot centers on that image (not a returned signal strength. Because it turns to the center of the image, the prism can be any size and be any distance from the robot. Also, if two images are returned (as is the case with a 360 prism that is perfectly pointed to the robot) the robot settles on an average between the two images.
So the published resolution or "accuracy", e.g. 2", 3", 5", etc., is a function of the density of the cells in the CCD? Unless they're using lower-density (read: cheaper) CCDs in, say, a 5" instrument than in a 1" instrument, wouldn't the only difference be the software (paying attention to the center 1" of arc vs. the center 5"), and, of course, the price?
Just curious.
The CMOS or CCD would need a high enough pixel count to support the angular precision of the instrument, but it doesn't necessarily determine that precision, as the circle precision comes into it as well.
I don't know for sure, but I'm guessing a fairly low-resolution image could be used with varying optics. In other words, for tracking the instrument could use the image through a wide lens, then switch to a narrower lens to finely resolve the angle. But I'm guessing at that, because to populate a 1°30' field of view with 1-arc-second pixels would take about 29 megapixels, which is a lot of processing.
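A quick check of that pixel count, assuming uniform sampling at one pixel per arcsecond across the full square field:

```python
def pixels_for_fov(fov_arcmin, resolution_arcsec):
    """Pixel count for a square field of view covered uniformly at the
    given angular resolution per pixel."""
    side = fov_arcmin * 60 / resolution_arcsec   # pixels along one axis
    return int(side) ** 2

# 1 deg 30 min = 90 arcmin = 5400 arcsec per side at 1" per pixel:
mp = pixels_for_fov(90, 1.0) / 1e6   # ~29.2 megapixels
```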
Close one eye and look at a prism. The center of the prism is the pupil of your eye no matter which angle you look at it from. When the robot shines a light at the prism, it sees that light coming back from the center of the prism.
James
Leica told me that all circles are made the same but have different test results. The ones with very good test results go into 1" cases. If the circles being made that week were all very good, you get a 1"-grade circle in your 5" case.
> Exactly. This is also how the manufacturers have improved searching to ignore reflected signals that aren't prisms. The shape of the "image" of the returned signal is studied by the robot's processors to determine whether it looks like a prism or not. If not, it is disregarded.
>
As I understand it, this isn't quite what's happening, and I'll be more than happy to be corrected on this. The method I understand is much simpler and more elegant: when the ATR finds something, the ATR laser is cycled on and off, and anything that doesn't blink on and off with it is not a reflection of the laser but something else, and can be ignored.
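That on/off cycling amounts to simple frame differencing. A toy sketch with hypothetical frames (nothing vendor-specific):

```python
def laser_returns(frame_on, frame_off, threshold=5.0):
    """Mask of pixels that brighten only while the ATR laser fires.

    Steady sources (sun glints, headlights) show up in both frames and
    cancel in the difference; genuine retro-reflections of the modulated
    laser survive it.
    """
    return [[(on - off) > threshold for on, off in zip(row_on, row_off)]
            for row_on, row_off in zip(frame_on, frame_off)]

# Toy frames: a constant glint at (0, 0), a laser return at (2, 2).
off = [[0.0] * 4 for _ in range(4)]
off[0][0] = 50.0                     # glint, present with laser off
on = [row[:] for row in off]
on[2][2] = 40.0                      # glint plus the laser return
mask = laser_returns(on, off)        # only (2, 2) is flagged
```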
> The "image", I am guessing, is infrared and not visible light spectrum. And as others have said, the robot centers on that image (not a returned signal strength. Because it turns to the center of the image, the prism can be any size and be any distance from the robot. Also, if two images are returned (as is the case with a 360 prism that is perfectly pointed to the robot) the robot settles on an average between the two images.
A 785 nm laser is used for the ATR in the Leica 1200+ series of instruments, which is just past our visible limit of about 700 nm.
> The CMOS or CCD would need a high enough pixel count to support the angular precision of the instrument, but it doesn't necessarily determine that precision, as the circle precision comes into it as well.
Yep, it's circle precision and CCD together.
It seems, according to Leica literature, that the CCD is at least as good as the circle precision beyond a certain distance, but is the limiting factor at shorter distances. This, I think, is because the reflected signal changes size fairly dramatically, from a big blob at short range to a pinpoint beyond something like 100 m (from memory).
>
> I don't know for sure, but I'm guessing a fairly low-resolution image could be used with varying optics. In other words, for tracking the instrument could use the image through a wide lens, then switch to a narrower lens to finely resolve the angle. But I'm guessing at that, because to populate a 1°30' field of view with 1-arc-second pixels would take about 29 megapixels, which is a lot of processing.
The same could be achieved without enough megapixels to finely cover the entire FOV, and without variable optics. If a lens is 'fish-eyed' towards the edges of the FOV and 'flat' and magnified towards the centre (the only place a valid ATR angle measurement and distance will come back from anyway), fewer pixels are needed to 'see' to the edges of the telescope, while the precision and magnification remain where they're needed to make accurate angle readings towards the middle.
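As a rough sketch of that trade-off, assume (purely for illustration) that the per-pixel angular pitch grows linearly from 1" on-axis to 60" at the field edge. The required pixel count per axis collapses compared with uniform sampling:

```python
def pixels_per_axis(half_fov_arcsec, centre_pitch, edge_pitch, steps=100_000):
    """Pixels along one axis when the per-pixel angular pitch grows
    linearly from centre_pitch (on-axis) to edge_pitch (field edge).
    Integrates d(theta)/pitch(theta) numerically over the half-field."""
    span = half_fov_arcsec / steps
    n = 0.0
    for i in range(steps):
        theta = (i + 0.5) * span
        pitch = centre_pitch + (edge_pitch - centre_pitch) * theta / half_fov_arcsec
        n += span / pitch
    return 2 * round(n)

# Uniform 1" pixels over a 1 deg 30 min field: 5400 per axis (~29 MP sensor).
uniform = pixels_per_axis(2700, 1.0, 1.0)
# Relaxing the pitch to 60" at the edge needs only a few hundred per axis.
varying = pixels_per_axis(2700, 1.0, 60.0)
```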
> Leica told me that all circles are made the same but have different test results. The ones with very good test results go into 1" cases. If the circles being made that week were all very good, you get a 1"-grade circle in your 5" case.
I used to like to think this was true, but I don't think it is. The circles are graded and sold appropriately. The specs are apparently maximums, so you have some chance of getting a circle in your total station anywhere down to the next lowest spec. For example, a circle graded at 3.1" standard deviation will go into a 5" instrument, as will one graded at 5.00". Similarly, when you buy a 2" instrument you could get anywhere from a 1.01" to a 2.00" circle, and so on.
> So the published resolution or "accuracy", e.g. 2", 3", 5", etc., is a function of the density of the cells in the CCD? Unless they're using lower-density (read: cheaper) CCDs in, say, a 5" instrument than in a 1" instrument, wouldn't the only difference be the software (paying attention to the center 1" of arc vs. the center 5"), and, of course, the price?
No, as I understand it the CCDs are all just fine; your circle accuracy alone determines how the instrument is graded and sold.
I have been doing more total station angular testing, and I am now of the opinion that long-period circle graduation errors are what cripple any total station and determine its specced accuracy, and there's almost nothing practical you can do to overcome them except somehow map them and correct for them. I believe I could roughly do it, but I've never bothered as I don't do anything that requires it.
The total station I've tested appears to be very, very precise over small parts of the circle, with absolutely fantastic repeatability, but pick any two random, widely spaced parts of the circle and the accuracy between them could potentially be worse than you might expect given the instrument's specs.
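For what it's worth, mapping such long-period errors could in principle look like fitting low-order harmonics to residuals observed at known directions. A toy simulation of that idea (not any instrument's actual calibration routine):

```python
import math

def fit_harmonics(angles_deg, residuals, order=2):
    """Fit residual(t) ~ sum_k a_k*sin(k*t) + b_k*cos(k*t) by direct
    projection (exact for evenly spaced angles covering a full circle)."""
    n = len(angles_deg)
    coeffs = []
    for k in range(1, order + 1):
        a = 2.0 / n * sum(r * math.sin(k * math.radians(t))
                          for t, r in zip(angles_deg, residuals))
        b = 2.0 / n * sum(r * math.cos(k * math.radians(t))
                          for t, r in zip(angles_deg, residuals))
        coeffs.append((a, b))
    return coeffs

def correction(coeffs, angle_deg):
    """Predicted circle error at a direction, to subtract from a reading."""
    t = math.radians(angle_deg)
    return sum(a * math.sin(k * t) + b * math.cos(k * t)
               for k, (a, b) in enumerate(coeffs, start=1))

# Simulated circle with a 3" once-per-revolution graduation error,
# sampled every 10 degrees against a trusted reference:
angles = [10.0 * i for i in range(36)]
residuals = [3.0 * math.sin(math.radians(t)) for t in angles]
coeffs = fit_harmonics(angles, residuals)
# correction(coeffs, 90.0) recovers the ~3" error at the 90-degree mark.
```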
> Close one eye and look at a prism. The center of the prism is the pupil of your eye no matter which angle you look at it from. When the robot shines a light at the prism, it sees that light coming back from the center of the prism.
>
> James
Yes, that is correct if you're talking about rotating the prism (nodal or otherwise) while keeping the instrument fixed.
It's not correct if you mean swinging the instrument left and right across the prism: the distance measurement will remain the same under those circumstances, but the direction measured certainly won't.