Photogrammetry using Google Street View Imagery
Kent McMillan replied 7 years, 8 months ago 9 Members · 29 Replies
-
As it turns out, the Google Street View imagery is complicated by the fact that it is a composite of about eight wide-angle images that have been stitched together. I assume that the Google camera is oriented so that four of the lenses are pointing either parallel with the direction of travel of the camera car or at right angles to it and the other four fill in the quadrants between them.
The test project that I described in my earliest posts was lucky in that the image used was just a part of the image from the forward-looking lens and didn’t overlap onto adjacent images used in the composite. So that was probably why it worked like a single frame image. In the case of the image of the building on West 6th Street in Austin that I posted above, that image is most likely a composite stitched together from two different ones and can’t be treated as a single frame.
-
When I was with the highway department we were split into two divisions, ground and aerial. My office just happened to be the last one in the “ground” side and most of my prairie-dog-cubicle neighbors were with “aerial”. I picked up a lot of info from contact ‘osmosis’.
Photo analysis is close to witchcraft. At the time we were just being able to “rubber band” the digital images to “fit” the visible control. Everybody in aerial was convinced it was the neatest thing since sliced bread. And at times it worked. And then we eventually started seeing the dirty underside of manipulating images too much. Basically we determined you can wiggle it all around and ‘fit’ every control point with amazing results…and still be way off in other parts of the image.
From what I understand now, things have moved on to some amazing results. 20 years ago was just the beginning of digital image analysis.
As fancy as we were back then, I still had to hand copy our results for the digital level loops through our aerial targets. I got a call from one of the photogrammetrists saying she couldn't get one of the targets (on a flat bridge deck) to focus; it was off by about 0.18′. When I went over everything I discovered I had transposed an elevation when I hand copied the list. I was amazed someone could detect an error of less than two tenths by stereo comparison from a flying height of 4000′.
There is a science there but you would probably have better results if you just had an old Kodak image of the block rather than the ‘Frankenstein’ Google image.
-
paden cash, post: 408295, member: 20 wrote: Photo analysis is close to witchcraft. At the time we were just being able to “rubber band” the digital images to “fit” the visible control. Everybody in aerial was convinced it was the neatest thing since sliced bread. And at times it worked. And then we eventually started seeing the dirty underside of manipulating images too much. Basically we determined you can wiggle it all around and ‘fit’ every control point with amazing results…and still be way off in other parts of the image.
One of the coolest software tools I use is Global Mapper. SOP for many investigations dealing with the historical conditions of a tract of land 40 to 60 years ago is acquiring modern digital orthophotos (free from TNRIS for sites in Texas) and using them to control the rectification of aerial photos downloaded from the archives (also free of charge) of the US Geological Survey. Obviously, terrain displacement can be a major factor, but in relatively flat country (no need to mention any states that come to mind), the whole business works without surprises.
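As a rough illustration of what that control-based rectification does under the hood, here is a least-squares fit of a six-parameter affine transform to control points. All of the coordinates below are invented for the demo (and constructed to be exactly affine-consistent); real rectification in Global Mapper may use higher-order or projective models:

```python
import numpy as np

# Hypothetical control: pixel coordinates measured on the scanned
# historical photo, and matching ground coordinates read from a
# modern orthophoto (all numbers invented for this sketch).
pixel = np.array([[120.0,  80.0],
                  [950.0, 110.0],
                  [900.0, 870.0],
                  [150.0, 820.0]])
ground = np.array([[3101240.0, 10078440.0],
                   [3102900.0, 10078380.0],
                   [3102800.0, 10076860.0],
                   [3101300.0, 10076960.0]])

# Design matrix for a 6-parameter affine transform:
#   E = a*x + b*y + c,   N = d*x + e*y + f
n = pixel.shape[0]
A = np.zeros((2 * n, 6))
A[0::2, 0:2] = pixel      # easting equations
A[0::2, 2] = 1.0
A[1::2, 3:5] = pixel      # northing equations
A[1::2, 5] = 1.0
b = ground.reshape(-1)

params, _, _, _ = np.linalg.lstsq(A, b, rcond=None)

# Residuals at the control points show how well the "rubber band"
# fits *there* -- they say nothing about distortion elsewhere in
# the image, which was exactly the dirty underside mentioned above.
fit = (A @ params).reshape(-1, 2)
print(np.abs(fit - ground).max())
```

With more than three control points the system is overdetermined, so the residuals at the control give at least some internal check, which a three-point "perfect fit" never does.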
-
My latest hypothesis is that to get what are very nearly single-frame images from Google Street View, you compute the heading of the camera car from the successive latitudes and longitudes of the images in the sequence before and after the one of interest and enter that heading (in decimal degrees) into the “___h” part of the URL line of your browser with yaw set to 90, i.e. “90y” in the same URL. A refresh should bring up an image with the forward or backward line of travel at the center of the field of view, camera dead level.
Then, stepping the “____h” component of the URL in 45° increments should bring up views centered on each component camera’s axis, with the stitched margins showing some distortion toward the edges. Within some angular distance of the center of the image, before the stitching begins, though, the view should (hypothetically) behave as a single-frame image.
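The two steps above, computing the direction of travel from successive image positions and then stepping the heading in 45° increments, can be sketched as follows. The coordinates of the before/after frames are invented placeholders (read the real ones from the image sequence), and the data portion of the URL is elided since it belongs to a specific panorama:

```python
import math

def forward_azimuth(lat1, lon1, lat2, lon2):
    """Forward azimuth in degrees true from point 1 to point 2 on a
    sphere -- plenty accurate over a few camera-car steps."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

# Hypothetical positions of the frames just before and after the one
# of interest (invented numbers for the sketch).
before = (30.269520, -97.747940)
after = (30.269660, -97.748390)
travel = forward_azimuth(*before, *after)

# Step the "h" component in 45-degree increments from the direction
# of travel, holding the station and the "y" and "t" components fixed.
base = ("https://www.google.com/maps/@30.2695865,-97.748165,3a,75y,"
        "{h:.2f}h,90t/data=...")
urls = [base.format(h=(travel - 45.0 * k) % 360.0) for k in range(8)]
for u in urls:
    print(u)
```

Over a step of a few meters the spherical formula and a plane bearing computed from the lat/long differences give essentially the same answer.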
-
Here’s an example showing that same building on West 6th Street in Austin. All of the images are taken from nominally the same camera station, i.e. at the same lat and long, but exactly 45° different in orientation, beginning parallel with the direction of travel of the camera car (which I computed as 288.18° True from successive lats and longs in the image sequence along the street):
Heading = 288.18°
https://www.google.com/maps/@30.2695865,-97.748165,3a,75y,288.18h,90t/data=!3m6!1e1!3m4!1suBVIbtAztjtMfMoJ78qq3Q!2e0!7i13312!8i6656
Heading = 243.18°
https://www.google.com/maps/@30.2695865,-97.748165,3a,75y,243.18h,90t/data=!3m6!1e1!3m4!1suBVIbtAztjtMfMoJ78qq3Q!2e0!7i13312!8i6656
Heading = 198.18°
https://www.google.com/maps/@30.2695865,-97.748165,3a,75y,198.18h,90t/data=!3m6!1e1!3m4!1suBVIbtAztjtMfMoJ78qq3Q!2e0!7i13312!8i6656
Heading = 153.18°
https://www.google.com/maps/@30.2695865,-97.748165,3a,75y,153.18h,90t/data=!3m6!1e1!3m4!1suBVIbtAztjtMfMoJ78qq3Q!2e0!7i13312!8i6656
Heading = 108.18°
https://www.google.com/maps/@30.2695865,-97.748165,3a,75y,108.18h,90t/data=!3m6!1e1!3m4!1suBVIbtAztjtMfMoJ78qq3Q!2e0!7i13312!8i6656
-
Kent McMillan, post: 408333, member: 3 wrote: with yaw set to 90, i.e. “90y” in the same URL.
Except the “___y” component of the URL line is evidently the field of view, and the “______t” component is the zenith angle of the camera orientation.
At first impression, “45y” (a 45° field of view) looks like a good standard setting for retrieving components of the Google Street View image that don’t have any stitch lines and can probably be treated as single-frame images.
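Assuming the URL layout seen above (station, then “…y,…h,…t” for field of view, heading, and tilt), a small parser can pull the camera parameters back out of any Street View URL. This is just a sketch against the observed URL pattern, not a documented format:

```python
import re

def parse_streetview_url(url):
    """Extract field of view (y), heading (h), and tilt/zenith (t)
    from a Street View URL of the observed form ...,3a,75y,288.18h,90t/..."""
    m = re.search(r"(?P<fov>[\d.]+)y,(?P<heading>[\d.]+)h,(?P<tilt>[\d.]+)t", url)
    if m is None:
        raise ValueError("no Street View camera parameters found")
    return {k: float(v) for k, v in m.groupdict().items()}

url = ("https://www.google.com/maps/@30.2695865,-97.748165,3a,75y,288.18h,90t/"
       "data=!3m6!1e1!3m4!1suBVIbtAztjtMfMoJ78qq3Q!2e0!7i13312!8i6656")
print(parse_streetview_url(url))
```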
-
One thing that I’ve discovered from examining Google Street View images is that you can verify the number of separate images that were stitched together to make the panorama just by checking the frequency of stitch artifacts in the panorama. If there were eight camera images, then there should be a stitch every 45 degrees.
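The arithmetic behind that check is simply that evenly spaced component cameras put a stitch every 360/N degrees, so the observed spacing estimates the camera count:

```python
def camera_count(stitch_spacing_deg):
    """Estimate the number of component cameras from the angular
    spacing of stitch artifacts in the panorama, assuming the
    cameras are evenly spaced around the horizon."""
    return round(360.0 / stitch_spacing_deg)

print(camera_count(45.0))  # stitches every 45 degrees -> 8 cameras
```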
-
Interesting.
Google has had at least two different optical systems in widespread use; one used fish-eye lenses, one didn’t. See: Google Street View: Capturing the World at Street Level
Metric Localization using Google Street View might also be worth a read if you haven’t seen it already – it sounds like you’re not the first person to try to do this kind of thing. It also mentions an API for directly retrieving rectilinear projections of the Street View images. I think they’re referring to the Street View Image API. It’s pretty well documented, and if you get an API key you should be able to retrieve images directly if you want them:
https://developers.google.com/maps/documentation/streetview/intro
-
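For what it’s worth, a request against that API is just a parameterized URL. This is a sketch using the parameter names from the documentation linked above (size, location, heading, fov, pitch, key); YOUR_API_KEY is a placeholder:

```python
from urllib.parse import urlencode

# Build a Street View image request for the West 6th Street station,
# looking along the computed direction of travel with a 45-degree
# field of view. Parameter names follow the published documentation.
params = {
    "size": "640x640",
    "location": "30.2695865,-97.748165",
    "heading": "288.18",
    "fov": "45",
    "pitch": "0",
    "key": "YOUR_API_KEY",
}
request_url = ("https://maps.googleapis.com/maps/api/streetview?"
               + urlencode(params))
print(request_url)
```

The API route sidesteps the stitched panorama viewer entirely, which may make the single-frame question moot for some uses.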
One thing that I need to investigate further is the datum to which the lat and long position tags in Google Street View refer.
Kent McMillan, post: 408337, member: 3 wrote: At first impression, “45y” (a 45° field of view) looks like a good standard setting for retrieving components of the Google Street View image that don’t have any stitch lines and can probably be treated as single-frame images.
That first impression holds up, but the “45y” setting actually returns an image from Street View with about a 76° field of view, not 45°.
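If the view really is rectilinear, the actual field of view fixes the equivalent pinhole camera constant for treating the frame photogrammetrically. A sketch, using the observed ~76° figure and an assumed 640-pixel image width:

```python
import math

def focal_length_px(image_width_px, fov_deg):
    """Equivalent pinhole focal length, in pixels, for a rectilinear
    image of the given width and horizontal field of view:
    f = (w/2) / tan(fov/2)."""
    return (image_width_px / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

# The "45y" setting was observed to deliver roughly a 76-degree field,
# so for a 640-px-wide capture the camera constant would be:
print(round(focal_length_px(640, 76.0), 1))  # about 409.6 px
```

Getting the field of view wrong by the 45° vs. 76° margin would badly distort any angles scaled off the image, so pinning this number down matters.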