Photogrammetry using Google Street View Imagery

(@kent-mcmillan)
Posts: 11419
Topic starter
 

A question has recently come up about the location of a feature that no longer exists, but that appears in Google Street View images taken in 2008 and 2011. I'm fairly comfortable with using standard photo analysis techniques to solve the camera station and orientation of single frame images and to solve the coordinates of objects that appear in multiple frames, but have never worked with Google Street View images. Has anyone attempted this and, if so, can they offer any tips?

 
Posted : 08/01/2017 8:50 am
(@paden-cash)
Posts: 11088
 

Anything is possible, Kent, but working with "uncontrolled" images would probably be difficult and the results might not be definitive.

A number of years ago the firm I worked for had some salesmen come out and give us a demonstration of their photo-analysis hardware and software. We had control points and a few key dimensions to some building corners in a commercial office complex with several buildings. This outfit used our geometry and produced some pretty amazing results merely from photo-analysis. Some of their dimensions were spot on with ours. The price was apparently prohibitive, but the experience was eye-opening.

What I see on Google Street View appears distorted, and that might be a factor in any analysis. Although I depend on Google Earth Street View to reconcile the location of a pole or MH the crew missed, I don't know if one could get accurate enough to produce a defensible location...

...at least to your picky standards anyway....;)

ps - how's Jasper?

 
Posted : 08/01/2017 9:49 am
(@patrick-mcgranaghan)
Posts: 86
Registered
 

Have you ever checked out the GIS forum on reddit? https://www.reddit.com/r/gis/ They might be able to help.

 
Posted : 08/01/2017 10:30 am
(@bajaor)
Posts: 368
Registered
 

Sending Kent to a GIS forum? This should be good!

 
Posted : 08/01/2017 10:49 am
(@bajaor)
Posts: 368
Registered
 

RE Kent's question, those images seem to be quite distorted panoramic mashups. I suppose if you were sure your "control points" were part of the same original image as the desired object, you might make something work. Those Street View images used to be stored somewhere in the browser cache or temp files, but I just looked and can't find any (in the past I'd stumbled on those panoramic images somehow while scrolling through images in IrfanView; they weren't rectangular, like the image I pulled off YouTube, below). I often find myself estimating dimensions and locations seen in Street View. Let us know what you find out, Kent.

 
Posted : 08/01/2017 10:54 am
(@jim-frame)
Posts: 7277
 

I've never tried to use StreetView images for measurement -- I'm not sure I'd even know how to start -- but when I need them for illustrating something I just do a screen capture rather than trying to find them in a cache somewhere.

 
Posted : 08/01/2017 11:08 am
(@kent-mcmillan)
Posts: 11419
Topic starter
 

paden cash, post: 408072, member: 20 wrote: Anything is possible, Kent, but working with "uncontrolled" images would probably be difficult and the results might not be definitive.

A number of years ago the firm I worked for had some salesmen come out and give us a demonstration of their photo-analysis hardware and software. We had control points and a few key dimensions to some building corners in a commercial office complex with several buildings. This outfit used our geometry and produced some pretty amazing results merely from photo-analysis. Some of their dimensions were spot on with ours. The price was apparently prohibitive, but the experience was eye-opening.

What I see on Google Street View appears distorted, and that might be a factor in any analysis. Although I depend on Google Earth Street View to reconcile the location of a pole or MH the crew missed, I don't know if one could get accurate enough to produce a defensible location...

...at least to your picky standards anyway....;)

ps - how's Jasper?

The imagery I'm looking at just appears to be wide-angle frames stitched together to make a panorama, and the part of the image of interest to me pretty much looks like a photo taken with a moderate wide-angle lens. The method I've used in the past has been to measure screen coordinates of features with known coordinates in the real world and to use the screen coordinates to solve both the camera station coordinates and a focal length of the lens scaled to screen coordinates.

Forming a resection to be tested for the variance of the residuals in Star*Net is one way to derive a best-fit estimate of scaled focal length and camera orientation.
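
For anyone who wants to see the mechanics outside of Star*Net, here is a minimal sketch of that resection in Python/SciPy. Everything numeric in it is hypothetical (made-up control points and screen offsets consistent with a scaled focal length near 21.5 in.), not data from the actual problem.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical control points (N, E) in feet, left to right in the frame.
control = np.array([[187.94, -68.40],
                    [247.56, -34.79],
                    [180.00,   0.00],
                    [216.66,  38.20],
                    [241.07,  97.40]])
# Horizontal offsets D from the principal point on the resized image,
# in inches (negative = left of center of frame).
D = np.array([-7.83, -3.02, 0.00, 3.79, 8.69])

def residuals(params):
    """Misclosures between control azimuths and photo-derived directions."""
    n, e, heading, f = params
    az = np.arctan2(control[:, 1] - e, control[:, 0] - n)  # grid azimuths
    photo = heading + np.arctan2(D, f)                     # arctan(D/f)
    v = az - photo
    return (v + np.pi) % (2 * np.pi) - np.pi               # wrap to +/-pi

# Initial guesses: station south of the control, axis due north, f = 20 in.
fit = least_squares(residuals, x0=[-50.0, 10.0, 0.0, 20.0])
n, e, heading, f = fit.x
print(f"Station N={n:.2f} E={e:.2f}  "
      f"heading={np.degrees(heading):.2f} deg  f={f:.2f} in.")
```

Here f is carried as a fourth unknown alongside the station and orientation; testing trial values of f one at a time, as discussed later in the thread, amounts to the same adjustment with f held fixed.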

 
Posted : 08/01/2017 12:50 pm
(@paden-cash)
Posts: 11088
 

Kent McMillan, post: 408100, member: 3 wrote: The imagery I'm looking at just appears to be wide-angle frames stitched together to make a panorama, and the part of the image of interest to me pretty much looks like a photo taken with a moderate wide-angle lens. The method I've used in the past has been to measure screen coordinates of features with known coordinates in the real world and to use the screen coordinates to solve both the camera station coordinates and a focal length of the lens scaled to screen coordinates.

Forming a resection to be tested for the variance of the residuals in Star*Net is one way to derive a best-fit estimate of scaled focal length and camera orientation.

There is no doubt in my mind one can make positional determinations from the photos. As you've stated, controlling and determining the location of the remaining features with actual field measurements is paramount. I'm sure you've got a pretty good idea of where you're going with it.

Let us know how it turns out with your Star*Net...there's nothing like splitting the pavement with a worn-out rag tape...blowing a 0.15' paint spot at the "center"....then setting up the total station there and proceeding with a standard accuracy survey..;) (you're starting to sound like an Okie...)

 
Posted : 08/01/2017 12:56 pm
(@spledeus)
Posts: 2772
Registered
 

The fellow at UGRIDD is very knowledgeable. I believe he headed an app for extraction using point cloud plus panoramic imagery.
How close do you need to be?

 
Posted : 08/01/2017 4:18 pm
(@kent-mcmillan)
Posts: 11419
Topic starter
 

spledeus, post: 408116, member: 3579 wrote: The fellow at UGRIDD is very knowledgeable. I believe he headed an app for extraction using point cloud plus panoramic imagery.
How close do you need to be?

I went ahead and made the calculation in Star*Net and got a solution for the camera station with a standard error of +/-0.14 ft. in N and +/-0.20 ft. in E, which is close enough for this one. The azimuth from the camera station to the object has a standard error of +/-0°15', which gives an uncertainty of about +/-0.36 ft., which is also close enough.

Basically, the method of solution consisted of taking a screen capture from Street View and resizing it in Photoshop so that the frame was 20.00 inches wide, with the principal point being 10.00 inches from the edge of the frame.

Then, I digitally scaled distances from the edge of the frame to various objects in the view and converted them to distances from the center of frame. The initial assumption was a scaled focal length, f, of 20 inches. The distances, D, from the center of frame to the objects could then be converted to angles as arctan(D/f).

That generated directions to five objects in the scene with known coordinates from which a least squares solution of the camera station could be easily made.

The first guess of f=20 in. for the scaled focal length generated angles which yielded an LS solution of the Camera Station. Successive values of f generated the following error factors in Star*Net:

f (in.)    Error Factor
20.0       6.5
21.5       1.25
22.0       2.36
25.0       10.0

So f=21.5 was near the minimum. It would be possible to refine f further by testing values around 21.5, but 21.5 was probably close enough for government work, with residuals in angles averaging about 0°02'.
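
That sweep is easy to reproduce in miniature, in Python rather than Star*Net. A sketch, reusing the same hypothetical five-point control set and screen offsets as in the earlier sketch (all values made up): hold each trial f fixed, resect the station and heading from the implied angles, and compare error factors.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical control (N, E) in feet and screen offsets D in inches.
control = np.array([[187.94, -68.40], [247.56, -34.79], [180.00, 0.00],
                    [216.66, 38.20], [241.07, 97.40]])
D = np.array([-7.83, -3.02, 0.00, 3.79, 8.69])

def error_factor(f, sigma=np.radians(2.0 / 60.0)):
    """Root reference variance for a trial f, assuming a 0-02' angle sigma."""
    obs = np.arctan2(D, f)                     # angles off the optical axis
    def residuals(params):
        n, e, heading = params
        az = np.arctan2(control[:, 1] - e, control[:, 0] - n)
        v = az - heading - obs
        return ((v + np.pi) % (2 * np.pi) - np.pi) / sigma
    fit = least_squares(residuals, x0=[-50.0, 10.0, 0.0])
    dof = len(D) - 3                           # 5 directions, 3 unknowns
    return np.sqrt(2.0 * fit.cost / dof)       # fit.cost = 0.5 * sum(v**2)

for f in (20.0, 21.5, 22.0, 25.0):
    print(f"f = {f:4.1f} in.  error factor = {error_factor(f):.2f}")
```

The error factor bottoms out near the true scaled focal length, which is the behavior the table above shows.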

 
Posted : 08/01/2017 8:36 pm
(@paden-cash)
Posts: 11088
 

Kent McMillan, post: 408129, member: 3 wrote:

I went ahead and made the calculation in Star*Net and got a solution for the camera station with a standard error of +/-0.14 ft. in N and +/-0.20 ft. in E, which is close enough for this one. The azimuth from the camera station to the object has a standard error of +/-0°15', which gives an uncertainty of about +/-0.36 ft., which is also close enough.

We all knew you could do it.

 
Posted : 08/01/2017 8:46 pm
(@kent-mcmillan)
Posts: 11419
Topic starter
 

paden cash, post: 408130, member: 20 wrote: We all knew you could do it.

I was looking for an even easier way than the iterative method. I have an idea that I'm going to try.

 
Posted : 08/01/2017 8:57 pm
(@kent-mcmillan)
Posts: 11419
Topic starter
 

BajaOR, post: 408088, member: 9139 wrote: RE Kent's question, those images seem to be quite distorted panoramic mashups. I suppose if you were sure your "control points" were part of the same original image as the desired object, you might make something work. Those Street View images used to be stored somewhere in the browser cache or temp files, but I just looked and can't find any (in the past I'd stumbled on those panoramic images somehow while scrolling through images in IrfanView; they weren't rectangular, like the image I pulled off YouTube, below). I often find myself estimating dimensions and locations seen in Street View. Let us know what you find out, Kent.

I'd think that the panoramic images could be used just like any normal single-frame photo if they were cropped to make images with fields of view of about 50 degrees. Once the camera station was solved from one frame, it probably could be applied to solve the orientation of successive, uncontrolled frames with sufficient overlap to include objects with control coordinates in the next otherwise uncontrolled frame. In a 360° panorama, there would be a closing condition at the end of the round in that the direction to the first object in the first frame should be identical to the same object in the last frame.
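
Both halves of that are easy to put numbers on. A sketch, assuming the crops render as ordinary perspective frames (the 50-degree figure and the relative headings below are illustrative, not measured):

```python
import math

def scaled_focal_length(width_in, fov_deg):
    """f such that the frame edge sits at fov/2 off the optical axis."""
    return (width_in / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

# A 20.00 in. frame with a 50-degree horizontal field of view implies
# f of about 21.45 in. -- consistent with the 21.5 found by the sweep.
print(f"f = {scaled_focal_length(20.0, 50.0):.2f} in.")

# Closing condition: the relative orientations between successive frames
# (hypothetical values) should sum to 360 degrees around the panorama.
relative_headings = [51.2, 50.8, 51.5, 50.9, 51.3, 51.0, 53.1]
misclosure = sum(relative_headings) - 360.0
print(f"misclosure = {misclosure:+.1f} deg")
```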

 
Posted : 09/01/2017 6:28 am
(@kent-mcmillan)
Posts: 11419
Topic starter
 

Here's an example for which I can provide control coordinates for the four building corners. The object is to solve the coordinates of the camera station and the azimuths to two features, the Pine Tree and the Blue-painted Pipe Sign Standard.

The photo was sized to a width of 20.00 inches for convenience. That is, the center of the frame is 10.00 inches from both edges and the horizontal image distances from the center of frame can be obtained by simply subtracting the distances noted on the pdf from 10.00 inches.

Attached files

Photo1.pdf (105 KB) 

 
Posted : 09/01/2017 8:20 am
(@paden-cash)
Posts: 11088
 

Kent McMillan, post: 408179, member: 3 wrote: Here's an example for which I can provide control coordinates for the four building corners. The object is to solve the coordinates of the camera station and the azimuths to two features, the Pine Tree and the Blue-painted Pipe Sign Standard.

The photo was sized to a width of 20.00 inches for convenience. That is, the center of the frame is 10.00 inches from both edges and the horizontal image distances from the center of frame can be obtained by simply subtracting the distances noted on the pdf from 10.00 inches.

I applaud your efforts, Kent. You've put some serious thought into "slapping down a 21st-century scale" and coming up with a good SWAG. Your geometry is sound but still (out of your control) dependent upon an unknown image. It appears as though you were lucky enough to find enough fixed positions within a single frame. Had the field of view been a mosaic, would that have affected your results?

PS - If I send you some 1963 photos of Dealey Plaza maybe you could determine the actual location of the shooter behind the "grassy knoll"...hmmmm?

 
Posted : 09/01/2017 8:46 am
(@kent-mcmillan)
Posts: 11419
Topic starter
 

Kent McMillan, post: 408179, member: 3 wrote: Here's an example for which I can provide control coordinates for the four building corners. The object is to solve the coordinates of the camera station and the azimuths to two features, the Pine Tree and the Blue-painted Pipe Sign Standard.

The photo was sized to a width of 20.00 inches for convenience. That is, the center of the frame is 10.00 inches from both edges and the horizontal image distances from the center of frame can be obtained by simply subtracting the distances noted on the pdf from 10.00 inches.

In case anyone wants to work the Camera Station coordinates in the example I posted above, here's the listing of the control coordinates on the building corners seen in the photo (in the same order as they appear in the photo, Left to Right).

C 1502 10071156.665 3112812.778 ! ! 'BLDG.COR
C 1285 10071183.894 3112721.616 ! ! 'BLDG.COR
C 1294 10071196.850 3112675.920 ! ! 'BLDG.COR
C 1295 10071203.914 3112652.000 ! ! 'BLDG.COR

Not that it particularly matters for the purposes of computing the Camera Station coordinates by resection, but the Combined Scale Factor for that site is 0.999944 in the Texas Coordinate System of 1983 (Central Zone).
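
For anyone who wants to script against that listing, a small parser will pull it into point number -> (N, E) pairs. This assumes the simple fixed-coordinate "C point N E" record layout shown above; Star*Net record formats vary, so treat it as a sketch.

```python
# Parse the Star*Net "C" records above into {point number: (N, E)}.
control_listing = """\
C 1502 10071156.665 3112812.778 ! ! 'BLDG.COR
C 1285 10071183.894 3112721.616 ! ! 'BLDG.COR
C 1294 10071196.850 3112675.920 ! ! 'BLDG.COR
C 1295 10071203.914 3112652.000 ! ! 'BLDG.COR
"""

points = {}
for line in control_listing.splitlines():
    fields = line.split()
    if fields and fields[0] == "C":
        points[fields[1]] = (float(fields[2]), float(fields[3]))

print(points["1502"])   # (10071156.665, 3112812.778)
```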

 
Posted : 09/01/2017 8:51 am
(@kent-mcmillan)
Posts: 11419
Topic starter
 

paden cash, post: 408188, member: 20 wrote: I applaud your efforts, Kent. You've put some serious thought into "slapping down a 21st-century scale" and coming up with a good SWAG. Your geometry is sound but still (out of your control) dependent upon an unknown image. It appears as though you were lucky enough to find enough fixed positions within a single frame.

The way that you can test the results, though, is by computing the positions of other features in the imagery (not used in the solution of the camera station and scaled focal length of the lens) and comparing the derived positions to their surveyed coordinates. In the example on West 6th Street in Austin, there are two objects, a Pine Tree and a Sign Standard, that appear in two different frames for which just that can be done.
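
The check itself is just the intersection of two azimuth rays from the solved camera stations. A sketch with hypothetical stations and azimuths (not the West 6th Street values):

```python
import numpy as np

def intersect(sta1, az1_deg, sta2, az2_deg):
    """Intersect two azimuth rays in the (N, E) plane, azimuths from north."""
    a1, a2 = np.radians(az1_deg), np.radians(az2_deg)
    # Solve sta1 + t1*d1 = sta2 + t2*d2 for the ray parameters t1, t2.
    A = np.array([[np.cos(a1), -np.cos(a2)],
                  [np.sin(a1), -np.sin(a2)]])
    b = np.subtract(sta2, sta1)
    t1, _ = np.linalg.solve(A, b)
    return np.add(sta1, t1 * np.array([np.cos(a1), np.sin(a1)]))

# Two solved camera stations and their derived azimuths to one feature.
pine = intersect((0.0, 0.0), 14.2, (35.0, -120.0), 58.7)
print(f"derived position  N={pine[0]:.2f}  E={pine[1]:.2f}")
```

Comparing the derived position against the feature's surveyed coordinates gives the end-to-end check described above.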

 
Posted : 09/01/2017 8:54 am
(@imaudigger)
Posts: 2958
Registered
 

Sounds like it's purely an office exercise at this point in time.
I'd want to locate several (existing) adjacent objects and verify the method in the field.

Oops, sounds like you are already heading down that path...

 
Posted : 09/01/2017 8:56 am
(@kent-mcmillan)
Posts: 11419
Topic starter
 

imaudigger, post: 408193, member: 7286 wrote: Sounds like it's purely an office exercise at this point in time.
I'd want to locate several (existing) adjacent objects and verify the method in the field.

Oops, sounds like you are already heading down that path...

The original exercise involved locating an object that had been removed, but the statistics of the least squares adjustment that solves the camera station coordinates provide a measure of the random errors inherent in the analysis.

 
Posted : 09/01/2017 8:59 am
(@dave-karoly)
Posts: 12001
 

Kent McMillan, post: 408194, member: 3 wrote: The original exercise involved locating an object that had been removed, but the statistics of the least squares adjustment that solves the camera station coordinates provide a measure of the random errors inherent in the analysis.

I assume you are getting directions to the desired (but missing) object?

Wouldn't Star*Net give you error ellipses for the solved coordinates of the missing object?

 
Posted : 09/01/2017 11:30 am