Two weeks ago, while home for Thanksgiving in Santa Barbara, I took a few extensive panoramic High Dynamic Range (HDR) photos in some very scenic spots. This picture is some 28 stills combined, and it was barely possible for my computer to render it into a regular panorama (I had to convert the RAW files to JPEGs and shrink them to a quarter of their size), let alone a proper HDR. Eventually it'll be a weekend project to do the process right, which will demand some play with a few different programs and a whole lot of render time.
And what will be the result? Another picture, pretty similar to this one.
So why bother? No real reason. I find it interesting, despite not having a solid argument for why HDRs actually matter. They're not much more than interesting concepts and pretty impractical for any normal use. So why do I do it? Because the historical significance of photography absolutely astounds me, and it feels like technology is finally catching up to making this practical.
First, we have the HDR factor. Right now, the color fidelity and dynamic range of a camera's sensor are limited, so taking two extra exposures at +2 and -2 full stops and combining them produces a picture with far more range: one that sees fully into the shadows and highlights, so we're not losing any data to the sensor's limitations.
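If you want to try this yourself, here's a minimal sketch of how a -2/0/+2 bracket can be merged with OpenCV in Python; the filenames and shutter speeds are placeholders for whatever your camera actually recorded. (Fittingly, the merge algorithm OpenCV ships is based on Paul Debevec's research, and he comes up again further down.)

```python
# A minimal sketch of merging a -2 / 0 / +2 EV bracket with OpenCV.
# Filenames and exposure times are placeholders for illustration.
import cv2
import numpy as np

files = ["bracket_minus2.jpg", "bracket_0.jpg", "bracket_plus2.jpg"]
times = np.array([1/250, 1/60, 1/15], dtype=np.float32)  # shutter speeds in seconds

images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge into a single HDR radiance map.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times, response)

# Tone-map the radiance map back down to a displayable 8-bit image.
tonemap = cv2.createTonemap(gamma=2.2)
ldr = tonemap.process(hdr)
cv2.imwrite("merged_hdr.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```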
Second, as a panorama, I'm able to fit a larger field of view into a single image. Taking a massive picture with a solid 50mm lens at a very low aperture captures a lot of detail, especially when you intend the final image to be printed larger-than-life (a bit of a stretch in this instance, because 50mm isn't that long and I'm already shooting something larger-than-life, but my 70-300mm zoom has worn with age). The dream would be to have the tools used by the Gigapixel project: a servo tripod that automatically fires off exposures at a calculated overlap, eliminating any over- or under-compensation across the whole image.
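The alignment and blending itself is something software can now do end to end. Here's a minimal sketch using OpenCV's high-level Stitcher in Python; the folder path is a placeholder, and this is how recent OpenCV builds expose the API, so your version may differ.

```python
# A minimal sketch: stitch a folder of overlapping stills into one panorama
# with OpenCV's high-level Stitcher. The path is a placeholder.
import cv2
import glob

frames = [cv2.imread(p) for p in sorted(glob.glob("pano_stills/*.jpg"))]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("Stitching failed with status", status)
```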
xRez Studio 2009 Yosemite Reel from xRez Studio on Vimeo.
Not only does that video show a photo project that captures a historic image at an exact time and place, it also touches on how the imagery was used for geologic study and safety, giving the work value for ongoing research as well as historical significance.
I keep touching on the historical aspect because when my grandmother passed away a few years ago, I was given her boxes of prints and negatives showing Santa Barbara back in the 1940s. How amazing would it be if we knew each photo's GPS coordinates and could take a photo of that exact spot now, comparing 60+ years of change? The technology is finally catching up: look at this example of Microsoft's Photosynth, which composites a photo taken during Harry Houdini's stunt at the Mass Ave. Bridge onto the bridge as it appears now.
This is what Microsoft, Google, and iPhoto have been competing head-to-head over: trying to establish themselves as the new standard in photo organization. But there's a newer field where I'm certain the real competition lies, and it isn't fully public yet.
Google and Microsoft both have vast amounts of satellite maps and street views freely available online to show us the world from above and on the ground. Sure, satellite maps give us an outstanding layout of where things are, but Street View never really seemed that practical in comparison. Over 10 years ago, Dr. Paul Debevec wrote his doctoral thesis on the principles of photogrammetry: rendering 3D maps of places from just a handful of photos. In school, he flew a kite around the UC Berkeley campus and took around 20 shots of the bell tower, which a computer modeled into a 3D shape and then projected the photos onto, rendering a simple photo-realistic digital model (the technique was used in "The Matrix" and "Fight Club" in 1999 to quickly and affordably create virtual backgrounds). As an intern at Mahalo, I got to meet Dr. Debevec and made a Mahalo Daily about the process.
So what do you need to make a 3D model of something? Nothing more than a handful of photos of the object taken from different locations, which a computer can align, model, and paste the imagery onto. That's exactly the kind of imagery Street View has been collecting for three years.
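To make that "align" step less abstract, here's a minimal sketch of its first move: match features between two photos of the same object and recover how the two camera positions relate. It's only a sketch; the filenames and the assumed focal length are placeholders, and a real pipeline would repeat this across every photo pair and then triangulate the matched points into an actual model.

```python
# A minimal sketch of the "alignment" step: match features between two photos
# of the same object and recover the relative camera pose. Filenames and the
# focal length are placeholder assumptions.
import cv2
import numpy as np

img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and match local features in both views.
orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Assume rough camera intrinsics (focal length in pixels, principal point at center).
f = 1200.0
h, w = img1.shape
K = np.array([[f, 0, w / 2], [0, f, h / 2], [0, 0, 1]])

# Recover the rotation and translation between the two camera positions;
# triangulating the matched points from here gives a sparse 3D model.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("Relative rotation:\n", R, "\nRelative translation direction:", t.ravel())
```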
But for the most part, Google and Microsoft haven't been dabbling in 3D. A paid version of Google Earth offers a few buildings in major US cities rendered in 3D, but not much more. That is, until last week, when Google announced its "Model Your Town" competition.
Though the competition is about manually generating 3D cityscapes, I think this is a leg up toward 3D modeling on a massive, massive scale, and a move to get the public excited about the prospect. Modeling like this is expensive and very processor-heavy, so any assistance they can get in the process is valuable (hugely valuable, in fact, if the only prize is your work being included in the software: 100% free labor). The reasons they haven't done it until now were probably 1) lack of demand, 2) lack of resources and reference material, and 3) the bottom line. Is doing this a case of Google "organizing the world's information," or of offering the next generation of sellable product with nearly unlimited uses (licensing the software to news programming alone would make up the cost)?
I'm certain this is at least part of what Google has up its sleeve next. Microsoft's Photosynth software has been doing this for three years, but in a less visible, less practical way, so I'm also certain they could roll this out quickly. And what we'll have then is a virtual map of America as it looked between 2007 and 2010. And every photo taken thereafter by anyone who logs the date and GPS location (and puts it online) will update that map, logging its full history from then on.
>>UPDATE<< Again, I'm a week late in addressing this, but I think with Google Goggles, this TOTALLY makes sense from their perspective.
Take, for example, this Street View of Bagel Nosh in Santa Monica. If Google was only able to capture these two frames of the restaurant, that's all well and good while I'm in Street View.
But that won't help at all if I'm on the sidewalk and take a picture of the building from any other angle. That's where 3D modeling comes in: Google would know the full extent of what the building looks like, so it wouldn't matter where I take the Google Goggles photo from; they'd be able to figure it out. Show a computer a picture of a triangle and a picture of a circle and it sees two different things; tell it they're the same object from two angles and it knows it's looking at a cone.
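That "figure out where the photo was taken from" step is a well-understood problem once a 3D model exists: match points on the model to points in the new photo and solve for the camera's pose. Here's a minimal sketch in Python with OpenCV; every number in it (the model corners, the pixel locations, the phone-camera intrinsics) is hypothetical, just to show the shape of the computation.

```python
# A minimal sketch: given known 3D points on a building model and their
# matching 2D locations in a new photo (taken from any angle), solve for
# where the camera was. All values here are hypothetical.
import cv2
import numpy as np

# 3D coordinates of recognizable corners on the building model (meters, arbitrary origin).
model_points = np.array([
    [0.0, 0.0, 0.0],
    [8.0, 0.0, 0.0],
    [8.0, 4.0, 0.0],
    [0.0, 4.0, 0.0],
    [0.0, 0.0, -6.0],
    [8.0, 0.0, -6.0],
], dtype=np.float64)

# Where those same corners were detected in the user's photo (pixel coordinates).
image_points = np.array([
    [210.0, 640.0], [980.0, 655.0], [990.0, 320.0],
    [205.0, 300.0], [120.0, 700.0], [1080.0, 720.0],
], dtype=np.float64)

# Assumed intrinsics for a phone camera.
K = np.array([[1100.0, 0.0, 640.0],
              [0.0, 1100.0, 480.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec, inliers = cv2.solvePnPRansac(model_points, image_points, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)
    camera_position = -R.T @ tvec  # camera location in the building's coordinate frame
    print("The photo was taken from:", camera_position.ravel())
```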
That's the financial incentive for photogrammetry. Whether it's financially sound enough to pursue is for Google to decide.