One of the most significant trends shaping our future is the redefinition of what is "real": specifically, the bringing of everything and anything into heightened and full three-dimensionality (or more), definition, and fidelity. I refer to "real" in the sense that we believe it "exists," and note that we are increasingly:
- Losing the ability to distinguish between what is real and what is unreal.
- Losing the ability to distinguish between originals and copies, real and synthetic, real and virtual, here and not here.
This trend includes such things as the so-called 3D web, virtual worlds, the Internet of Things, 3D scanning and printing, and 3D human/computer interactions and interfaces, to name but a few. Will there be any such thing as "unreal" in the future?
Since I am fascinated with, and frankly fixated on, this topic, I'm going to develop it as a theme. Perhaps we'll call it "Oh Really?" and pursue it much further over a long period of time here at Off Course - On Target (OCOT). I've previously written a few articles on this subject, such as Coming Soon to a Desktop Near You: Massive Amounts of 3D for the Masses, and will cover this area more. We'll also get into some of the many other ways in which our interfaces and interactions with technology are changing (let's hope!) and becoming much more "natural" and "real".
For today, I want to briefly bring your attention to some exciting new developments coming out of Adobe Systems' R&D work on new 3D camera lenses and the software they've developed for processing the resultant images. As you'll see, this technology opens up whole new possibilities, not just for photography, but for some amazing new ways of "playing with reality" by letting you go back into previously photographed scenes and change the images. Adobe is referring to this as "computational photography," and as with many of the stories we cover here at OCOT, this one is interesting not only for the specific example, but especially for the larger topics and issues it reveals.
Here's the story, and it comes, most appropriately, from Dave Story, Vice President of Digital Imaging Product Development at Adobe, pictured here (thanks to Audioblog.fr) holding the original lens.
For a quick overview of Adobe's research, you may want to start by checking out "Adobe shows off 3D camera tech" on Crave. The story originates from a recent demo Adobe gave in France showing its initial R&D work with a prototype camera lens consisting of 19 separate lens elements. The elements provide multiple views of the scene at slightly different angles, producing what Dave described as being a bit like what a multi-faceted insect's eye would see.
Fortunately for us, Luc from Audioblog.fr was at the demo with his video camera and has put up this 10-minute video clip. When you first get to his site, you'll also see that we still have a way to go with machine translation (in this case by Google), but bear with it, and be sure to check out the video at the end to get the best sense of what "computational photography" might lead to.
Of course, the serious fun begins once the hardware and software take over and use these multiple images and angles to enable some very new and different possibilities. For example, they can now dramatically extend what you can do with a "virtual brush" when working on photo images. In the video (and this screenshot from it) you can see Dave Story use what he calls a "focus/unfocus brush" to go into a photo and shift the focus from one statue to another. He goes on to suggest that they could also create a "3D healing brush" that would let you, for example, remove an obstruction from the original photo.
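To make the refocusing idea a bit more concrete: one well-known way to refocus after the fact from multiple slightly offset views is "shift and sum," where each view is shifted in proportion to its baseline and the results averaged, so objects at one depth line up (sharp) while others smear (blurred). Adobe has not published its algorithm, so this is only a minimal sketch of the general technique; the function name, the integer-pixel shifting, and the `alpha` focus parameter are all my own illustrative assumptions.

```python
import numpy as np

def refocus(views, offsets, alpha):
    """Synthetic refocus by shift-and-sum (illustrative sketch only).

    views:   list of HxW arrays, one per lens element
    offsets: list of (dx, dy) integer baselines, one per view
    alpha:   focus parameter; 0 keeps the original focal plane
    """
    acc = np.zeros_like(views[0], dtype=float)
    for view, (dx, dy) in zip(views, offsets):
        # Shift each sub-aperture view in proportion to its baseline;
        # integer np.roll keeps this sketch dependency-free.
        shifted = np.roll(view, (round(alpha * dy), round(alpha * dx)),
                          axis=(0, 1))
        acc += shifted
    # Average so objects at the chosen depth align and stay sharp
    return acc / len(views)
```

A real system would use sub-pixel interpolation rather than integer rolls, but the core idea (align the views for one depth, average everything) is the same.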
You will also see how they are able to move the "camera" after the photo has been taken. The movement in this case is very slight, but this idea of capturing moments and then going back and manipulating them AFTER the FACT is one of those possibilities that is equally, and concurrently, frightening and exciting. Something very powerful is going on here.
Take this out quite a bit further and consider the potential when we have a full set of 3D data for every single pixel in digital images! Imagine the manipulation you could do to both still and moving images; think about how you could go back into a scene or a "captured moment" and look at things from different angles, perspectives, and focus. We've already been seeing advances in video camera work on movies and in televised sporting events, where they are able to move the camera through a full 360 degrees and all six degrees of freedom, but now imagine YOU being able to move and manipulate the imagery on your own, AND AFTER the fact!
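Why does per-pixel 3D data let you move the viewpoint after the fact? Because once each pixel carries a depth, you know how far it should slide when the virtual camera moves: near objects shift a lot, distant ones barely at all (parallax). Here is a deliberately crude sketch of that idea; the function name, the simple `baseline / depth` disparity model, and leaving unfilled "holes" as zeros are all my own simplifying assumptions, not anything Adobe has described.

```python
import numpy as np

def shift_viewpoint(image, depth, baseline):
    """Crude novel-view sketch using per-pixel depth (illustration only).

    image:    HxW array of intensities
    depth:    HxW array of positive depths (same units as baseline)
    baseline: how far the virtual camera slides to the right
    """
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            # Parallax: nearer pixels (small depth) move farther
            disparity = int(round(baseline / depth[y, x]))
            nx = x - disparity
            if 0 <= nx < w:
                # Forward-warp the pixel; uncovered areas stay 0 (holes)
                out[y, nx] = image[y, x]
    return out
```

The "holes" this leaves behind are exactly why real systems need multiple captured views: the other images fill in what the original camera position couldn't see.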
The Future is Already Here
Or consider the uproar that has already been happening around the 3D "maps" that Google, Microsoft, and others are creating by having 3D mapping trucks drive through an area (large cities for now), taking a complete set of digital and laser images of the entire area. These images are then stitched together, so you can go from a spot on a map to "being there," enabling you to look around from that spot and see a full 360-degree surround of what you'd see if you were "really" there. The concern, by the way, is over privacy (or the lack thereof) and over what gets captured in all these images, which are constantly being updated.
This is another one of those things you can really only learn and appreciate by experiencing it, so if you have not already done so, try this (I'll use Google for this example; Microsoft and Yahoo offer similar features):
- Go to Google Maps.
- Search for "10 Market Street, San Francisco" (or anywhere in San Francisco, for that matter) and click the listing in the left window.
- Click on the "Street View" button at the top of the map area.
- Move the "little orange person" icon that shows up on the map to some intersection on the map.
- Move your cursor around in the street-level photo image that appears, to look around.
- Move your orange person icon up or down the street to look around there.
Scary? Exciting? Does it make you think about more possibilities, if this is just rev 1.0?? YES!
And we think we have problems now (and we do) with not being able to tell the difference between an "original" photo and one that has been altered! Just imagine the degree to which this technology will compound those problems! Apropos of our larger theme of full 3D reality and the blurring of the distinction between what is real and what is not, you can easily see how this recent example of "computational photography" takes us in that direction, dramatically transforming what were previously just 2D photos, maps, and images.
"Computational photography is the future of photography," Story said. "The more things we can do that are impossible to do in a camera, the more powerful people's ability to express themselves becomes."
Quite true, and so once again, the great question that arises from such exciting new technology is what will you, and we collectively, DO with these newfound capabilities? And what might we want to agree NOT to do? What uses can you think of for this? What problems could you now solve with it?
I hope you will enjoy our foray into the world of 3D and the new reality, which of course is really just a matter of technology, and ourselves, finally catching up to the world as it has always been: VERY real and very multidimensional. Oh Really?