Lightfield Cameras

CS 194-26: Image Manipulation & Computational Photography // Project 5

Emily Tsai

Depth Refocusing

When we translate a camera to photograph a scene while keeping the direction of the optical axis unchanged, objects farthest from the camera shift far less in the resulting images than closer objects do. Intuitively: suppose someone is standing directly behind a friend who is sitting on Memorial Glade, with the Campanile in the background also directly behind the friend. From this spot, both the friend and the Campanile are in the viewer's center of vision. If the viewer then takes two or three steps to the right (or left) without changing the direction their eyes are facing, their friend ends up almost completely to their left (or right) and nearly out of their visual field, but the Campanile remains pretty much in the center of their vision.

Thus, we can take many photos of a scene from slightly different positions on a grid perpendicular to the optical axis, then average them together to get a photo in which far-away objects are sharp and close-up objects are blurred out (because they moved more across the individual photos). By shifting the images so that they line up at different points before averaging, we can change the depth at which the result is focused. To do this, we multiply each image's original offset (u, v) from the center of the grid by a constant C to get a new shift (x, y), shift the image by (x, y), and then average the shifted images to obtain a photo focused at a different depth.
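The shift-and-average step above can be sketched as follows. This is a minimal sketch, not the project's actual code: it assumes each sub-aperture image comes with its (u, v) grid position, and it uses integer shifts via `np.roll` for simplicity (the function name `refocus` is illustrative):

```python
import numpy as np

def refocus(images, positions, C):
    """Shift-and-average refocusing over a lightfield grid.

    images    : list of HxW(x3) float arrays (sub-aperture views)
    positions : list of (u, v) grid coordinates, one per image
    C         : depth parameter; each image is shifted by
                C * (u - u_center, v - v_center) before averaging
    """
    positions = np.asarray(positions, dtype=float)
    center = positions.mean(axis=0)  # (u, v) of the grid center
    acc = np.zeros_like(images[0], dtype=float)
    for img, (u, v) in zip(images, positions):
        du = int(round(C * (u - center[0])))
        dv = int(round(C * (v - center[1])))
        # np.roll keeps the array size; the wrap-around at the
        # edges is a minor artifact for small shifts
        acc += np.roll(img, (du, dv), axis=(0, 1))
    return acc / len(images)
```

Sweeping C over a range of values and saving each result produces the sequence of images focused at different depths shown below.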

cards

single cards photo
all cards images averaged to create depth
cards with depth refocusing

jewelry

jewelry with depth refocusing
all jewelry images averaged to create depth
single jewelry photo

panning

Additionally, if we modify one line of our code so that we account for the offset at which each photo was taken from the "center image", and then multiply by C and shift accordingly, we can create images whose focus changes with a "panning" quality, as if we were moving the camera forwards and backwards.

Aperture Adjustment

Changing the aperture size changes the depth of field of an image. Large apertures decrease the focus range, so less of the image is in focus; smaller apertures, on the other hand, increase the focus range, so more of the image is in focus. We can achieve similar results by varying the number of photos we average together from a set sampled on a grid perpendicular to the optical axis. Averaging more images imitates a large aperture setting, and averaging fewer images mimics a small aperture setting.
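One way to pick "more" or "fewer" images is to keep only the views within some radius of the grid center, so the radius plays the role of the aperture size. A minimal sketch under the same assumptions as before (the name `simulate_aperture` and the radius-based selection rule are illustrative, not necessarily the project's exact method):

```python
import numpy as np

def simulate_aperture(images, positions, radius):
    """Average only the sub-aperture views within `radius` of the
    grid center; a larger radius mimics a larger aperture."""
    positions = np.asarray(positions, dtype=float)
    center = positions.mean(axis=0)
    dists = np.linalg.norm(positions - center, axis=1)
    chosen = [img for img, d in zip(images, dists) if d <= radius]
    return np.mean(chosen, axis=0)
```

Sweeping the radius from small to large then yields the varying-aperture sequences below.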

stone

Stone with small aperture (fewer images)
Stone with large aperture (more images)
Stone with varying aperture adjustment

jewelry

Jewelry with small aperture
Jewelry with large aperture
Jewelry with varying aperture adjustment

movement

Again, if we tweak one line of our code so that we first account for each image's original offset from the center image, we can add another element to the aperture adjustment: a movement-like quality of the object.

Conclusion

I really enjoyed learning the theory and finer details behind how aperture and depth of field work, and putting that knowledge to work with these lightfield images!