Project 5: Lightfield Camera

Jonathan Chu

Background

In this project, we took a set of images captured over a plane orthogonal to the optical axis and shifted and averaged them to achieve camera effects such as depth refocusing and aperture adjustment. The image set comes from the Stanford Light Field Archive.

Depth Refocusing

For depth refocusing, we shift each rectified image toward the center image and average the shifted results. With a plain average (no shift), the farthest objects come out in focus while the foreground is blurry: as the camera position changes, nearby objects move more between views than distant ones, so averaging blurs the foreground while the background stays relatively sharp. Each image is taken at a different position on a 17 by 17 grid, so we compute the offsets dx and dy that map each image back to the center coordinate (8, 8) and scale that shift by a factor c between 0.0 and 3.0; varying c moves the plane of focus through the scene. Collecting the results over a range of c values allowed us to create the following GIF.

Figure: refocusing results at c = 0.0, c = 1.25, and c = 2.5, plus the animated depth refocusing GIF.
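
For concreteness, here is a minimal sketch of the refocusing step in Python with NumPy and scipy.ndimage. The refocus helper, the (row, col) position list, and the sign convention for the shifts are assumptions for illustration, not the exact code used in this project.

    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def refocus(images, positions, c, center=(8, 8)):
        """Shift each sub-aperture view toward the grid center and average.

        images    : list of H x W x 3 float arrays (rectified light field views)
        positions : list of (row, col) coordinates on the 17 x 17 camera grid
        c         : scale factor that selects which depth ends up in focus
        """
        acc = np.zeros_like(images[0], dtype=np.float64)
        for img, (row, col) in zip(images, positions):
            # Offset that aligns this view with the center view, scaled by c.
            # Depending on the dataset's coordinate convention, the signs of
            # dy/dx may need to be flipped.
            dy = c * (center[0] - row)
            dx = c * (center[1] - col)
            # Shift only the spatial axes; leave the color channels alone.
            acc += nd_shift(img, shift=(dy, dx, 0), order=1, mode="nearest")
        return acc / len(images)

The GIF is then just the sequence of refocus outputs for c swept from 0.0 to 3.0, saved as frames.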

Aperture Adjustment

To simulate different aperture settings, we average a subset of images around the center image at (8, 8). We define a radius and include only the images whose shift amount (grid offset from the center) falls within that radius. With a small radius, few images are sampled, which simulates a small aperture (the background stays fairly sharp); with a larger radius, more images are sampled, which simulates a larger aperture (the background becomes blurrier).

Figure: aperture simulation results at radius = 0, radius = 4, and radius = 8, plus the animated aperture GIF.
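
Assuming the same images/positions layout as in the refocusing sketch above, the aperture simulation could look like the following; the Euclidean-distance radius test is my reading of the description rather than the exact code.

    import numpy as np

    def simulate_aperture(images, positions, radius, center=(8, 8)):
        """Average only the views whose grid offset from the center is within radius.

        radius = 0 keeps just the center image (smallest aperture); larger
        radii include more views and mimic a wider aperture.
        """
        acc = np.zeros_like(images[0], dtype=np.float64)
        count = 0
        for img, (row, col) in zip(images, positions):
            # Distance of this view from the center of the 17 x 17 grid,
            # measured in grid units.
            if np.hypot(row - center[0], col - center[1]) <= radius:
                acc += img
                count += 1
        return acc / count

Sweeping the radius from 0 up to 8 and saving each averaged frame gives the aperture GIF.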

Summary

This project taught me that you can simulate interesting photography effects with only a few simple operations on a set of images. These effects are possible because of how light field images are captured: many views of the same scene taken from slightly different positions on a plane.