CS 194-26 Project 5: Lightfield Camera

Jennifer Liu (cs194-26-aag)

Overview: In this project, I simulated depth refocusing and aperture adjustment using lightfield data from the Stanford Light Field Archive. Each dataset contains multiple images captured from camera positions on a regularly spaced grid. The datasets I chose for this project depict a chess game and Lego soldiers.
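Below is a minimal loading sketch for such a dataset. It assumes the rectified images sit in one directory and that each filename encodes the camera's grid position as underscore-separated fields (as the Stanford archive's rectified sets do); the exact field order in the parsing is an assumption and may need adjusting for a particular dataset.

```python
import glob
import os
import numpy as np
import skimage.io as skio

def load_lightfield(directory):
    """Load all sub-aperture images and their camera grid coordinates.

    ASSUMPTION: filenames look like 'out_RR_CC_<v>_<u>_.png', so the two
    floating-point fields hold the camera position; adjust the indices
    below if the dataset is named differently.
    """
    paths = sorted(glob.glob(os.path.join(directory, '*.png')))
    images, coords = [], []
    for p in paths:
        fields = os.path.basename(p).split('_')
        v, u = float(fields[3]), float(fields[4])   # camera position on the grid
        images.append(skio.imread(p) / 255.0)       # normalize to [0, 1]
        coords.append((u, v))
    return np.stack(images), np.array(coords)       # (N, H, W, 3) and (N, 2)
```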

Depth Refocusing: To simulate focusing the scene at different depths, I first computed a center by averaging the (x, y) grid coordinates of all the images in the dataset. I then shifted each image by (center_x - image_x, center_y - image_y) * d, where d is a scaling factor that controls the depth at which the scene appears in focus, and averaged the shifted images. When d = 0, no shifting occurs and the background appears sharper: objects far from the camera barely change position across the images, while objects in the foreground move considerably from view to view and therefore blur when averaged. Increasing d increases the shift applied to each image, which aligns nearer objects across the views and blurs the background. In the gif below, d ranges from 0 to 0.5.
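A sketch of this shift-and-average step follows. It assumes the `images` (N, H, W, 3) and `coords` (N, 2) arrays from the loading sketch above, and uses scipy's sub-pixel shift; the sign convention of the coordinates (and hence of d) may need flipping for a given dataset.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(images, coords, d):
    """Shift every sub-aperture image toward the grid center by d times its
    offset from the center, then average. Larger d focuses nearer objects."""
    center = coords.mean(axis=0)                       # average (x, y) grid coordinate
    result = np.zeros_like(images[0], dtype=float)
    for img, (x, y) in zip(images, coords):
        dx, dy = (center[0] - x) * d, (center[1] - y) * d
        # shift rows by dy and columns by dx; the channel axis is left untouched
        result += nd_shift(img, (dy, dx, 0), order=1, mode='nearest')
    return result / len(images)
```

Sweeping d from 0 to 0.5 and saving each refocus(images, coords, d) frame produces the animation shown below.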

Aperture Adjustment: To generate images that correspond to different apertures while focusing on the same point, I calculated the center in the same way as in the previous part. I then fixed the depth at d = 0.125 (chosen somewhat arbitrarily), specified a radius, and shifted and averaged only the images whose grid coordinates lie within that radius of the center. The smaller the radius, the fewer images contribute to the average, and the result resembles a photograph taken with a small aperture: most of the scene stays sharp. Increasing the radius includes more images in the average, mimicking a larger aperture: the region focused at depth d = 0.125 remains sharp while everything away from that depth becomes blurrier. In the gif below, I vary the radius between 0 and 50.
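A sketch of the aperture simulation under the same assumptions, reusing the refocus function and the `images`/`coords` arrays from the sketches above; the guard that always keeps the closest image is an added convenience, not part of the original description.

```python
import numpy as np

def adjust_aperture(images, coords, radius, d=0.125):
    """Average only the sub-aperture images within `radius` of the grid
    center, each refocused at depth d. A larger radius mimics a larger
    synthetic aperture and hence a shallower depth of field."""
    center = coords.mean(axis=0)
    dists = np.linalg.norm(coords - center, axis=1)
    # Always keep at least the image nearest the center so radius = 0 still works.
    keep = dists <= max(radius, dists.min())
    return refocus(images[keep], coords[keep], d)
```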