Project 5: Lightfield Camera

CS 194: Computational Photography

Marisa Wong (cs-adb)

Overview

In this project, we use lightfield data to reconstruct images that reproduce complex photographic effects, namely depth refocusing and aperture adjustment, using only simple operations such as shifting and averaging.

Depth Refocusing

To reconstruct lightfield effects such as depth refocusing and aperture adjustment, I used data from the Stanford Light Field Archive. Each dataset is a 17x17 grid of images, with each image's grid position (u, v) denoted in its file name. Taking the (8, 8) image as the center of the grid, I computed each image's offset from the center image, then shifted each image horizontally and vertically by u * c and v * c respectively, where c is a constant that scales the offset, and averaged the shifted images together. I used c values from -1 to 3 with a step size of 0.1. A positive c value corresponds to the front of the scene being in focus, while a negative c value corresponds to the back being in focus. A sketch of this shift-and-average step appears below.
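Here is a minimal sketch of the refocusing step, assuming the sub-aperture views have already been loaded as float arrays and their (u, v) positions parsed from the file names. The function name, the argument layout, and the sign convention of the shift are my own illustrative choices, not necessarily the exact ones in my code.

    import numpy as np
    from scipy.ndimage import shift

    def refocus(images, coords, c, center=(8, 8)):
        # images: list of H x W x 3 float arrays (the 17x17 sub-aperture views)
        # coords: matching list of (u, v) grid positions from the file names
        # c:      scale factor controlling which depth plane ends up in focus
        acc = np.zeros_like(images[0], dtype=np.float64)
        for img, (u, v) in zip(images, coords):
            du, dv = u - center[0], v - center[1]
            # Shift the two spatial axes by c times the offset from the
            # center image; the color axis is left untouched.  Which axis
            # pairs with u versus v is a convention and may need swapping.
            acc += shift(img, (c * dv, c * du, 0), order=1, mode='nearest')
        return acc / len(images)

Sweeping c from -1 to 3 and saving each averaged result produces the frames of the refocusing animations linked below.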

Refocusing results at c = -0.1, c = 1.2, and c = 2.6.
Click here to see chessboard depth refocusing.
Click here to see amethyst depth refocusing.
Aperture Adjustment

I used the same Stanford Light Field Archive data as in the depth refocusing part of the project. By setting a radius around the (8, 8) center image and averaging only the images within that radius, we can imitate different aperture sizes. Since the grid is 17x17, I took radius values from 0 to 8, with (8, 8) as the center. Smaller radius values mimic smaller apertures, while larger radius values mimic larger apertures, as sketched below.
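A minimal sketch of this selection step, reusing the hypothetical images/coords inputs from the refocusing sketch above. Treating the radius as a Euclidean distance in the grid is an assumption; a per-axis (square) window would also match the description.

    import numpy as np

    def adjust_aperture(images, coords, radius, center=(8, 8)):
        # Keep only the sub-aperture views whose grid position lies within
        # `radius` of the center image, then average them.  To hold a chosen
        # depth plane in focus, the views can first be shifted with a fixed
        # c as in refocus() above.
        selected = [img for img, (u, v) in zip(images, coords)
                    if (u - center[0]) ** 2 + (v - center[1]) ** 2 <= radius ** 2]
        return np.mean(selected, axis=0)

With radius = 0 only the center image remains, mimicking the smallest, pinhole-like aperture; radius = 8 includes the full grid and mimics the largest aperture.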

Click here to see chessboard aperture adjustment.
Click here to see amethyst aperture adjustment.
Bells and Whistles: Using Real Data

I created a 3x3 grid mini dataset of images I took on campus. Here are the images arranged in the grid.

Campus Depth Refocusing
Campus Aperture Adjustment

We see that depth refocusing and aperture adjustment did not work very well with the images I snapped. The Stanford Light Field images were captured with precise, regular camera spacing, so that shifting and averaging them brings exactly one depth plane into sharp focus. In addition, the Stanford dataset has 289 images per scene while I only had 9, and averaging 9 images cannot smooth out misalignments the way averaging 289 images can. Although it may not look like the depth refocusing does anything, the changes are there; they are just very small, because a 3x3 grid spans a much narrower range of viewpoints than a 17x17 grid.

Summary
I thought it was really cool that we can use simple operations such as shifting and averaging to produce complex changes in focus depth and aperture size after the pictures have already been taken.