CS194-26: Image Manipulation and Computational Photography

Programming Project #5: Lightfield Camera

Allyson Koo

Project Overview

This assignment used data from the Stanford Light Field Archive: sets of photos taken by cameras arranged on a 17x17 grid. Because the images are captured over a plane orthogonal to the optical axis, they can be combined to simulate effects such as depth refocusing and aperture adjustment.

Implementation

Part 1: Depth Refocusing

This portion of the project involved manipulating the light field data to create images that appear to be focused at different depths. Each image in the dataset is named with the relevant information: its (x, y) coordinates in the 17x17 camera grid, along with the (u, v) coordinates where the corresponding light rays intersect the camera lens. These images, each representing a different bundle of light rays, can be combined into new photos focused at different depths. To do this, we select the middle image, taken by the camera at position (8, 8) in the grid, and align every other image to it: for each image we compute the difference between its (u, v) position and that of the center image, shift the image by alpha times that difference, and then average all of the shifted images together. Varying alpha from -0.6 to 0.2 moves the plane of focus and produced the following result.
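A minimal sketch of this shift-and-average step is given below. It assumes the sub-aperture images have already been loaded into a dictionary `images` keyed by their (x, y) grid coordinates, with the parsed (u, v) values in a matching dictionary `uv`; these names, and the filename parsing they imply, are hypothetical.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(images, uv, alpha, center=(8, 8)):
    """Shift every sub-aperture image toward the center image by
    alpha * (its (u, v) offset) and average the results."""
    u_c, v_c = uv[center]
    acc = np.zeros(images[center].shape, dtype=np.float64)
    for key, img in images.items():
        u, v = uv[key]
        du, dv = alpha * (u - u_c), alpha * (v - v_c)
        # The last axis is the color channel, so it is not shifted.
        # The sign convention for (du, dv) depends on how the archive
        # defines (u, v) and may need to be flipped.
        acc += nd_shift(img.astype(np.float64), (dv, du, 0),
                        order=1, mode='nearest')
    return acc / len(images)
```

Sweeping alpha over a range of values (here, -0.6 to 0.2) and saving each resulting frame produces the animation below.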

GIF of chess board focused at different depths


Part 2: Aperture Adjustment

The purpose of this portion of the project was to create images that appear to have been taken with different aperture sizes. This was accomplished by choosing a point to focus on (I chose a point towards the back of the board) and then averaging together only the images whose (x, y) positions in the 17x17 grid lie within a certain radius of the center camera. I tested radii of 1, 3, 6, and 9 to produce images that remain focused on the same point but appear to have been taken with different apertures. This mimics a physical aperture: since each image in the light field dataset represents the light rays passing through one position on the camera plane, averaging only the images within a fixed radius effectively blocks out the remaining rays, just as a smaller aperture admits less light into a camera. This method produced the following result.
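Below is a minimal sketch of this aperture simulation, reusing the hypothetical `images` and `uv` dictionaries and the same shift logic from the refocusing sketch; the fixed `alpha` would be chosen so that the focus falls on the desired point.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def adjust_aperture(images, uv, radius, alpha, center=(8, 8)):
    """Average only the images whose grid position lies within `radius`
    of the center camera, after applying the same alpha-based shift
    used for refocusing."""
    u_c, v_c = uv[center]
    acc, count = 0.0, 0
    for (x, y), img in images.items():
        # Skip cameras outside the synthetic aperture.
        if np.hypot(x - center[0], y - center[1]) > radius:
            continue
        u, v = uv[(x, y)]
        du, dv = alpha * (u - u_c), alpha * (v - v_c)
        acc = acc + nd_shift(img.astype(np.float64), (dv, du, 0),
                             order=1, mode='nearest')
        count += 1
    return acc / count
```

A radius of 1 includes only the cameras immediately around the center and behaves like a small aperture, keeping most of the scene sharp, while a radius of 9 includes most of the grid and blurs everything away from the focal plane.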

Examples

GIF of chess board viewed through different apertures

Part 3: Summary

I learned quite a bit about light fields and how light influences photographs from this project. It was really fun to see how combining different light rays through relatively simple manipulations can produce fairly realistic and interesting results.