CS194-26 Project 5: Lightfield Camera

Josh Zeitsoff, cs194-26-abi

Overview

Using images from the Stanford Light Field Archive that were taken from different positions on a plane orthogonal to the optical axis, we can simulate camera operations such as depth refocusing and changing the aperture. Doing so requires shifting or subsampling the images to produce outputs that simulate focusing at different depths, or narrowing and widening the aperture of a camera.

Depth Refocusing

Our goal is to focus the image at different depths by shifting each image toward a chosen center image and then averaging all of the images in the dataset. Because nearer objects shift more than farther objects when the camera position changes, averaging without any shifting produces output that is blurry around near objects and sharper on far ones. To bring a chosen depth into focus, we shift each image toward the center image by alpha * (x_diff, y_diff), where x_diff is the difference in the x direction between the current image's camera position and the center image's, and likewise for y_diff. We experimented with alpha as a constant in the range [-1, 1], and each image below lists the alpha value used to create it. In practice, the useful range was [-0.6, 0.2]: outside this range the results came out very blurry, whereas values within it produced images with a single part of the scene clearly in focus.
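A minimal sketch of this shift-and-average step, assuming the sub-aperture images have already been loaded as float arrays along with the (u, v) camera coordinates parsed from the archive filenames (the `refocus` function name and the list-based input format are my own, not from the original code):

```python
import numpy as np
from scipy.ndimage import shift

def refocus(images, positions, alpha):
    """Average sub-aperture views after shifting each toward the center view.

    images:    list of HxWx3 float arrays from the 17x17 camera grid
    positions: list of (u, v) camera-plane coordinates, one per image
    alpha:     refocusing constant; varying it moves the focal plane
    """
    center = np.mean(positions, axis=0)  # roughly the (8, 8) center camera
    out = np.zeros_like(images[0], dtype=np.float64)
    for img, (u, v) in zip(images, positions):
        du, dv = center[0] - u, center[1] - v   # offset from the center view
        # shift each image toward the center view, scaled by alpha
        out += shift(img, (alpha * du, alpha * dv, 0))
    return out / len(images)
```

Sweeping `alpha` over [-0.6, 0.2] and saving each result gives the refocusing GIF shown below.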

alpha = -0.6

Snow

alpha = -0.2

Mountains

alpha = 0.2

Forest

GIF of alpha values in [-0.6, 0.2]: Forest

Aperture Adjustment

Given the same images from the Stanford Light Field Archive, our goal was to simulate the effect of a larger or smaller aperture. The aperture controls how much light reaches the sensor: a smaller aperture yields a deeper depth of field, while a larger aperture yields a shallower one. We can simulate this effect by treating each image from the 17x17 camera grid used to capture the Stanford Light Field Archive data as a sample of the light field. Starting from the center image, taken by the camera at the center of the grid (position (8, 8)), we average the surrounding images within a given radius. Smaller radii correspond to smaller apertures, and larger radii to larger ones. Below, the image generated with radius = 1 shows the background very much in focus, which we would expect from a small aperture. The image generated with radius = 8 shows only the near foreground in focus, with the background blurry, which we would expect from a wide aperture.
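The selection-and-average step can be sketched as follows, assuming the images are stored in a dict keyed by their (row, col) grid position; the square (Chebyshev) neighborhood used for "within a given radius" is one reasonable reading, and the function name is illustrative:

```python
import numpy as np

def adjust_aperture(images, radius, center=(8, 8)):
    """Simulate an aperture by averaging views near the grid center.

    images: dict mapping (row, col) grid position -> HxWx3 float array
    radius: grid-distance cutoff; radius = 0 keeps only the center view
    """
    selected = [img for (r, c), img in images.items()
                if abs(r - center[0]) <= radius and abs(c - center[1]) <= radius]
    # averaging more surrounding views mimics a wider aperture
    return sum(selected) / len(selected)
```

Rendering the result for each radius from 1 to 8 produces the aperture-adjustment GIF shown below.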

radius = 1

Snow

radius = 5

Forest

radius = 8

Mountains

GIF of aperture adjustments for radii 1 through 8: Forest

What I learned about lightfields

The idea of capturing images from different positions on a plane orthogonal to the optical axis is interesting. At first, I didn't understand the need for shifting the images in depth refocusing. However, once I saw that nearer objects move more than background objects when the camera moves, it made sense that simply averaging all of the images would produce a blurry result. I also learned how to simulate widening or narrowing a camera's aperture and what effect changing the aperture has.