CS 194-26 Project 5

Lightfield Camera: Depth Refocusing and Aperture Adjustment with Light Field Data

Quinn Tran (abu)


Depth Refocusing

As demonstrated in this paper by Ng et al., capturing many images over a plane orthogonal to the optical axis enables complex effects using very simple operations like shifting and averaging. The goal of this project is to reproduce refocusing at specific depths across an image using real light field data. We create "light fields" of objects by photographing them from a grid of positions on a plane orthogonal to the optical axis. In this project, we use data available in the Stanford Light Field Archive to reconstruct some light field effects.

The light fields for the chessboard are organized as a 17x17 grid of images. Each image comes with an (X, Y) pair: the coordinates of the camera's center of projection, up to an unknown scale. The difference between these coordinates and the grid center (computed by averaging all the (X, Y) coordinates) gives each image's offset. To refocus, shift each image by its offset times an empirically chosen scale factor, then average all the shifted images. I used 45 scale values in the range [-.3, .3]; a more negative scale brings the back of the object into focus, while a positive one brings the front into focus.
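The shift-and-average step above can be sketched roughly as follows (a minimal sketch assuming NumPy; the function name and integer-pixel rolling are my own simplifications, and a real implementation might use subpixel shifts instead):

```python
import numpy as np

def refocus(images, positions, scale):
    """Refocus a light field: shift each sub-aperture image toward the
    grid center by its scaled offset, then average.

    images:    list of HxWx3 arrays (the grid of views, flattened)
    positions: Nx2 array of (X, Y) camera-center coordinates
    scale:     depth parameter; in this project roughly in [-0.3, 0.3]
    """
    center = positions.mean(axis=0)             # average (X, Y) over the grid
    out = np.zeros_like(images[0], dtype=np.float64)
    for img, (x, y) in zip(images, positions):
        dx = scale * (x - center[0])
        dy = scale * (y - center[1])
        # integer roll as a cheap shift; rows move with Y, columns with X
        out += np.roll(img, (int(round(dy)), int(round(dx))), axis=(0, 1))
    return out / len(images)
```

Sweeping `scale` over the range and saving each result gives the refocusing animation.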

chessboard, geode, legoknights

Aperture Adjustment

We emulate adjusting the aperture by setting a "radius" in the 17x17 grid and averaging only the images within that distance of the center image. The larger the radius, the larger the effective aperture and the shallower the depth of field, so more of the image blurs. The smaller the radius, the smaller the aperture and the sharper the image. This makes sense because averaging more images taken from different viewpoints blurs everything away from the focal plane.
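A minimal sketch of this selection-and-average step (the function name and the dict layout are assumptions; I use Chebyshev distance so that radius 8 covers the full 17x17 grid, and average without shifting, i.e. focus at scale 0):

```python
import numpy as np

def adjust_aperture(grid, radius):
    """Average only the views within `radius` of the center image;
    a larger radius emulates a larger aperture.

    grid: dict mapping (row, col) -> HxWx3 image, rows/cols in 0..16
    """
    cr, cc = 8, 8                         # center of the 17x17 grid
    selected = [img for (r, c), img in grid.items()
                if max(abs(r - cr), abs(c - cc)) <= radius]
    return sum(np.asarray(img, dtype=np.float64) for img in selected) / len(selected)
```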

chessboard of radius [0,8]

Bells and Whistles

I used my 3D print of Zoidberg to model depth refocusing and aperture adjustment. I tried to capture a 3x3 grid of Zoidberg, 9 photos total. This obviously didn't go too well, because moving my phone camera by hand wasn't very precise, even using measurements (for X) and the box in the back (for Y) as reference points. I cropped manually to estimate the image offsets, then downscaled the images for performance. My scale factors ranged over [-.3, .3]. Although the "focused" image doesn't look quite right, the blur effects look cool, and we can see how reversing the shifts that make Zoidberg blurrier would yield a more focused picture. The best scale factor was around .15.


Since I only had a 3x3 grid, I have just two images for aperture adjustment. We can see that the more images are averaged, the shallower the depth of field and the blurrier the image.

center image (radius 0), radius 1

Reflection

This project was conceptually simple, but the inputs required a lot of manual labor (or sophisticated capture hardware) to produce. I now appreciate the wonders of matrix rolls and the simple two-plane grid format of the light field.