Project 5: Lightfield Camera

Kin Seng Chau cs194-26-aae

Overview

In this project, we use the lightfield data from the Stanford Light Field Archive to produce the effect of depth refocusing and aperture adjustment.

Depth Refocusing

Given a 17x17 grid of images that all share the same optical axis direction, simply averaging them produces an image that is sharp on the far side of the scene but blurry on the near side. If we instead shift each image toward the center image at grid position (8, 8) by a certain amount before averaging, we can focus at different depths. To choose the right shift, we compute the difference between each camera's position on the 17x17 grid and the center: difference = (center_y - image_y, center_x - image_x). We then shift each image by difference * alpha, where alpha is a constant factor ranging from -0.5 to 0.2 with a step size of 0.025. The more negative alpha is, the closer the focus moves to the near side of the image.
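The shift-and-average step above can be sketched as follows. This is a minimal illustration, not the exact project code: the function name, the (N, H, W) array layout, and the (y, x) position format are my own assumptions, and I use scipy.ndimage.shift for the sub-pixel shifts.

```python
import numpy as np
from scipy.ndimage import shift

def refocus(images, positions, alpha):
    """Shift each sub-aperture image toward the grid center, then average.

    images:    (N, H, W) array of grayscale sub-aperture views (assumed layout)
    positions: (N, 2) array of (y, x) grid coordinates for each view
    alpha:     scale factor on the shift; more negative alpha focuses nearer
    """
    center = np.array([8, 8])  # center of the 17x17 camera grid
    acc = np.zeros_like(images[0], dtype=float)
    for img, pos in zip(images, positions):
        # difference = (center_y - image_y, center_x - image_x), scaled by alpha
        dy, dx = (center - np.asarray(pos)) * alpha
        acc += shift(img.astype(float), (dy, dx))  # sub-pixel shift
    return acc / len(images)
```

Sweeping alpha from -0.5 to 0.2 with this function and saving each frame produces the refocusing animation.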

[Figure: refocusing results at alpha=-0.5 and alpha=0.2, and a gif sweeping over alpha]

Aperture Adjustment

To simulate taking a picture with different apertures, we can average a subset of the 17x17 images instead of using all of them. We choose the subset to be the set of images within a given radius of the center image; a larger radius corresponds to a larger aperture, since it simulates letting more light reach the observer. A larger aperture yields a narrower depth of field, so averaging more images around the center should produce a smaller sharp region with blur elsewhere. I fixed alpha=-0.2 so that the focus lies on the center part of the image, then varied the radius around the center from 1 to 8 with a step size of 0.25.
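The subset selection described above can be sketched like this. It is a simplified illustration under assumed names and array layouts (and it omits the per-image refocusing shifts for brevity; in practice you would combine it with the shift-and-average step):

```python
import numpy as np

def aperture_average(images, positions, radius, center=(8, 8)):
    """Average only the sub-aperture views within `radius` of the grid center.

    images:    (N, H, W) array of sub-aperture views (assumed layout)
    positions: (N, 2) array of (y, x) grid coordinates
    radius:    grid-distance cutoff; larger radius = larger synthetic aperture
    Returns the averaged image and the number of views used.
    """
    center = np.asarray(center)
    dists = np.linalg.norm(positions - center, axis=1)  # distance to center view
    mask = dists <= radius
    return images[mask].mean(axis=0), int(mask.sum())
```

With radius 0 only the center view survives (mimicking a pinhole); increasing the radius admits more views and narrows the depth of field.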

[Figure: aperture adjustment results at r=1 and r=8, and a gif sweeping over r]

Summary

The lightfield project was pretty fun to work on. It helps one imagine how cameras and lenses with different apertures work, and how people can use microlens arrays to do very cool things.