CS194-26: Project 5

Rachael Wu (cs194-26-acr)

Overview

The goal of this project was to use light field data from the Stanford Light Field Archive to simulate camera effects computationally. More specifically, we took images captured over a plane orthogonal to the optical axis, shifted them towards a single point, and averaged them to implement depth refocusing and aperture adjustment.

Part 1: Depth Refocusing

The first part of the project was to implement depth refocusing. To do so, we first designated a central point (u, v) that all the other images' views would shift towards. In this case, we selected the point defined by the image at grid position (8, 8), the center of the 17 x 17 grid. Then, for every other image with coordinates (u', v'), we shifted it by an offset of (d * (u - u'), d * (v - v')), where d is a constant scaling factor that determines the depth at which the result is focused. Finally, we averaged all the shifted images to get the final result. Below are our results for d = 0 to d = 0.6:


From the images, we can see that for d = 0 (i.e., averaging the images without shifting them at all), the chess pieces farther from the camera are sharper, since their positions vary little across the images. In contrast, larger d values shift the images to align objects closer to the camera, bringing those chess pieces into focus instead.
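Below is a minimal sketch of this shift-and-average procedure in Python, assuming the sub-aperture images have already been loaded as H x W x 3 numpy arrays along with their (u, v) grid positions. The names refocus, images, and coords are hypothetical, and the mapping from grid offsets to pixel shifts is taken to be exactly the scaling by d described above:

    import numpy as np
    from scipy.ndimage import shift

    def refocus(images, coords, d, center=(8, 8)):
        # Shift every sub-aperture image toward the center view by
        # (d * (u - u'), d * (v - v')), then average the shifted stack.
        acc = np.zeros(images[0].shape, dtype=np.float64)
        for img, (u, v) in zip(images, coords):
            offset = (d * (center[0] - u), d * (center[1] - v), 0)  # last axis is color
            acc += shift(img.astype(np.float64), offset, order=1)
        return acc / len(images)

Bilinear interpolation (order=1) is used for the sub-pixel shifts; since the result is an average of many images, higher-order interpolation makes little visible difference.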

Part 2: Aperture Adjustment

For this part of the project, we use the light field data to mimic aperture adjustments. A smaller aperture admits light from a narrower range of angles, which increases the depth of field and makes more of the scene appear sharp; a larger aperture blurs everything away from the focal plane. Thus, instead of averaging all the images, for this part we fix a specific d value and only use images within a smaller radius of the grid's center. For example, for a radius of 3, we only use the images for which 7 <= u, v <= 9.
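Continuing the hypothetical sketch from Part 1, aperture adjustment only changes which images enter the average. The condition below is our assumed reading of "radius": an image is kept when its grid distance from the center view is less than radius / 2, so a radius of 3 keeps exactly the images with 7 <= u, v <= 9, matching the example above:

    def adjust_aperture(images, coords, d, radius, center=(8, 8)):
        # Keep only the images whose grid position lies within the given
        # "radius" of the center view, then shift-and-average that subset.
        kept = [(img, uv) for img, uv in zip(images, coords)
                if max(abs(uv[0] - center[0]), abs(uv[1] - center[1])) < radius / 2]
        sub_images = [img for img, _ in kept]
        sub_coords = [uv for _, uv in kept]
        return refocus(sub_images, sub_coords, d, center)

With radius = 1 this keeps only the center image (the smallest simulated aperture), and with radius = 17 it keeps the entire 17 x 17 grid (the largest).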

For this part, we set d = 0.2 and use radii of 1, 3, 6, 10, 13, and 17:

Part 3: Summary

From this project, I learned that light field datasets can be used to simulate seemingly complex camera effects, such as depth refocusing and aperture adjustment, entirely in software. Such techniques can help simulate human visual experience, which is crucial to areas such as virtual reality.