CS 194 Project 5

Author: Anoop Baliga

Overview

In this project, we got to play around with cool light-field data from the Stanford Light Field Archive. In part 1, we implemented depth refocusing, which lets us focus on objects at varying distances using a simple shift-and-add algorithm. In part 2, we saw how adjusting the aperture setting affects the overall quality and clarity of the output picture. I used the chessboard dataset.

Depth Refocusing

The general algorithm for the depth refocusing part of the project is as follows:

1. Read through the Stanford Lightfield Dataset to get the (u, v) coordinates from the image names.
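Step 1 can be sketched as a small filename parser. The exact naming pattern assumed below (e.g. `out_08_08_-3251.08_-3925.94.png`) and the order of the grid coordinates are assumptions about the archive's format; adjust the regex to match your copy of the dataset.

```python
import re

def parse_uv(filename):
    """Extract the (u, v) grid coordinates from a Stanford-style
    light-field filename. The 'out_<row>_<col>_...' pattern is an
    assumption about how the archive names its sub-aperture views."""
    m = re.match(r"out_(\d+)_(\d+)_", filename)
    if m is None:
        raise ValueError(f"unexpected filename: {filename}")
    # Which group is u and which is v depends on the dataset's
    # convention; swap if your grid comes out transposed.
    return int(m.group(1)), int(m.group(2))

# Example: the center view of the 17x17 grid
u, v = parse_uv("out_08_08_-3251.08_-3925.94.png")
```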

2. Determine the reference image for your shift-and-add algorithm. I used the middle image, where the (u, v) coordinates are (8, 8), as my reference image.

3. Loop through all the images in the dataset and compute each image's (x, y) shift relative to the reference image. This shift is scaled by a constant c, which controls the depth that ends up in focus.

4. Shift each image by the relative shift computed in step 3, then average all the shifted images. Thanks to Alvin Wan for the shift algorithm on Piazza!
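The four steps above can be sketched roughly as follows. This is a minimal version, not the exact implementation: it uses integer pixel shifts via `np.roll` (a fuller version would interpolate for sub-pixel shifts), and the sign convention relating c to near/far focus is an assumption.

```python
import numpy as np

def refocus(images, coords, c, ref=(8, 8)):
    """Shift-and-add depth refocusing.

    images: list of HxWx3 float arrays (the sub-aperture views)
    coords: list of (u, v) grid coordinates, one per image
    c:      focus parameter; varying it moves the focal plane
    """
    acc = np.zeros(images[0].shape, dtype=np.float64)
    for img, (u, v) in zip(images, coords):
        du, dv = u - ref[0], v - ref[1]
        # Shift each view toward the reference view, scaled by c.
        # np.roll rounds to whole pixels; sub-pixel interpolation
        # (e.g. scipy.ndimage.shift) would be smoother.
        acc += np.roll(img, (int(round(c * dv)), int(round(c * du))),
                       axis=(0, 1))
    return acc / len(images)
```

With c = 0 this reduces to a plain average of all views, which focuses at the depth the cameras were already aligned to.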

Below are the images for c values of -2, -1, 0, 1, and 2, respectively. For negative values of c we get a clearer image of the back pieces, and for positive values of c we get a clearer image of the front pieces. In addition, here are all of them in GIF form!

Aperture Adjustment

The general algorithm for the aperture part of the project is as follows:

1. Read through the Stanford Lightfield Dataset to get the u,v coordinates from the image names.

2. Determine the reference image from which to start the aperture adjustment. I used the middle image, where the (u, v) coordinates are (8, 8), as my reference image.

3. Loop through and average all the images within a given range of the reference image. For example, averaging 5 images around the reference includes the images with (u, v) values (8, 6), (8, 7), (8, 8), (8, 9), and (8, 10).
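The averaging step can be sketched as below. The exact neighborhood shape is a judgment call: this sketch uses Euclidean distance on the (u, v) grid, which is an assumption, since the counts used in this project (5, 19, 87, 289) suggest a roughly circular window that grows to cover the whole 17x17 grid.

```python
import numpy as np

def aperture_average(images, coords, radius, ref=(8, 8)):
    """Average all sub-aperture views within `radius` grid steps of
    the reference view, simulating a larger aperture. A Euclidean
    neighborhood is assumed here; other windows work the same way."""
    selected = [img for img, (u, v) in zip(images, coords)
                if (u - ref[0]) ** 2 + (v - ref[1]) ** 2 <= radius ** 2]
    return np.mean(selected, axis=0)
```

A larger radius averages more off-center views, which mimics opening up the aperture: depth of field shrinks and anything off the focal plane blurs.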

Below are the average images for 5, 19, 87, and 289 images around the reference image (including the reference image itself), respectively. We see that as we add more images, the front pieces get blurrier. This is because the front pieces shift noticeably from view to view in the light-field dataset (they have larger parallax), while the positions of the back pieces barely change.