CS194-26 Project 5: Depth Refocusing and Aperture Adjustment with Light Field Data

Omar Buenrostro (ach)

Overview

As the Ng et al. paper demonstrated, capturing multiple images over a plane orthogonal to the optical axis makes it possible to achieve complex effects with very simple operations like shifting and averaging. The goal of this project is to reproduce some of these effects using real lightfield data.

In total, a 17x17 grid of cameras was used to take the pictures, resulting in 289 images per lightfield dataset. These pictures serve as a sample of the plenoptic function: our idealized model of light.
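As a point of reference, a hypothetical loading sketch in Python is given below. The filename pattern, the lightfield-chess/ directory, and the `images` dictionary are all assumptions made for illustration; the only grounded detail is that grid positions such as 08_08 appear in the image names.

```python
# Hypothetical loading sketch: the directory name and filename pattern
# are assumptions, not the dataset's documented format.
import glob
import re
import numpy as np
from skimage import io

images = {}
for path in glob.glob("lightfield-chess/*.png"):
    # Assumed pattern: two zero-padded grid indices (e.g. "08_08") in the name.
    match = re.search(r"(\d{2})_(\d{2})", path)
    if match:
        u, v = int(match.group(1)), int(match.group(2))
        images[(u, v)] = io.imread(path).astype(np.float64) / 255.0

assert len(images) == 289  # the full 17x17 camera grid
```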

Part 1 - Depth Refocusing

The first effect that can be reproduced is Depth Refocusing. Pretty much, we want to change the depth an image appears focused at without going out and getting new images.

To achieve this effect, we average over all 289 images in the dataset. Before averaging, we shift every image by an offset determined by its position relative to the center image (image 08_08).

Let (x, y) be the position of the center image, (u, v) the position of some other image, and alpha an additional parameter that controls the strength of the shift. To align the two images, we shift the other image by alpha * [(x - u), (y - v)]. After shifting, we average all 289 images. By varying alpha, we change the depth at which the averaged image is focused.
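A minimal sketch of this shift-and-average procedure is shown below, assuming the `images` dictionary from the loading sketch above; `scipy.ndimage.shift` stands in for whatever subpixel shifting routine is actually used.

```python
from scipy.ndimage import shift

def refocus(images, alpha, center=(8, 8)):
    """Shift every sub-aperture image toward the center image, then average."""
    x, y = center
    total = None
    for (u, v), img in images.items():
        # Shift by alpha * [(x - u), (y - v)]; the third (color) axis is untouched.
        shifted = shift(img, (alpha * (x - u), alpha * (y - v), 0))
        total = shifted if total is None else total + shifted
    return total / len(images)
```

Sweeping alpha over a range like the ones used below then produces the refocused sequences.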

First I apply this procedure to the lightfield-chess images:

[Figure: Center Image]
[Figure: Depth Refocused Images, alpha in [-0.7, 0.4]]

Then I apply this procedure to the lightfield-amethyst images:

[Figure: Center Image]
[Figure: Depth Refocused Images, alpha in [-0.2, 0.1]]

For both examples, we see that by varying alpha, we can change the point of focus of our image, mimicking what would happen had the image been focused at different depths.

Part 2 - Aperture Adjustment

The second effect that can be reproduced is Aperture Adjustment.

To produce this effect, we borrow the ideas of shifting and averaging from the previous part. However, instead of averaging over all 289 images, we only average the images within r cameras of the center. By adjusting r, which controls the number of cameras we use, we capture more or less light, which in turn mimics increasing or decreasing the aperture size. I also chose a focus point (alpha) for each dataset based on my results from Part 1, picking whatever seemed to produce a centered point of focus.
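A short sketch of this radius-filtered average follows, reusing `images` and `shift` from the sketches above. The Chebyshev distance used to decide whether a camera lies within r of the center is my assumption; the write-up only requires some notion of grid distance.

```python
def adjust_aperture(images, alpha, r, center=(8, 8)):
    """Average only the shifted images within r grid steps of the center camera."""
    x, y = center
    total, count = None, 0
    for (u, v), img in images.items():
        # Assumed distance metric: Chebyshev, i.e. a (2r+1) x (2r+1) block of cameras.
        if max(abs(x - u), abs(y - v)) > r:
            continue
        shifted = shift(img, (alpha * (x - u), alpha * (y - v), 0))
        total = shifted if total is None else total + shifted
        count += 1
    return total / count
```

With r = 0 this returns just the center image; increasing r toward 8 sweeps in the full grid, mimicking a wider aperture.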

First I apply this procedure to the lightfield-chess images:

[Figure: Center Image]
[Figure: Aperture-Adjusted Images, alpha = -0.2, r in [0, 8]]

Then I apply this procedure to the lightfield-amethyst images:

[Figure: Center Image]
[Figure: Aperture-Adjusted Images, alpha = 0, r in [0, 8]]

For both examples, we can see that as r increases, the number of cameras included increases, which leads to the area in focus becoming progressively smaller: exactly what would happen if we kept increasing a camera's aperture size.

Summary

The lecture on the plenoptic function made this project seem daunting at first. It was really surprising that simple shifting and averaging allowed us to produce such complex effects. At first I had trouble believing it, but I now know that everything is already in the data. Everything is already captured by the light.