Lightfield Camera: Depth Refocusing and Aperture Adjustment with Light Field Data

CS 194-26: Computational Photography, Fall 2018, Project 5

Nathan Petreaca

cs194-26-afq

This project uses a dataset captured by a light field camera to change the focus and aperture of an image after it has been taken.

1. Background

A light field camera is essentially an array of separate cameras placed evenly on a plane, so it captures the same scene from many slightly shifted viewpoints. By shifting and averaging these views in different ways, we can synthesize new images of the scene in which properties such as focus and aperture have been changed after the fact.

Here we use image grids from the Stanford Light Field Archive.
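
Throughout this write-up the light field is treated as a 17x17 grid of sub-aperture images. The sketch below shows one way such a grid could be loaded with NumPy and scikit-image; the function name load_lightfield and the filename pattern (grid row and column encoded as out_RR_CC_*.png) are illustrative assumptions, and the exact naming varies between datasets in the archive.

    import glob
    import os
    import re

    import numpy as np
    from skimage import io


    def load_lightfield(directory, grid_size=17):
        """Load the sub-aperture images into a nested grid of arrays."""
        grid = [[None] * grid_size for _ in range(grid_size)]
        for path in sorted(glob.glob(os.path.join(directory, "*.png"))):
            # Assumed filename convention: the first two numbers give the
            # camera's row and column in the grid, e.g. out_05_12_...png.
            m = re.match(r"out_(\d+)_(\d+)", os.path.basename(path))
            if m is None:
                continue
            r, c = int(m.group(1)), int(m.group(2))
            # Drop any alpha channel and rescale to [0, 1] floats.
            grid[r][c] = io.imread(path)[..., :3].astype(np.float64) / 255.0
        # Shape (grid_size, grid_size, H, W, 3) when every view is present.
        return np.array(grid)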

2. Depth Refocusing

To refocus at a particular depth, we take each image in the camera grid, shift it toward a common reference view by an amount proportional to that camera's offset from the centre of the grid, and then average all of the shifted images. The amount of shift per unit of offset selects the depth that ends up in focus: for example, to focus on the front of the chessboard, I choose the shift so that the pixels showing the front of the board line up across every shifted image. Intuitively, the images are moved so that they all agree on one particular set of pixels but disagree everywhere else; the region that is aligned stays sharp in the average, while the rest of the scene blurs.
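
A minimal sketch of this shift-and-average refocusing, assuming the (17, 17, H, W, 3) array produced by the load_lightfield sketch above. The parameter name alpha and the sign convention of the shift are my own choices for illustration; the useful range and sign of alpha depend on the dataset.

    import numpy as np
    from scipy.ndimage import shift as nd_shift


    def refocus(lf, alpha):
        """Average all views, shifting each by alpha times its offset from the centre camera."""
        grid_r, grid_c = lf.shape[:2]
        center_r, center_c = (grid_r - 1) / 2.0, (grid_c - 1) / 2.0
        out = np.zeros(lf.shape[2:], dtype=np.float64)
        for r in range(grid_r):
            for c in range(grid_c):
                dy, dx = alpha * (center_r - r), alpha * (center_c - c)
                # Shift only the two spatial axes; leave the colour channel alone.
                out += nd_shift(lf[r, c], (dy, dx, 0), order=1, mode="nearest")
        return out / (grid_r * grid_c)

    # Sweeping alpha over a small range moves the plane of focus through the scene,
    # producing the series of refocused images shown below.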

[Figure: depth refocusing results]

3. Aperture Adjustment

To change the aperture of the synthesized image, we control how many camera images we average, keeping only those cameras within a certain distance of the centre of the grid. Using just the image from the centre camera corresponds to the smallest possible aperture: like a pinhole, it keeps every depth sharp. As we include and average images from more and more of the surrounding cameras, the synthetic aperture grows, and regions away from the focal plane become increasingly blurred; averaging the entire grid simulates the largest aperture available.
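
A companion sketch for aperture adjustment under the same assumptions as refocus() above: only cameras within radius grid cells of the centre contribute to the average, and an optional alpha shift keeps a chosen depth in focus. The radius parameter and the circular selection are illustrative choices, not the only way to define the synthetic aperture.

    import numpy as np
    from scipy.ndimage import shift as nd_shift


    def adjust_aperture(lf, radius, alpha=0.0):
        """Average only the views within `radius` grid cells of the centre camera."""
        grid_r, grid_c = lf.shape[:2]
        center_r, center_c = (grid_r - 1) / 2.0, (grid_c - 1) / 2.0
        total = np.zeros(lf.shape[2:], dtype=np.float64)
        count = 0
        for r in range(grid_r):
            for c in range(grid_c):
                if (r - center_r) ** 2 + (c - center_c) ** 2 > radius ** 2:
                    continue  # camera lies outside the synthetic aperture
                dy, dx = alpha * (center_r - r), alpha * (center_c - c)
                total += nd_shift(lf[r, c], (dy, dx, 0), order=1, mode="nearest")
                count += 1
        return total / max(count, 1)

    # radius = 0 keeps only the centre camera (smallest aperture, everything sharp);
    # a radius large enough to cover the whole grid gives the largest aperture and
    # the strongest blur away from the focal plane.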


[Figure: aperture adjustment results]

4. Summary

Capturing the scene from many viewpoints at once makes simple operations like shifting and averaging surprisingly powerful: with nothing more than that, we can refocus an image and change its aperture after it has been captured.