CS194-26 Project 5 - Lightfield Camera

Parsa Fereydouni - cs194-26-agy

Overview

As the paper by Ng et al. demonstrates, capturing multiple images over a plane orthogonal to the optical axis makes it possible to achieve complex effects, such as refocusing and aperture adjustment, with very simple operations like shifting and averaging. The goal of this project is to reproduce some of these effects using real lightfield data.

I will demonstrate the power of these simple mathematical operations on images from the Stanford Light Field Archive and display the results below.

Part 1 - Depth Refocusing

In this part we shift the images in the grid and then average them, which lets us move the plane of focus through the scene.
We control where that focus lands by shifting each image toward a central reference camera and scaling the shift by a constant, as the gif below shows.

The objects which are far away from the camera do not vary their position significantly when the camera moves around while keeping the optical axis direction unchanged. The nearby objects, on the other hand, vary their position significantly across images. Averaging all the images in the grid without any shifting will produce an image which is sharp around the far-away objects but blurry around the nearby ones. The result of this averaging can be seen below:

[Figure: average of all unshifted images; far-away objects in focus, nearby objects blurred]
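Computationally this baseline is nothing more than an elementwise average of every image in the grid. Here is a minimal sketch of that step, assuming the rectified images have been unpacked into a local lightfield/ folder (the folder and output names are placeholders of mine, not part of the dataset):

import glob
import numpy as np
import skimage.io as skio

paths = sorted(glob.glob('lightfield/*.png'))

# Accumulate in float to avoid uint8 overflow, then divide by the number of images.
avg = sum(skio.imread(p).astype(np.float64) for p in paths) / len(paths)

skio.imsave('unshifted_average.jpg', avg.astype(np.uint8))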

Similarly, shifting the images 'appropriately' and then averaging allows one to focus on objects at different depths. The appropriate shift depends on which depth we want to focus on and on the position in the camera grid from which the image was taken. The position of each camera is given by the image file name, so the shift can be calculated by multiplying the distance of the camera from the center camera by some constant (a different constant for each depth).
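A minimal sketch of this shift-then-average step is below. It assumes the (u, v) camera coordinates can be parsed from the filenames of the rectified Stanford datasets; the exact parsing, the sign convention of the shift, and the refocus helper name are my own assumptions, not the archive's specification.

import glob
import numpy as np
import skimage.io as skio
from os.path import basename
from scipy.ndimage import shift as nd_shift

def refocus(paths, alpha):
    """Shift each sub-aperture image toward the mean camera position by
    alpha times its (u, v) offset, then average the shifted images."""
    # (u, v) are assumed to be the 4th and 5th underscore-separated fields of each
    # filename (out_<row>_<col>_<v>_<u>_.png); adapt this to your actual file names.
    uvs = np.array([[float(f) for f in basename(p)[:-4].split('_')[3:5]] for p in paths])
    center = uvs.mean(axis=0)

    acc = None
    for p, uv in zip(paths, uvs):
        im = skio.imread(p).astype(np.float64)
        dv, du = alpha * (uv - center)  # flip the sign if the focus moves the wrong way
        shifted = nd_shift(im, (dv, du, 0), order=1, mode='nearest')
        acc = shifted if acc is None else acc + shifted
    return acc / len(paths)

paths = sorted(glob.glob('lightfield/*.png'))
skio.imsave('refocused.jpg', np.clip(refocus(paths, 0.3), 0, 255).astype(np.uint8))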

[Figures: shifted-and-averaged results for two different shift constants, focused at different depths]

As seen above, increasing the constant factor used for the shifts moves the depth at which the result is in focus. By repeating the above step and gradually increasing this constant from 0 (no shift, far-away objects in focus) to 0.7 (largest shift in this sequence, nearby objects in focus), we can better see the process of refocusing.
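One way to produce that sequence is to sweep the constant and collect the frames into a gif. The sketch below reuses refocus and paths from the sketch above; using imageio for the gif is my own choice, not something the project requires.

import numpy as np
import imageio

# refocus() and paths come from the sketch above.
frames = [np.clip(refocus(paths, a), 0, 255).astype(np.uint8)
          for a in np.linspace(0.0, 0.7, 15)]
imageio.mimsave('refocus.gif', frames, duration=0.2)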

Part 2 - Aperture Adjustment

Averaging a number of images sampled over a large grid perpendicular to the optical axis mimics a camera with a much larger aperture. Using images from a smaller grid results in an image that mimics a smaller aperture. In this part, we are going to generate images which correspond to different apertures while focusing on the same point.

Recall the image that results from averaging all the unshifted images (far-away objects in focus):

[Figure: average of all images, simulating the widest aperture]

The above image corresponds to the largest simulated aperture because it is an average of all the images. Now, by averaging only the images from cameras within a certain radius of the center camera, we can recreate the same image with different apertures. Below you can see the results for two such radius values:

[Figures: averages over two different camera radii, simulating two different aperture sizes]

By repeating the above step and increasing the radius from 1 (just the center camera's image) to 10 (all images), we can better see the process of adjusting the aperture.
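A minimal sketch of this radius-based selection follows, assuming the grid row and column are the first two numbers in each filename, as in the rectified Stanford datasets; the example radii and output names are placeholders of mine.

import glob
import numpy as np
import skimage.io as skio
from os.path import basename

def aperture_average(paths, radius):
    """Average only the (unshifted) images whose camera lies within `radius`
    grid steps of the mean camera position."""
    # Grid row/column are assumed to be the first two numbers in each filename,
    # e.g. out_<row>_<col>_..., as in the rectified Stanford datasets.
    coords = np.array([[int(n) for n in basename(p).split('_')[1:3]] for p in paths])
    center = coords.mean(axis=0)
    keep = [p for p, rc in zip(paths, coords) if np.linalg.norm(rc - center) <= radius]
    return sum(skio.imread(p).astype(np.float64) for p in keep) / len(keep)

paths = sorted(glob.glob('lightfield/*.png'))
for r in (2, 6):  # two example radii; any values between 1 and 10 work
    skio.imsave(f'aperture_r{r:02d}.jpg', aperture_average(paths, r).astype(np.uint8))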

Summary

This project demonstrated how a simple trick can make a big difference. The idea is very straightforward, and yet it works like magic. I enjoyed its simplicity and elegance.