Lightfield Camera

CS 194-26: Computational Photography, Fall 2018, Project 5

Zheng Shi, cs194-26-aad

Project Overview

This project explores different ways of combining images from the Stanford Light Field Archive. The images in the archive are captured over a regularly spaced grid of camera positions. We will show how effective and powerful the simple idea of aligning and averaging can be.

1. Depth Refocusing

Everyday experience tells us that when we move our viewpoint, objects close to us appear to shift significantly more than objects far away. Let's look at four images taken from the four corner positions of the grid: (16_16, 16_00, 00_16, 00_00).

If we simply average all images in the grid, the foreground will be very blurry. The closer an object is, the more it shifts across the different views. So if we shift the images beforehand to bring an object to a common position in all of them, and then average the shifted images, we can expect a result in which only the aligned object is sharp (i.e., in focus).
Since the images are taken by a 17-by-17 grid of cameras, I align all images to the middle image (08, 08). A parameter α determines how far to shift: for example, image (12, 05) is shifted vertically by 4α pixels and horizontally by -3α pixels. I used the scipy.ndimage.shift function, since it supports sub-pixel (floating-point) shifts by interpolating nearby pixels.
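Below is a minimal sketch of this refocusing step, assuming the sub-aperture images sit in one directory with the grid row/column encoded in the filename. The `out_RR_CC` naming pattern and the `grid_position` helper are hypothetical; adapt them to the actual archive files.

```python
import re
from pathlib import Path

import numpy as np
from scipy.ndimage import shift
from skimage import io

CENTER = (8, 8)  # align everything to the middle sub-aperture image (08, 08)


def grid_position(filename):
    """Parse the (row, col) grid position from a filename.

    Assumes a hypothetical 'out_RR_CC' naming pattern; adjust to the
    actual archive filenames.
    """
    m = re.search(r"out_(\d+)_(\d+)", filename)
    return (int(m.group(1)), int(m.group(2))) if m else None


def refocus(image_dir, alpha):
    """Shift every sub-aperture image toward the center view and average."""
    accum, count = None, 0
    for path in sorted(Path(image_dir).glob("*.png")):
        pos = grid_position(path.name)
        if pos is None:
            continue
        row, col = pos
        img = io.imread(path).astype(np.float64)
        # Closer objects move more between views, so the choice of alpha
        # controls which depth ends up aligned (and therefore in focus).
        dy = alpha * (row - CENTER[0])
        dx = alpha * (col - CENTER[1])
        # scipy.ndimage.shift interpolates, so sub-pixel (float) shifts work.
        shifted = shift(img, (dy, dx, 0), order=1, mode="nearest")
        accum = shifted if accum is None else accum + shifted
        count += 1
    return np.clip(accum / count, 0, 255).astype(np.uint8)
```

Each call with a different α then yields one frame of the focal sweep shown below.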
Below are individual frames for varying values of α (from 1.0 to -4.0).

Below is an animation simulating a camera focusing at different depths.

2. Aperture Adjustment

For part 2, we mimic aperture adjustment. This is done by averaging only a subset of all the images: sampling more images is similar to using a larger aperture. To simulate the aperture, we sample images from nearby grid positions. Each time, I select the images within a circle of radius r around the center image (08, 08). A larger radius makes out-of-focus regions blurrier and effectively mimics a larger aperture.
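A sketch of this aperture simulation, continuing the refocusing sketch above (same imports, CENTER, and hypothetical grid_position helper). The optional alpha argument is an assumption on my part: it simply reuses the refocusing shift so the simulated aperture can stay focused on a chosen depth.

```python
def adjust_aperture(image_dir, radius, alpha=0.0):
    """Average only the sub-aperture images within `radius` of the center.

    radius = 0 keeps just the center view (smallest aperture, everything
    sharp); a larger radius averages more views, mimicking a wider aperture.
    """
    accum, count = None, 0
    for path in sorted(Path(image_dir).glob("*.png")):
        pos = grid_position(path.name)
        if pos is None:
            continue
        row, col = pos
        if (row - CENTER[0]) ** 2 + (col - CENTER[1]) ** 2 > radius ** 2:
            continue  # outside the simulated aperture
        img = io.imread(path).astype(np.float64)
        # Optional refocusing shift so the larger aperture stays focused
        # on the same depth (hypothetical addition, not part of the minimum
        # averaging step described above).
        dy = alpha * (row - CENTER[0])
        dx = alpha * (col - CENTER[1])
        img = shift(img, (dy, dx, 0), order=1, mode="nearest")
        accum = img if accum is None else accum + img
        count += 1
    return np.clip(accum / count, 0, 255).astype(np.uint8)
```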
Below are individual frames for varying radius/aperture sizes (from 0.0 to 8.0).
Below is an animation simulating cameras with different apertures focusing on a common point (in this example, the far end of the bracelet).

3. Summary

I used to think that a lightfield camera records depth information directly (possibly with something like the ultrasonic echolocation bats use). It turns out that the same goal can be accomplished with much simpler techniques. This project showed me how the simple idea of aligning and averaging over a large dataset can produce the desired results.