Light Field Camera

Zac Dehkordi

Overview

This project was inspired by this paper by Ng et al. (Ren Ng is the founder of the Lytro camera and a professor at Berkeley!), which demonstrated that capturing multiple images over a plane orthogonal to the optical axis lets us achieve complex effects, such as refocusing, using very simple operations like shifting and averaging. The goal of this project is to reproduce some of these effects using real light field data.

The data used is from Stanfurd's chessboard dataset, which features photos taken on a regular 17x17 grid.

Depth Refocusing

My Approach:

  1. Using the grid coordinates from each filename, compute the offset of each image from the center image at position [8, 8]
  2. Shift each image by S * (offset)
  3. Average the resulting images
  4. Vary S to simulate different focal distances. A larger S brings the foreground into focus, while a smaller S brings the background into focus
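The steps above can be sketched as follows. This is a minimal illustration, not my exact implementation: it assumes SciPy is available, that `coords` holds each image's (row, col) grid position already parsed from its filename, and a particular sign convention for the shift (which in practice you may need to flip).

```python
import numpy as np
from scipy.ndimage import shift

def refocus(images, coords, s, center=(8, 8)):
    """Shift-and-average refocusing.

    images: list of H x W x 3 float arrays (the sub-aperture views)
    coords: list of (row, col) grid positions, one per image
    s:      refocusing parameter; larger s focuses nearer to the camera
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (r, c) in zip(images, coords):
        # Offset of this view from the center view, scaled by s.
        # Sign convention here is an assumption.
        dy = s * (center[0] - r)
        dx = s * (center[1] - c)
        # Bilinear shift of the color image (no shift along the channel axis).
        acc += shift(img, (dy, dx, 0), order=1, mode="nearest")
    return acc / len(images)
```

With s = 0 no image is shifted, so the result is just the plain average of all views.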

S = 0.0

S = 0.3

S = 0.5

Aperture Adjustment

Using the same idea, we can artificially increase the aperture size of the “camera” using the light field data. My algorithm is simple. Since the cameras are set up in a 17x17 grid, we can just average the images from all cameras within some radius R of the center camera at [8, 8]. R = 0 is just the center image, while R = 8 averages all the images together. Intermediate values of R average the images that lie within radius R of the center camera, so a larger R simulates a wider aperture and a shallower depth of field.
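A sketch of this selection-and-average step is below. I am assuming a Chebyshev (max-coordinate) notion of radius, since that is the one under which R = 8 covers the entire 17x17 grid; the function name and arguments are illustrative, not the project's actual code.

```python
import numpy as np

def aperture_average(images, coords, R, center=(8, 8)):
    """Average all sub-aperture images within grid radius R of the center.

    images: list of equally-shaped float arrays
    coords: list of (row, col) grid positions, one per image
    R:      aperture radius in grid cells (Chebyshev distance, an assumption)
    """
    selected = [img for img, (r, c) in zip(images, coords)
                if max(abs(r - center[0]), abs(c - center[1])) <= R]
    # R = 0 selects only the center image; R = 8 selects every image.
    return np.mean(selected, axis=0)
```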

R = 0

R = 5

R = 8

Summary

I learned that light fields are cool and that I should invest in a light field camera!