Overview

As this paper by Ng et al. (Ren Ng is the founder of the Lytro camera and a Professor at Berkeley!) demonstrated, capturing multiple images over a plane orthogonal to the optical axis makes it possible to achieve complex effects (see this gallery - hover over different parts of the images) using very simple operations like shifting and averaging. The goal of this project is to reproduce some of these effects using real lightfield data.

Depth Refocusing

Objects far away from the camera do not change their position significantly when the camera moves around while keeping the optical axis direction unchanged. Nearby objects, on the other hand, shift significantly across images. Averaging all the images in the grid without any shifting therefore produces an image that is sharp for the far-away objects but blurry for the nearby ones. Conversely, shifting the images appropriately before averaging allows one to focus on objects at different depths.

By shifting each photo toward the center of the grid, (8, 8), by a scalar multiple of its distance from the center, and varying the scalar from -0.4 to 0.8, we can reproduce a refocusing effect.
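The shift-and-average step above can be sketched as follows. This is a minimal illustration, not the exact project code: the function name, the integer `np.roll` shift (a stand-in for a proper sub-pixel shift), and the `(u, v)` coordinate convention are all assumptions.

```python
import numpy as np

def refocus(images, coords, center=(8, 8), alpha=0.0):
    """Shift each sub-aperture image toward the grid center by
    alpha times its (u, v) offset from the center, then average.

    images: list of H x W x 3 float arrays (the 17 x 17 grid, flattened)
    coords: list of (u, v) grid positions, one per image
    alpha:  refocusing scalar (e.g. swept from -0.4 to 0.8)
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (u, v) in zip(images, coords):
        # Integer shift toward the center; np.roll is an
        # illustrative stand-in for sub-pixel interpolation.
        du = int(round(alpha * (center[0] - u)))
        dv = int(round(alpha * (center[1] - v)))
        acc += np.roll(img, shift=(du, dv), axis=(0, 1))
    return acc / len(images)
```

With `alpha = 0` this reduces to a plain average (focus on far objects); increasing `alpha` moves the plane of focus closer to the camera.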

[Figures: original pictures alongside their depth-refocusing results (three examples)]

Aperture Adjustment

Averaging a large number of images sampled over the grid perpendicular to the optical axis mimics a camera with a much larger aperture, while averaging only a few images near the center of the grid corresponds to a smaller aperture. Varying the radius of the set of images being averaged therefore simulates adjusting the aperture size.
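A sketch of this selection-and-averaging idea, under the same assumptions as before (illustrative names, `(u, v)` grid coordinates per image):

```python
import numpy as np

def adjust_aperture(images, coords, center=(8, 8), radius=2.0):
    """Mimic a variable aperture: average only the sub-aperture
    images whose grid position lies within `radius` of the center.

    A larger radius includes more images, simulating a larger
    aperture (shallower depth of field); radius 0 keeps only the
    center image, like a pinhole.
    """
    selected = [img for img, (u, v) in zip(images, coords)
                if (u - center[0]) ** 2 + (v - center[1]) ** 2 <= radius ** 2]
    return np.mean(selected, axis=0)
```

Sweeping `radius` from 0 up to the full grid extent produces the aperture-adjustment sequence.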

[Figures: original pictures alongside their aperture-adjustment results (three examples)]