Project 5: Lightfield Camera

By Dorian Chan - aec

Unlike a single raw image, a lightfield captures enough information to allow simple post-capture manipulations of the data that produce interesting effects. In this project, we demonstrate two such effects: refocusing and aperture adjustment.

Part 1 - Depth Refocusing

Fundamentally, far-away objects shift very little between the images captured by our lightfield camera array, while closer objects shift much more. We can therefore achieve a refocusing effect by shifting each image by an amount proportional to its camera's offset from the center of the array and then averaging the shifted versions: objects at the depth whose parallax matches the shift line up and come into focus, while everything else blurs out.
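This shift-and-average process can be sketched as follows. This is a minimal illustration, not the exact project code: it assumes the lightfield is stored as a dictionary mapping (u, v) grid coordinates to images, and it rounds shifts to whole pixels so it can use `np.roll`.

```python
import numpy as np

def refocus(images, alpha):
    """Shift-and-average refocusing over a lightfield camera grid.

    images: dict mapping (u, v) grid position -> HxW (or HxWx3) float array
    alpha:  refocus parameter; alpha = 0 is a plain average, and larger
            values bring progressively closer objects into focus
    """
    us = [u for u, v in images]
    vs = [v for u, v in images]
    cu, cv = np.mean(us), np.mean(vs)          # center of the camera grid
    acc = np.zeros_like(next(iter(images.values())), dtype=np.float64)
    for (u, v), img in images.items():
        # shift each sub-aperture view toward the center view,
        # proportionally to its offset in the grid
        du = int(round(alpha * (cu - u)))
        dv = int(round(alpha * (cv - v)))
        acc += np.roll(img, (du, dv), axis=(0, 1))
    return acc / len(images)
```

Sweeping `alpha` over a range of values produces the focal stacks shown below.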

This is the original, nonrefocused version of a chessboard:

Original

Refocusing this chessboard over a range of shifts:

Refocusing

Another example of a tarot scene:

Original

Refocusing this tarot scene:

Refocusing

Part 2 - Aperture Adjustment

We can also achieve an aperture-adjustment-like effect with a similar averaging process. A smaller aperture gathers light over a smaller region, corresponding to a smaller radius of cameras in the lightfield array; a bigger aperture corresponds to more of the cameras. We can approximate this effect by averaging fewer or more of the images, selected by their distance from the center of the array. As the examples below show, the depth of field changes as we vary this radius, roughly matching an aperture adjustment.
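A minimal sketch of this selection-and-average step, under the same assumed dictionary layout as before (grid coordinates mapped to images):

```python
import numpy as np

def adjust_aperture(images, radius):
    """Average only the sub-aperture views within `radius` of the grid center.

    images: dict mapping (u, v) grid position -> image array
    A small radius mimics a small aperture (deep depth of field);
    a large radius mimics a wide aperture (shallow depth of field).
    """
    us = [u for u, v in images]
    vs = [v for u, v in images]
    cu, cv = np.mean(us), np.mean(vs)
    # keep only cameras inside the chosen radius of the array center
    chosen = [img for (u, v), img in images.items()
              if (u - cu) ** 2 + (v - cv) ** 2 <= radius ** 2]
    return sum(np.asarray(img, dtype=np.float64) for img in chosen) / len(chosen)
```

At `radius = 0` this returns the single center view; growing the radius blends in progressively more off-center views, blurring out-of-focus depths.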

Chess Aperture

Tarot Aperture

Bells and Whistles - Interactive Refocusing

We can adapt our refocusing setup from above to allow for interactive refocusing: the user simply picks a point on the image to focus on, and we output an image refocused to that point. To do this, we search for the patch around the selected point in the other images of the array, measure how far it has shifted between views, and set our refocus shift to match that measurement.
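One way to estimate that shift is a brute-force window match between the center view and one off-center view. The sketch below is a hypothetical helper (the name `estimate_alpha`, the window and search sizes, and the sum-of-squared-differences criterion are our own choices, not necessarily the project's):

```python
import numpy as np

def estimate_alpha(center_img, other_img, point, grid_disp, win=8, search=10):
    """Estimate the refocus parameter for a user-selected point.

    point:     (row, col) clicked in the center view
    grid_disp: (du, dv) grid offset of `other_img` from the center camera
    Finds the pixel offset that best aligns a window around `point`
    between the two views, then converts it to a shift per unit baseline.
    """
    r, c = point
    patch = center_img[r - win:r + win, c - win:c + win]
    best_err, best_off = np.inf, (0, 0)
    # exhaustive search over candidate offsets
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = other_img[r - win + dr:r + win + dr,
                             c - win + dc:c + win + dc]
            err = np.sum((patch - cand) ** 2)
            if err < best_err:
                best_err, best_off = err, (dr, dc)
    # disparity per unit of camera-grid displacement is the refocus parameter
    du, dv = grid_disp
    return best_off[0] / du if du else best_off[1] / dv
```

The returned value plugs directly into the shift-and-average refocusing described in Part 1.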

We select a pawn to refocus to:

Select

And refocus the image:

Interactive

Fin

As the above demonstrates, lightfields are much more powerful than a single image because they capture real light information: instead of just color at each pixel, they also record the direction of the incoming light. Using this extra information, we can reproduce effects that normally require specialized hardware with post-processing alone.