Project 5: Lightfields


George Lee, cs194-26-adq

An Overview

A lightfield can be thought of as the collection of all of the rays of light arriving at the viewer's perspective after bouncing off objects in the scene. In the physical world, one way we can capture this data is by assembling a large array of cameras and photographing a scene with all of the cameras simultaneously. Stanford University has done exactly this with its lightfield archive.
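Datasets like this typically store one image per camera in the grid, with the grid position encoded in each filename. As a minimal sketch (the exact filename format here is an assumption, not the archive's actual convention), the grid indices can be recovered like so:

```python
import re

# Hypothetical filename pattern for a camera-grid dataset: "out_<row>_<col>_<extra>.png".
# The real Stanford archive encodes grid indices and camera coordinates in each
# name, but the exact format varies by dataset -- adjust the pattern to match.
PATTERN = re.compile(r"out_(\d+)_(\d+)_.*\.png$")

def grid_position(filename):
    """Extract (row, col) grid indices from a sub-aperture image filename."""
    m = PATTERN.search(filename)
    if m is None:
        raise ValueError("unrecognized filename: " + filename)
    return int(m.group(1)), int(m.group(2))
```

With the positions known, the images can be stacked into an array indexed by grid row and column, which is the representation the operations below assume.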

For this exercise, I implemented two applications of lightfield data as described in this paper: depth refocusing and aperture adjustment.

Depth Refocusing

When taking a photo with a normal camera, one typically focuses the lens at a fixed depth. When working with a lightfield, we no longer need to commit to a depth at capture time, as we can reconstruct an image with any particular depth in focus. If we were to take several photos of an object from slightly shifted viewpoints, we would notice that the distant parts of the scene move less between photos than the parts closer to the viewer. This parallax provides the intuition for why the distant objects end up in focus if we naively average all of the images in the lightfield: their positions barely change across views, so they reinforce each other, while nearby objects land in different places and blur out.

To focus the image at a closer depth, we shift each image in the lightfield toward the "average image" (here, the image at the center of the camera grid) by an amount proportional to that image's offset from the center, before averaging. To understand why this works, consider the physics: bringing the rays of a lightfield closer together means they are refracted less by the lens they pass through. This pushes the point of focus behind the object that we are viewing, leading to a myopic effect: objects that are closer to the viewer become more in focus.
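The shift-and-average idea above can be sketched as follows, assuming the lightfield is a NumPy array of shape (rows, cols, H, W) or (rows, cols, H, W, 3). This simplified version rounds shifts to whole pixels and uses wraparound np.roll at the borders; a fuller implementation would use subpixel shifts (e.g. scipy.ndimage.shift), and the sign of the shift depends on the dataset's parallax convention:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-average depth refocusing over a grid of sub-aperture images.

    Each image is shifted toward the central image in proportion to its
    grid offset; alpha selects the focal depth (alpha = 0 reproduces the
    naive average, which focuses on the distant parts of the scene).
    """
    rows, cols = lightfield.shape[:2]
    r0, c0 = rows // 2, cols // 2
    acc = np.zeros(lightfield.shape[2:], dtype=np.float64)
    for r in range(rows):
        for c in range(cols):
            # Integer pixel shift proportional to the grid offset.
            dy = int(round(alpha * (r - r0)))
            dx = int(round(alpha * (c - c0)))
            acc += np.roll(lightfield[r, c], (dy, dx), axis=(0, 1))
    return acc / (rows * cols)
```

Sweeping alpha over a range of values produces the sequence of images focused at different depths shown in the results.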

We can see the results below.

Aperture Adjustment

To understand how aperture adjustment works, we must first understand how a pinhole camera works. Rays of light bouncing off an object pass through a pinhole and are projected onto a screen. As the pinhole gets smaller, more of the resulting image comes into focus, because fewer "duplicate" rays of light reach any particular spot on the screen. These duplicate rays leave the object at slightly different angles, and when they land on the same spot they produce a slight blur.

We can apply the principles of a pinhole camera to a lightfield as well. Averaging a smaller subset of the lightfield is analogous to having a smaller pinhole on a pinhole camera. Conversely, averaging the entire lightfield together can be seen as setting a camera to its largest aperture setting.
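A minimal sketch of this subset-averaging idea, under the same (rows, cols, H, W[, 3]) array layout assumed earlier, selects only the sub-aperture images within a given grid radius of the center:

```python
import numpy as np

def adjust_aperture(lightfield, radius):
    """Average only the sub-aperture images within `radius` grid steps of
    the center, mimicking a smaller physical aperture.

    radius = 0 keeps just the central image (a pinhole-like result);
    a radius covering the whole grid uses the full lightfield, i.e. the
    largest aperture.
    """
    rows, cols = lightfield.shape[:2]
    r0, c0 = rows // 2, cols // 2
    acc = np.zeros(lightfield.shape[2:], dtype=np.float64)
    n = 0
    for r in range(rows):
        for c in range(cols):
            if (r - r0) ** 2 + (c - c0) ** 2 <= radius ** 2:
                acc += lightfield[r, c]
                n += 1
    return acc / n
```

Growing the radius step by step, optionally combined with the refocusing shift, yields the widening-aperture sequence used for the animation.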

To generate the gif below, I averaged several subsets of the lightfield of increasing size, each refocused on a central part of the image using the method described above.

Summary

Lightfields are incredibly versatile, as they provide editing options for a budding photographer even years after an image was taken. Rather than worrying about focus and aperture at capture time, a photographer can concentrate on framing the subject in the best possible way.