Lightfield Camera

Completed by: Lisa Jian

Depth Refocusing

We start with a grid of images sharing the same optical axis (i.e. all the pictures are taken on a plane perpendicular to the optical axis, with different offsets on that plane). This is our lumigraph/light field. If we simply average these images, objects that are far away will look in focus because their positions do not vary much across the images. Objects that are closer vary in position significantly, so in the averaged image they look blurry. (As an example: if you take a picture of a mountain in the distance and then take another after shifting your arm a bit, the mountain's position in the two pictures will barely change. However, if you photograph a jelly bean on a table close to you and shift your arm even a bit, the jelly bean's position will change dramatically between the images.) However, if we shift the images in our lumigraph appropriately before averaging, we can bring a chosen depth into focus.

We start by picking some reference image. Each image has a corresponding (u, v) coordinate that will let us determine its offset from the reference. We scale each offset by some constant c and shift our images according to the scaled offsets. When we average the shifted images, we will see a different depth in focus.
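The shift-and-average step above can be sketched as follows. This is a simplified version with hypothetical names (`images` as an (N, H, W) grayscale stack, `uv` as the (N, 2) camera-plane coordinates): it rounds to integer-pixel shifts via `np.roll`, whereas a real implementation would interpolate subpixel shifts.

```python
import numpy as np

def refocus(images, uv, c, ref_idx=0):
    """Shift each sub-image toward the reference by c times its (u, v)
    offset, then average. images: (N, H, W) stack; uv: (N, 2) coords.
    These names and the integer-pixel shift are simplifying assumptions."""
    ref_u, ref_v = uv[ref_idx]
    acc = np.zeros(images[0].shape, dtype=np.float64)
    for img, (u, v) in zip(images, uv):
        du = int(round(c * (u - ref_u)))  # horizontal shift in pixels
        dv = int(round(c * (v - ref_v)))  # vertical shift in pixels
        acc += np.roll(img, (dv, du), axis=(0, 1))
    return acc / len(images)
```

With c = 0 no image moves, so this reduces to the plain average; varying c sweeps the focal plane through the scene.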

The following are some sample images of varying depth focus.

Left to right: c = -0.2; c = 0 (same as averaging all the images unshifted); c = 0.5

The following is a gif created with different "depth focuses".

Mimicking depth refocusing

Aperture Adjustment

The idea here is to fix some reference image and depth c from the previous part. Now, instead of averaging across all the images, we average only a neighborhood of images surrounding the reference image. Specifically, we pick some "radius" r and a reference image indexed by (x, y), and we average the images whose indices fall in [x - r, x + r] and [y - r, y + r], for the x and y indices respectively.
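A minimal sketch of this neighborhood average, assuming the (already depth-shifted) lightfield is stored as a (17, 17, H, W) array indexed by camera-plane position; the storage layout and names here are assumptions, not the only way to organize the data.

```python
import numpy as np

def aperture_average(grid, r, ref=(8, 8)):
    """Average the images within 'radius' r of the reference index.
    grid: (Gx, Gy, H, W) array of sub-images, assumed already shifted
    for the chosen depth c. r = 0 returns just the reference image."""
    x0, y0 = ref
    x_lo, x_hi = max(0, x0 - r), min(grid.shape[0], x0 + r + 1)
    y_lo, y_hi = max(0, y0 - r), min(grid.shape[1], y0 + r + 1)
    block = grid[x_lo:x_hi, y_lo:y_hi]
    # Flatten the neighborhood into one stack and average it.
    return block.reshape(-1, *grid.shape[2:]).mean(axis=0)
```

Growing r averages over a wider patch of the camera plane, mimicking a larger aperture: more defocus blur away from the focal plane.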

The images below use image (8, 8) as the reference image and c = 0.2.

Left to right: averaging 1 image; averaging 9x9 images; averaging 17x17 images

The following is a gif created with different "aperture sizes".

Mimicking increasing and decreasing aperture size

Summary

Lightfields are trippy. This is very reminiscent of our first project where we had to align the color channels, except now we care about aligning around a certain point in the image perfectly instead of the whole image perfectly. I couldn't imagine someone doing this by hand; that'd be painstaking work (lol thank god for computers amirite). Conceptually, this is really cool. It means that you could generate roughly all the same images that you would in that scene with a single camera (and lots of fiddling with aperture sizes and focusing).

Citations

Styling of this page is modified from bettermotherfuckingwebsite.com.

Source for lightfield images: Stanford Light Field Archive (specifically, the rectified images)