CS 194-26 (Computational Photography) Project 5 - CS 194-26-ael



Overview

A lightfield camera is a camera that, instead of capturing light at a single point (e.g. with a pinhole and a single photosensor), contains an array of devices or sensors able to capture light. One way to implement this is with an array of individual cameras:

Normally, we can model the captured image as a 3-dimensional array of values (height, width, and, say, RGB color channels). A lightfield camera allows us to additionally capture information about the directionality of light, i.e. the field of light present in the scene (hence the name). Since each camera captures the scene from a slightly different perspective, in aggregate we obtain this additional information.
Having this additional information allows us to do some interesting things, as seen in the next part.
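To make this concrete, the full set of captured images can be stored as a single 5-dimensional array: two grid dimensions (which camera), two pixel dimensions, and color. Below is a minimal sketch in Python/NumPy; the square grid size (17x17, as in the Stanford Light Field Archive) and the load_lightfield helper are assumptions for illustration, not part of the original write-up.

    import numpy as np

    def load_lightfield(images, grid_size=17):
        """images: list of HxWx3 arrays, ordered row-major by grid position."""
        stack = np.stack(images)  # (grid_size**2, H, W, 3)
        # Reshape into (grid_row, grid_col, H, W, 3): two grid dimensions,
        # two pixel dimensions, and RGB color.
        return stack.reshape(grid_size, grid_size, *stack.shape[1:])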

Depth Refocusing

Consider how light behaves depending on its source: light originating far from the point of capture changes very little with slight changes in the position of the sensor, whereas light originating nearby is strongly affected. Therefore, faraway points appear in roughly the same place in all cameras in the array, whereas nearby points appear in very different places. However, some of these points will happen to overlap, so if all the images are averaged, some portion of the image will be "in focus" while the rest will be out of focus.
Imagine then that we manually shift the images from the cameras in the array before averaging. Then, we can change the part of the image that is in focus. This is what we mean by "depth refocusing".
In order to accomplish this, we can select some image as the "center". We could then shift the images from the remaining cameras to fully align with this center. However, doing so would likely completely ruin the focus of the image (no part of it would be in focus!). Instead, we can interpolate along this translation and only "go partway". The following image shows the result of shifting each image 5%, 10%, 15%, and so on up to 50% of the way along its translation to the "center" image (defined as the camera in the center of the array).
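Below is a minimal sketch of this shift-and-average procedure, assuming the full per-camera alignment translations are already known (e.g. derived from the camera positions recorded with the dataset); the refocus function and its offsets input are hypothetical names for illustration.

    import numpy as np
    from scipy.ndimage import shift

    def refocus(lf, offsets, alpha):
        """Shift each sub-aperture image partway toward alignment, then average.

        lf:      (grid_rows, grid_cols, H, W, 3) array of sub-aperture images
        offsets: (grid_rows, grid_cols, 2) array of (dy, dx) translations that
                 would fully align each image with the center image
        alpha:   fraction of each translation to apply (e.g. 0.05 to 0.50)
        """
        rows, cols = lf.shape[:2]
        out = np.zeros(lf.shape[2:], dtype=np.float64)
        for u in range(rows):
            for v in range(cols):
                dy, dx = alpha * offsets[u, v]
                # shift() interpolates, so fractional-pixel shifts are handled.
                out += shift(lf[u, v].astype(np.float64), (dy, dx, 0))
        return out / (rows * cols)

Sweeping alpha from 0.05 to 0.50 then yields the sequence of refocused images described above, with the plane of focus moving through the scene.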

Aperture Adjustment

In a camera, the aperture controls how much light is let in. With a large aperture, a lot of light is captured and the image is generally less focused (depending on the focal point). With a small aperture (approximating a pinhole), the light is less diffused and the image is generally sharper.
We can simulate this with a lightfield camera: using a smaller number of centrally clustered cameras captures less light, whereas using more, spread-out cameras captures more of the light field.
Therefore, we can set the "radius" of the lightfield camera by selecting a subset of the cameras centered around the middle camera. Specifically, we choose a "radius" for the aperture, compute each camera's distance from the center camera (as determined by its position in the camera array), and keep only the cameras whose distance is no greater than the radius.
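A minimal sketch of this selection, assuming the lightfield is stored as the 5-dimensional array from the earlier sketch and that distance is measured in grid units (the aperture_average function is a hypothetical name):

    import numpy as np

    def aperture_average(lf, radius):
        """Average only the cameras within `radius` of the center of the grid.

        radius 0 keeps just the center camera (pinhole-like); larger radii
        include more cameras, simulating a wider aperture.
        """
        rows, cols = lf.shape[:2]
        cy, cx = rows // 2, cols // 2
        selected = [lf[u, v]
                    for u in range(rows)
                    for v in range(cols)
                    # Euclidean distance from the center camera, in grid units.
                    if np.hypot(u - cy, v - cx) <= radius]
        return np.mean(selected, axis=0)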
The following image is produced by selecting radii from 0 to 7.

This effect is even easier to observe with an image that is more centered: