CS194 Project 5: Lightfield Camera

Jacky Tian

Overview

Depth of focus and aperture are normally fixed at the moment a conventional camera takes a picture. However, what if the picture was already taken? In fact, both can still be controlled after the fact if we have multiple images of the scene taken from slightly different positions: the parallax between these views encodes depth, and by shifting and averaging them we can control the focal depth and effective aperture of the final output image.
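As a minimal sketch of the basic operation, here is a naive average over a stack of views. The function name and the assumption that the images are already loaded as equal-size numpy arrays are mine, not part of the original writeup.

```python
import numpy as np

def average_images(images):
    """Naive average of all sub-aperture views.

    `images` is a list of HxWx3 arrays, one per camera position.
    With no shifting applied, objects far from the cameras (which
    move least between views) come out sharpest in the average.
    """
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    return stack.mean(axis=0)
```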

Depth

Averaging all of the images as-is focuses on the distant chess pieces, since distant objects shift the least between neighboring cameras. However, if we first shift each image by a predetermined amount, namely its camera's offset from the center of the grid multiplied by a controlled scale factor alpha, we can align the images so that the average focuses somewhere else. Sweeping alpha sweeps the focal plane through the scene.
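The shift-and-average step can be sketched as follows. This is a minimal version under my own assumptions: images are equal-size numpy arrays, `positions` holds each camera's (x, y) grid coordinates, and shifts are rounded to whole pixels using `np.roll` (a real implementation would likely use subpixel interpolation).

```python
import numpy as np

def refocus(images, positions, alpha):
    """Refocus a lightfield by shifting each view toward the grid
    center by alpha times its camera offset, then averaging.

    images:    list of HxWx3 arrays, one per camera.
    positions: list of (x, y) camera coordinates on the grid.
    alpha:     scale factor selecting the focal plane.
    """
    center = np.mean(positions, axis=0)
    acc = np.zeros_like(np.asarray(images[0], dtype=float))
    for img, (x, y) in zip(images, positions):
        dx = int(round(alpha * (x - center[0])))
        dy = int(round(alpha * (y - center[1])))
        # roll rows by dy and columns by dx to align this view
        acc += np.roll(np.roll(np.asarray(img, dtype=float), dy, axis=0),
                       dx, axis=1)
    return acc / len(images)
```

With alpha = 0 this reduces to the plain average; changing alpha's sign and magnitude moves the plane of focus nearer or farther.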

Aperture

By averaging only a fraction of the images, we can create the effect of controlling aperture. I noticed that each image alone was sharp everywhere, but as images from nearby cameras were added in, out-of-focus regions became blurrier and blurrier. Thus, starting from the center of the camera grid, the (8, 8) position, we can control the effective aperture by averaging only the images taken from cameras within a radius r: a small r acts like a narrow aperture with a large depth of field, and a large r acts like a wide-open one.
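A simple sketch of this selection-then-average step, again under my own assumptions about how the images and their grid positions are stored (the (8, 8) center comes from the writeup's 17x17-style grid):

```python
import numpy as np

def vary_aperture(images, positions, radius, center=(8, 8)):
    """Mimic aperture control by averaging only the views whose
    camera lies within `radius` of the central grid position.

    images:    list of HxWx3 arrays, one per camera.
    positions: list of (x, y) integer grid coordinates.
    radius:    grid-distance cutoff; larger acts like a wider aperture.
    """
    chosen = [np.asarray(img, dtype=float)
              for img, (x, y) in zip(images, positions)
              if (x - center[0]) ** 2 + (y - center[1]) ** 2 <= radius ** 2]
    return sum(chosen) / len(chosen)
```

Setting radius to 0 returns the central image alone (everything in focus), while growing the radius blurs whatever lies off the focal plane.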

Summary

I learned that lightfield effects like refocusing and aperture control can be achieved by simply shifting and averaging a grid of images. Obviously, that comes at the expense of precisely positioned cameras and careful picture-taking, so in practice it's probably easier to get a lightfield camera that does all of this automatically.