CS194-26 Project 5: Lightfield Camera
Joey Barreto
Depth Refocusing
All of the images below are taken of the same scene from a grid of cameras with parallel but translated optical axes. When the images from all of the cameras are averaged, the result has the background in focus and a blurry foreground. This happens because features in the background exhibit less parallax: they appear to move less when the viewer is translated, and the closer a feature is to the foreground, the greater its parallax. Knowing each camera's translation relative to the center camera lets us produce images refocused at different depths. If (dx, dy) is the shift from a camera to the center camera, we only need to shift that camera's image by (c*dx, c*dy): if a camera is shifted, say, up and to the right, the imaged features move down and to the left, and this shift undoes that apparent motion. The constant c is chosen manually and selects the depth that ends up in focus. A sketch of this shift-and-average procedure and some example results are shown below.
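Below is a minimal sketch of this shift-and-average refocusing. The variable names (`images`, `offsets`) and the use of scipy.ndimage.shift are illustrative assumptions, not the exact code used for this project.

```python
# Minimal sketch of shift-and-average refocusing (names and the use of
# scipy.ndimage.shift are assumptions for illustration, not the project code).
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(images, offsets, c):
    """Average all sub-aperture images, shifting each by c times its offset.

    images  : list of H x W x 3 float arrays, one per camera
    offsets : list of (dx, dy) camera translations relative to the center camera
    c       : manually chosen constant selecting the depth in focus
    """
    acc = np.zeros_like(images[0], dtype=float)
    for img, (dx, dy) in zip(images, offsets):
        # Shift rows by c*dy and columns by c*dx (color axis left unshifted)
        # to undo the apparent motion of features at the chosen depth.
        acc += nd_shift(img, (c * dy, c * dx, 0), order=1, mode='nearest')
    return acc / len(images)
```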


Unshifted average of images

c = -0.138

c = 0.224

c = 0.586

Aperture Adjustment
Equipped with the translation data, we can also mimic the effect of changing the aperture of a single camera. A grid of cameras approximates one camera with a larger aperture. Including more cameras in the average introduces parallax blurring, which is analogous to admitting light from more oblique angles that focuses at different points. Using fewer images in the average reduces this blur, as if we were only admitting more collimated rays, which focus more closely. In the examples below, I used images focused at roughly the center depth, with c = 0.224. The radius indicates the size of the subgrid of cameras used for averaging, measured from the center camera (e.g. radius = 4 uses images from a 9x9 subgrid centered at camera [8,8] of the full 17x17 grid), as sketched below.
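As a rough sketch, building on the refocus function above, the average can be restricted to a square subgrid of cameras around the center; the grid layout and names here are assumptions for illustration, not the project's actual data structures.

```python
# Sketch of aperture simulation on the 17x17 grid, reusing refocus() above.
# `grid` is assumed to map (row, col) camera indices to (image, offset) pairs;
# the center camera is taken to be (8, 8). These names are illustrative.
def aperture_average(grid, c, radius, center=(8, 8)):
    """Average only cameras within `radius` grid steps of the center camera."""
    imgs, offs = [], []
    for (row, col), (img, off) in grid.items():
        if max(abs(row - center[0]), abs(col - center[1])) <= radius:
            imgs.append(img)
            offs.append(off)
    # A small radius mimics a small aperture (little parallax blur); a large
    # radius admits more oblique views and behaves like a wider aperture.
    return refocus(imgs, offs, c)
```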


radius = 0

radius = 4

radius = 8

Summary
I learned that lightfield data lets you approximate effects of a single camera, such as refocusing and aperture adjustment, after capture, and that parallax can be used artistically.