Project 5: Lightfield Camera

CS 194-26: Computational Photography

Caroline Moore – cs194-26-aew

For this project, I used a grid of images to retroactively compute images with a different focus point or aperture size than the originals were captured with. The images I used came from the Stanford Light Field Archive, which captures each scene with a 17x17 grid of cameras, producing 289 images.
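
Below is a minimal sketch of how such an image grid might be loaded, assuming the rectified images sit in one directory and that each filename encodes the camera's row and column in the grid; the exact filename pattern varies by dataset, so the regex and path here are assumptions to adapt.

    import glob
    import re
    import numpy as np
    from skimage import io

    GRID = 17  # 17x17 camera grid -> 289 images

    def load_lightfield(pattern="chess/out_*.png"):
        """Return a GRID x GRID list of images indexed by camera row/column."""
        grid = [[None] * GRID for _ in range(GRID)]
        for path in glob.glob(pattern):
            # Assumed filename format: out_<row>_<col>_..., e.g. out_03_12_...
            m = re.search(r"out_(\d+)_(\d+)_", path)
            row, col = int(m.group(1)), int(m.group(2))
            grid[row][col] = io.imread(path).astype(np.float64) / 255.0
        return grid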

Depth Refocusing

When two images of an object are taken from slightly different locations, the far parts of the scene will align but the close parts will appear in different places, because nearby objects exhibit more parallax between viewpoints than distant ones. This means that if the two images are averaged, the close parts will look blurry while the far parts stay sharp. If we instead shift one of the images so that the close parts align and the far parts do not, averaging the images will result in an image where the close part is sharp and the far part is blurry. We can extend this concept to the 289 images in the Stanford Light Field Archive to produce averaged images with different focus points.

Each image has an $(x, y)$ value that indicates its camera's position in the grid. I shifted each image by $C \cdot (x - x_0,\ y - y_0)$, where $x_0$ is the $x$ value of the first image, $y_0$ is the $y$ value of the first image, and $C$ is a scaling factor; varying $C$ moves the focal plane through the scene. For example, you can see the clear difference in focus between the two images below. The left image is focused at the bottom of the stone and the right image is focused at the top.
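
The refocusing step itself is a shift-and-average loop. A minimal sketch, using each camera's grid row and column as its position value and the center camera as the reference (any reference camera works; the choice only translates the output):

    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def refocus(grid, c):
        """Shift each sub-aperture image by c times its offset from the
        reference camera, then average all 289 shifted images."""
        n = len(grid)                        # 17 for the Stanford datasets
        ref = (n - 1) / 2.0                  # center camera as reference
        acc = np.zeros_like(grid[0][0])
        for row in range(n):
            for col in range(n):
                dy, dx = c * (row - ref), c * (col - ref)
                # order=1 -> bilinear interpolation; no shift on color axis
                acc += nd_shift(grid[row][col], (dy, dx, 0), order=1)
        return acc / (n * n)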

I also made videos that sweep the focus smoothly through the scene by rendering refocused images for many values of $C$.
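
One way to produce such a video is to render a refocused frame for each $C$ in a sweep and save the frames as a GIF; the range and number of steps below are illustrative assumptions, not the values used for the results above.

    import numpy as np
    import imageio

    def focal_sweep(grid, c_values, path="refocus_sweep.gif"):
        frames = [(np.clip(refocus(grid, c), 0, 1) * 255).astype(np.uint8)
                  for c in c_values]
        imageio.mimsave(path, frames, duration=0.1)  # 0.1 s per frame

    # e.g. sweep the focal plane from one end of the scene to the other
    focal_sweep(load_lightfield(), np.linspace(-0.2, 0.6, 25))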

Aperture Adjustment

Similarly, we can average images from the grid to produce images with different aperture sizes. Averaging a large number of images mimics a camera with a large aperture, while averaging a small grid of images mimics a camera with a small aperture: each camera sees the scene from a slightly different position, so combining more of them gathers light from a wider range of directions, just as a wider aperture does.
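
A sketch of this idea, reusing the shift from refocus() above and averaging only the cameras within a given radius of the grid center; the radius here counts grid cells and is an assumed parameter:

    def adjust_aperture(grid, radius, c=0.0):
        """Average only the images within `radius` grid cells of the center
        camera; a larger radius mimics a larger aperture. Each image is
        shifted as in refocus() so the focal plane stays fixed."""
        n = len(grid)
        ref = (n - 1) / 2.0
        acc, count = np.zeros_like(grid[0][0]), 0
        for row in range(n):
            for col in range(n):
                if max(abs(row - ref), abs(col - ref)) <= radius:
                    dy, dx = c * (row - ref), c * (col - ref)
                    acc += nd_shift(grid[row][col], (dy, dx, 0), order=1)
                    count += 1
        return acc / count

    # radius=0 keeps only the center image (smallest aperture);
    # radius=8 uses all 289 images (largest aperture).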