CS 194-26: Image Manipulation and Computational Photography, Fall 2017

Project 5: Lightfield Camera

Justin Mi, CS194-26-afy



Overview

This project introduces us to the lightfield camera, which captures not only the intensity of the light in a scene, as a regular camera does, but also the direction each light ray is traveling. Using the light field data provided by Stanford's Light Field Archive, we achieve two effects: first, generating images that focus on the same scene at different depths; second, doing the same while adjusting the aperture size of the viewer.




Depth Refocusing

In part 1, we use a set of 289 images of an object, taken on a 17x17 grid of camera positions, to create the effect of focusing at different depths. Since each image is taken from a slightly different angle/location, naively averaging the images together (see the c = 0.0 image) leaves only one depth plane in focus, while the areas in front of and beyond it are blurry. By algorithmically shifting the images to align on different focal planes before averaging, we can move the focused region of the object. We roll each image by (c*(v - y), c*(u - x)), where (y, x) is the grid position of the center image, (v, u) is the grid position of the image being shifted, and c is a weight that controls the strength of the shift, and therefore the depth that ends up in focus.

[Images: results for c = -0.1, c = 0.0, and c = 0.5; animation of the effects of changing c from -0.5 to 0.5.]
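
Here is a minimal sketch of this shift-and-average procedure, assuming the sub-aperture images have already been loaded as numpy arrays and their (v, u) grid coordinates parsed from the archive's filenames. The function name and argument layout are illustrative, not taken from my actual code.

    import numpy as np

    def depth_refocus(images, positions, c, center=(8, 8)):
        """Shift each sub-aperture image toward the center view, then average.

        images    -- list of HxWx3 numpy arrays from the 17x17 grid
        positions -- list of (v, u) grid coordinates, one per image
        c         -- weight controlling the shift strength / focus depth
        """
        y, x = center  # grid position of the center image
        out = np.zeros(images[0].shape, dtype=np.float64)
        for img, (v, u) in zip(images, positions):
            # Roll by c*(v - y) rows and c*(u - x) columns, rounded to pixels.
            dy = int(round(c * (v - y)))
            dx = int(round(c * (u - x)))
            out += np.roll(img, shift=(dy, dx), axis=(0, 1))
        return out / len(images)

Note that np.roll only gives integer-pixel shifts, which is enough on this grid; a sub-pixel version could use scipy.ndimage.shift instead.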



Aperture Refocusing

In part 2, we simulate having an aperture focused on a single common point using the lightfield camera. Given the 17x17 grid, averaging the images over a large window mimics having a larger aperture, because more light rays, coming from more of the images, contribute to the result. Conversely, averaging over a smaller window mimics having a smaller aperture, because fewer light rays are accounted for, since there is less data from fewer images. To implement this, we set the image at (8, 8) to be the center image of the grid, and select which images to average by taking all images within a given radius of the center. The radius is square (it bounds both grid coordinates), so, for example, a radius-1 aperture includes the images (7, 7), (7, 8), (7, 9), (8, 7), (8, 8), (8, 9), (9, 7), (9, 8), (9, 9).

[Images: results for radius = 0, radius = 3, and radius = 6; animation of the effects of changing the radius from 0 to 6.]
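
A sketch of the aperture selection under the same assumptions, with the images kept in a dictionary keyed by their (v, u) grid coordinates. Combining this selection with the shifting above adjusts the depth of focus and the aperture size at the same time.

    import numpy as np

    def aperture_average(images, radius, center=(8, 8)):
        """Average the sub-aperture images inside a square window around
        the center view, mimicking an aperture of the given radius.

        images -- dict mapping (v, u) grid coordinates to HxWx3 arrays
        radius -- half-width of the square window (0 = center image only)
        """
        y, x = center
        selected = [img for (v, u), img in images.items()
                    if abs(v - y) <= radius and abs(u - x) <= radius]
        # A radius-r window averages (2r + 1)^2 images.
        return sum(np.asarray(img, dtype=np.float64) for img in selected) / len(selected)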



Summary

I learned a lot about how lightfields work from this project. It blew my mind that there is an image-capturing technique built on the idea that, by capturing more data than a single frame, you can post-process all that extra data to generate new or enhanced perspectives of a single scene. This idea is even being implemented in smartphone cameras today. Google recently released the Pixel 3, which has a feature that uses the extra data generated by the natural "jitter" of your hands while taking a photo to add extra resolution to the image. It acts like a mini lightfield camera, capturing minute differences between the frames generated by the natural shaking of the phone. That's pretty dope.