CS 194-26 Project 5: Lightfield Camera

Nick Titterton (cs194-26-ago) | October 30, 2018

Overview

Suppose you have a 2D array of cameras that all photograph a scene at the same instant: each camera sees the scene from a slightly shifted perspective. If you simply average the images together, far-away objects, whose positions barely change from one side of the array to the other, stay sharp, while closer objects, which shift noticeably between views, come out blurry. Can we exploit this abundance of data to manipulate these effects after the fact?
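As a minimal sketch of that baseline effect (the function name and the assumption that each view is loaded as a float image array are mine, not from the original writeup):

```python
import numpy as np

def naive_average(images):
    """Average all sub-aperture views with no alignment.

    Distant objects, whose projections barely move between cameras,
    stay sharp; nearby objects, which shift between views, smear out.
    """
    stack = np.stack([im.astype(np.float64) for im in images])
    return stack.mean(axis=0)
```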

Refocusing an image

If you have the measured offsets between the cameras in the array, you can shift each image before averaging to bring closer objects into focus (and, in turn, push more distant objects out of focus). Fully correcting for the offsets would align the plane directly in front of the camera array, which is probably closer than the object you actually care about, so the whole image would look blurry; the shift therefore needs to be scaled by a tunable parameter.

For the chess image above, I first zero-centered the offsets (subtracted their mean, so the middle of the array serves as the reference view), then multiplied them by an alpha value between -0.1 and 0.5 before shifting and averaging the images. As alpha sweeps from -0.1 to 0.5, the in-focus region of the image moves closer to the camera, as expected.
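Here is a minimal sketch of this shift-and-average refocusing, assuming each view comes with its (u, v) grid position; the function name, the positions format, and the use of scipy.ndimage.shift are my choices, not necessarily the original implementation:

```python
import numpy as np
from scipy import ndimage

def refocus(images, positions, alpha):
    """Shift each view toward the center camera, scaled by alpha, then average.

    images:    list of (H, W, 3) float arrays, one per camera
    positions: (N, 2) array of each camera's (u, v) position in the grid
    alpha:     focus parameter; larger values focus on closer objects
    """
    # Zero-center the offsets so the middle of the array is the reference view.
    offsets = positions - positions.mean(axis=0)
    out = np.zeros_like(images[0], dtype=np.float64)
    for img, (du, dv) in zip(images, offsets):
        # order=1 (bilinear) interpolation is much faster than the default
        # order=3 cubic spline, with little visible difference.
        out += ndimage.shift(img, (alpha * dv, alpha * du, 0), order=1)
    return out / len(images)
```

With alpha = 0 this reduces to a plain average; sweeping alpha moves the focal plane through the scene.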

Adjusting the aperture of an image

A larger aperture (the size of the opening that determines how much light reaches the sensor) gives a shallower depth of field, so objects away from the focal plane appear blurrier, while a smaller aperture keeps more of the scene sharp. We can simulate a sort of "discrete" aperture by averaging all the photos within a certain radius of the center of the camera array.

In the above GIF, I used radii 1 through 8 on the 2D array: the radius-1 case averages 5 photos (the center plus its 4 adjacent neighbors), radius 2 averages 13 photos, and so on. The full array is 17 by 17, which is why the radius tops out at 8.
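A sketch of that discrete aperture, assuming the views sit in a 17-by-17 nested grid indexed by row and column (the L1 "diamond" neighborhood matches the 5- and 13-photo counts above; the names here are mine):

```python
import numpy as np

def aperture_average(grid, radius):
    """Average all views within L1 distance `radius` of the center camera.

    grid:   nested list where grid[i][j] is the (H, W, 3) image at row i, col j
    radius: aperture radius in grid units; radius=1 averages 5 views,
            radius=2 averages 13, and so on (2*r*r + 2*r + 1 in general)
    """
    n = len(grid)
    c = n // 2  # center index (8 for a 17x17 grid)
    views = [grid[i][j].astype(np.float64)
             for i in range(n) for j in range(n)
             if abs(i - c) + abs(j - c) <= radius]
    return sum(views) / len(views)
```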

Summary

This project gave me good conceptual practice with focus and especially aperture in an image. I also learned how much the order of spline interpolation matters for runtime!