CS194-26: Image Manipulation and Computational Photography

Lightfield Camera

Denis Li, cs194-26-aem



Overview

In this project, I used lightfield data to refocus images and change the aperture after the photos were taken.

Depth Refocusing

Objects far away from the camera do not shift position significantly when the camera moves, while nearby objects shift significantly across images. Averaging all the images in the grid without any shifting therefore produces an image that is sharp for the far-away objects but blurry for the nearby ones. If instead we shift each image proportionally to its distance from the center image before averaging, we can move the focus to different depths.
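The shift-and-average step described above can be sketched as follows. This is a minimal sketch, not the exact project code: the function name `refocus`, the `positions` list of per-view grid coordinates, and the depth parameter `alpha` are my assumptions about how the data is organized.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(images, positions, alpha):
    """Average a grid of sub-aperture views, shifting each one
    proportionally to its offset from the center view so that a
    chosen depth comes into focus.

    images    : list of HxWxC float arrays (one per grid position)
    positions : list of (u, v) grid coordinates for each view
    alpha     : depth parameter; alpha = 0 focuses on far objects
    """
    positions = np.asarray(positions, dtype=float)
    center = positions.mean(axis=0)
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (u, v) in zip(images, positions):
        du, dv = center[0] - u, center[1] - v
        # shift this view toward the center view, scaled by alpha
        acc += nd_shift(img, (alpha * dv, alpha * du, 0), order=1)
    return acc / len(images)
```

Sweeping `alpha` over a small range then produces the sequence of images focused at different depths.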


Refocusing through shifting

Aperture Adjustment

The aperture is the size of the opening in the camera lens that lets light in. We can simulate aperture changes by varying the number of images we average, based on their distance from the center image: averaging all the images results in a large aperture, while averaging only images close to the center image results in a small aperture.
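The selection step above can be sketched as follows. This is an illustrative sketch, assuming the same `(u, v)` grid coordinates as before; the function name `adjust_aperture` and the `radius` parameter are hypothetical.

```python
import numpy as np

def adjust_aperture(images, positions, radius):
    """Simulate an aperture by averaging only the views whose grid
    position lies within `radius` of the center view.

    A large radius includes many views (large aperture, shallow
    depth of field); radius 0 keeps only the center view.
    """
    positions = np.asarray(positions, dtype=float)
    center = positions.mean(axis=0)
    selected = [img for img, p in zip(images, positions)
                if np.linalg.norm(p - center) <= radius]
    return np.mean(selected, axis=0)
```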

Changing aperture

Bells & Whistles: Interactive Refocusing

Given a point, we can search over different shift values to see which one best aligns the images at that region. To speed things up, instead of aligning all the images, we can compare just two of them using normalized cross-correlation: if two images are aligned at a point, the corresponding shift will align all the images at that point.


Point selected to focus
Focused at point

Summary

Lightfields are pretty cool! They retain much more data about a scene, which can be used to manipulate focus and aperture after capture. But the data is harder to gather, and refocusing is computationally intensive.