Programming Project 5 (proj5)
CS194-26: Image Manipulation and Computational Photography

Lightfield Camera:
Depth Refocusing and Aperture Adjustment with Light Field Data
Due Date: Tuesday, October 31, 2017, 11:59 PM

Overview

As this paper by Ng et al. (Ren Ng is the founder of Lytro and a Professor at Berkeley!) demonstrated, capturing multiple images over a plane orthogonal to the optical axis makes it possible to achieve complex effects (see this gallery - hover over different parts of the images) using very simple operations like shifting and averaging. The goal of this project is to reproduce some of these effects using real lightfield data.

Details

The Stanford Light Field Archive has some sample datasets comprising multiple images taken over a regularly spaced grid. You can use any of the available datasets for this project (use the rectified images). You are also encouraged to go over Section 4 of the paper mentioned above to truly appreciate how the very simple idea of capturing images from multiple positions, combined with elementary operations, can lead to such beautiful results.
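If you work in Python, a minimal loader sketch along the following lines may be a useful starting point. It assumes the rectified images sit in one directory, that the grid is 17x17 (typical of the Stanford gantry sets), and that each filename encodes the grid row and column as its first two integer fields; the function name load_lightfield and those assumptions are illustrative, so check them against the dataset you pick.

import re
from pathlib import Path

import numpy as np
from skimage import io

def load_lightfield(directory, grid_size=17):
    # Load a rectified light field into a (grid_size, grid_size, H, W, 3) float array.
    # Assumes each filename encodes the grid row and column as its first two
    # integer fields (e.g. out_YY_XX_... .png); adjust the parsing if your
    # dataset is named differently.
    grid = [[None] * grid_size for _ in range(grid_size)]
    for path in sorted(Path(directory).glob("*.png")):
        row, col = (int(f) for f in re.findall(r"\d+", path.name)[:2])
        grid[row][col] = io.imread(path).astype(np.float32) / 255.0
    return np.array(grid)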

The project has the following two parts, both of which use the data above:

1) Depth Refocusing (30 pts):

Objects that are far away from the camera do not change their position significantly when the camera moves around while keeping the optical axis direction unchanged. Nearby objects, on the other hand, vary their position significantly across images. Averaging all the images in the grid without any shifting therefore produces an image which is sharp around the far-away objects but blurry around the nearby ones. Conversely, shifting the images 'appropriately' before averaging allows one to focus on objects at different depths.

In this part of the project, you will implement this idea to generate multiple images which focus at different depths. To get the best effects, you should use all the grid images for averaging. The effects should be similar to what you observe when you change the depth of focus in the dataset's online viewing tool (example) under the 'Full Aperture' setting.
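A minimal sketch of this shift-and-average idea is given below, assuming the light field has already been loaded into a (rows, cols, H, W, 3) array (e.g. by a loader like the one sketched earlier). The helper name refocus, the sign convention, and the useful range of the shift scale alpha are assumptions to verify and tune against your chosen dataset.

import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(lightfield, alpha):
    # Shift-and-average refocusing over a (rows, cols, H, W, 3) light field.
    # alpha scales the per-image shift: alpha = 0 averages the images as-is
    # (far objects sharp); increasing alpha brings nearer objects into focus.
    # The useful range and sign depend on the dataset, so sweep alpha and look.
    rows, cols = lightfield.shape[:2]
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    out = np.zeros_like(lightfield[0, 0])
    for r in range(rows):
        for c in range(cols):
            # Shift each sub-aperture image toward the grid centre by an amount
            # proportional to its offset from the centre, then accumulate.
            dy, dx = alpha * (cy - r), alpha * (cx - c)
            out += nd_shift(lightfield[r, c], (dy, dx, 0), order=1, mode="nearest")
    return out / (rows * cols)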

 

2) Aperture Adjustment (20 pts):

See this image of San Francisco, which was produced by averaging multiple images collected by a satellite. Averaging a large number of images sampled over the grid perpendicular to the optical axis mimics a camera with a much larger aperture (can you think why?). Using fewer images mimics a smaller aperture. In this part of the project, you are required to generate images which correspond to different apertures while focusing on the same point.
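One possible way to sketch this, again assuming a (rows, cols, H, W, 3) array and reusing the shift from the refocusing sketch above: average only the sub-aperture images whose grid positions lie within some radius of the grid centre. The helper name adjust_aperture and the circular selection rule are illustrative choices, not requirements.

import numpy as np
from scipy.ndimage import shift as nd_shift

def adjust_aperture(lightfield, radius, alpha=0.0):
    # Average only the sub-aperture images within `radius` grid units of the
    # centre of the grid: radius = 0 keeps just the centre image (smallest
    # aperture), while a radius covering the whole grid reproduces the
    # full-aperture average. The optional alpha applies the same shift as in
    # refocus() so every aperture stays focused on the same depth.
    rows, cols = lightfield.shape[:2]
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    acc = np.zeros_like(lightfield[0, 0])
    count = 0
    for r in range(rows):
        for c in range(cols):
            if (r - cy) ** 2 + (c - cx) ** 2 <= radius ** 2:
                dy, dx = alpha * (cy - r), alpha * (cx - c)
                acc += nd_shift(lightfield[r, c], (dy, dx, 0), order=1, mode="nearest")
                count += 1
    return acc / count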

 

3) Summary (0 pts)

Tell us what you learned about lightfields from this project!

 

Deliverables

For this project you must turn in both your code and a project webpage as described here.

This assignment will be graded out of 50 points, according to the point breakdown given in the part descriptions above.

Bells & Whistles (Extra Credit)

Acknowledgements

We thank Frank Dellaert for his help in adapting this assignment from his Computational Photography class.