Project 5: Lightfield Camera

Ankit Patel, Fall 2017

Overview

The premise of this project was to explore lightfield cameras and the interesting computational photography problems they can help solve.

Depth Refocusing

In this part, we implemented a simple algorithm to refocus lightfield photos at various depths. The algorithm relies on the observation that distant objects barely move between the sub-aperture images in the lightfield, while nearby objects shift noticeably from view to view. Averaging the images as-is therefore produces an image focused on the far objects. To focus on closer objects, we shift each image toward the center of the grid in proportion to its distance from the center, then average.
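The shift-and-average procedure above can be sketched as follows. This is a minimal illustration, not the project's actual code: it assumes the lightfield is given as a list of equal-sized sub-aperture images with their (u, v) grid coordinates, and uses integer-pixel shifts for simplicity (sub-pixel interpolation would give smoother results).

```python
import numpy as np

def refocus(images, coords, alpha):
    """Shift-and-average refocusing over a lightfield.

    images: list of HxWx3 float arrays (sub-aperture views)
    coords: list of (u, v) grid positions, one per image
    alpha:  depth parameter; 0 averages directly (focus on far
            objects), larger values focus on nearer objects
    """
    # center of the camera grid
    cu = np.mean([c[0] for c in coords])
    cv = np.mean([c[1] for c in coords])

    acc = np.zeros_like(images[0], dtype=float)
    for img, (u, v) in zip(images, coords):
        # shift each view toward the grid center, proportional
        # to its offset from the center (rounded to whole pixels)
        du = int(round(alpha * (u - cu)))
        dv = int(round(alpha * (v - cv)))
        acc += np.roll(np.roll(img, du, axis=0), dv, axis=1)
    return acc / len(images)
```

Sweeping `alpha` over a range of values produces the focal stack shown below.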

The images below are arranged left to right, top to bottom, from the closest focus, to the farthest focus.

Here is a gif showing the slowly changing focus.

Aperture

Adjusting the aperture is a relatively simple extension of the previous algorithm: instead of shifting and averaging all the images, we shift and average only the images within a certain radius of the center of the grid. The smaller the radius, the wider the depth of field we get, since averaging fewer views mimics a smaller aperture and blurs out-of-focus regions less.
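A sketch of this extension, under the same assumptions as before (list of sub-aperture views with (u, v) grid coordinates, integer-pixel shifts); the function and parameter names are illustrative, not from the original code:

```python
import numpy as np

def aperture_average(images, coords, radius, alpha=0.0):
    """Average only the sub-aperture views within `radius` of the
    grid center, simulating a smaller aperture (wider depth of field)."""
    cu = np.mean([c[0] for c in coords])
    cv = np.mean([c[1] for c in coords])

    acc = np.zeros_like(images[0], dtype=float)
    n = 0
    for img, (u, v) in zip(images, coords):
        # keep only views inside the chosen radius of the grid center
        if (u - cu) ** 2 + (v - cv) ** 2 <= radius ** 2:
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            acc += np.roll(np.roll(img, du, axis=0), dv, axis=1)
            n += 1
    return acc / max(n, 1)
```

With `radius = 0` only the central view survives, giving the widest depth of field; growing the radius pulls in more views and narrows it.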

The images below are arranged left to right, top to bottom, from the widest depth of field, to the narrowest.

Here is a gif showing the gradually changing aperture.

Conclusion

After working with lightfields, I'm confident these algorithms could be implemented fairly easily in real systems, and the results are spectacular. It makes me wonder why lightfield cameras aren't more popular in practice. Are they expensive and/or difficult to manufacture?