Depth Refocusing and Aperture Adjustment with Light Field Data: CS194-26 Fall 2018 Project 5

Nikhil Uday Shinde (cs194-26-aea)


Project Outline

The goal of this project was to use Stanford's light field data to generate various effects. The data was captured with a grid of cameras with known displacements from one another. Such a grid samples a larger portion of the plenoptic function than a single traditional camera can, and with that extra information we can computationally generate effects such as depth refocusing and aperture adjustment after all the images have been captured.
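To make the later steps concrete, here is a minimal loading sketch in Python. The out_YY_XX_*.png filename convention (grid row and column encoded in the name) and the load_light_field helper are assumptions made for illustration, not something specified by this write-up or guaranteed by the dataset download.

    import re
    from pathlib import Path

    import numpy as np
    import skimage.io as skio

    def load_light_field(directory):
        """Map (row, col) grid coordinates to floating-point images.

        Assumes filenames like out_YY_XX_*.png, where YY/XX are the camera's
        row/column indices in the 17x17 grid; adjust the pattern if the
        downloaded dataset is named differently.
        """
        images = {}
        for path in sorted(Path(directory).glob("*.png")):
            match = re.match(r"out_(\d+)_(\d+)_", path.name)
            if match is None:
                continue
            row, col = int(match.group(1)), int(match.group(2))
            images[(row, col)] = skio.imread(path).astype(np.float64) / 255.0
        return images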


Depth Refocusing

Objects close to the camera shift their apparent position significantly more than far-away objects when the camera is moved while its optical axis is kept fixed. We can use this parallax to refocus at different depths after capture by aligning images taken by cameras at different positions "appropriately" and then averaging them.
To focus at different depths we first select what we will hold to be the central image. Here we chose the camera at grid position [8, 8], which is roughly the center of the [17, 17] camera grid. We shift every remaining image in the light field to align it with this central image using the following equation: [xshift, yshift] = C * ([xpos of center, ypos of center] - [xpos of image, ypos of image]).
By altering "C" in this equation we change the magnitude of the shifts, which in turn changes the part of the scene that is brought into focus: a more negative "C" focuses toward the back of the scene, while a more positive "C" focuses toward the front. All shifts were rounded to the nearest integer for computational simplicity.
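A minimal sketch of this shift-and-average procedure, assuming the images dictionary from the loading sketch above (keys are (row, col) grid positions). The refocus name and the use of np.roll for integer shifts are illustrative choices, not necessarily the exact implementation behind the results below.

    import numpy as np

    def refocus(images, c, center=(8, 8)):
        """Shift every view toward the central camera by
        c * (center position - view position), rounded to the nearest
        integer, then average all of the shifted views."""
        total = None
        for (row, col), img in images.items():
            dy = int(round(c * (center[0] - row)))
            dx = int(round(c * (center[1] - col)))
            shifted = np.roll(img, shift=(dy, dx), axis=(0, 1))
            total = shifted if total is None else total + shifted
        return total / len(images)

Sweeping c over a range of values (for example -0.75 to 0.75 for the Lego scene) and saving each refocused frame produces the GIFs below.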

Depth Refocusing GIFs
Lego: C from -0.75 to 0.75, 100 frames
Lego: C from -0.75 to 0.75, 100 frames, full resolution

Chess: C from -0.5 to 1.0, 150 frames
Chess: C from -0.5 to 1.0, 150 frames, full resolution

Bracelet: C from -0.5 to 1.0, 150 frames
Bracelet: C from -0.5 to 1.0, 150 frames, full resolution

Beans: C from -0.5 to 0.5, 100 frames
Beans: C from -0.5 to 0.5, 100 frames, full resolution


Aperture Adjustment

We can create the illusion of an image taken by a camera with a different aperture by choosing which images we blend together. Blending fewer images creates the appearance of a smaller aperture, while blending more images creates the appearance of a larger aperture.
To create images that appear to come from different apertures we again start by choosing a central image. Here we choose the image from camera [8, 8], since that corresponds to the center of the [17, 17] camera grid. We then gather the images within a circular radius around the central image's grid position and blend them together: any image within the specified radius is used in the averaging that produces the final aperture-adjusted image.
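A minimal sketch of this aperture adjustment, again assuming the images dictionary keyed by (row, col) grid positions. The adjust_aperture name, measuring the radius in grid units, and the Euclidean distance test are illustrative assumptions; the write-up only specifies that every image within the chosen radius is averaged.

    import numpy as np

    def adjust_aperture(images, radius, center=(8, 8)):
        """Average only the views whose grid position lies within `radius`
        of the central camera; a larger radius mimics a larger aperture."""
        selected = [img for (row, col), img in images.items()
                    if np.hypot(row - center[0], col - center[1]) <= radius]
        return np.mean(selected, axis=0)

Growing the radius from 0 (just the central image, i.e. the smallest aperture) toward 12 reproduces the sequences shown below.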

Beans: Aperture with radius 0 to 12 full
Beans: Aperture with radius 0 to 12 full resolution
Lego: Aperture with radius 0 to 12 full
Lego: Aperture with radius 0 to 12 full resolution



Website template inspired by: https://inst.eecs.berkeley.edu/~cs194-26/fa17/upload/files/proj1/cs194-26-aab/website/