Depth Refocusing

Simply averaging all of the photos in the lightfield dataset produces an image in which only one depth of the scene appears in focus. This happens because, across the lightfield images, only objects at one particular depth stay aligned from view to view; everything nearer or farther shifts between images and blurs out in the average.
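As a minimal sketch of this naive baseline (the function name and image-list representation are my own, not from the original code):

```python
import numpy as np

def average_lightfield(images):
    """Average all sub-aperture images into one photo.

    Only scene points at the depth where all views align stay sharp;
    everything else smears out in the mean.
    """
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in images])
    return stack.mean(axis=0)
```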

But we can shift the images before averaging to control which part of the scene ends up in focus.

The way I went about shifting images was by computing the mean camera coordinates (avg_x, avg_y), then shifting each image by ((im_x - avg_x) / div, (im_y - avg_y) / div), where im_x and im_y are that image's camera coordinates and div is varied to change where the image is focused. For example, in the Chess dataset, my `div` ranged from 10 down to 2.
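A rough sketch of this shift-and-average step, assuming each image comes paired with its (x, y) camera coordinates (the function name and the integer-pixel `np.roll` shift are my simplifications; subpixel shifts would use interpolation instead):

```python
import numpy as np

def refocus(images, coords, div):
    """Shift each sub-aperture image toward the mean camera position,
    then average. `coords` is a list of (x, y) camera positions and
    `div` controls the focal depth, as described above.
    """
    coords = np.asarray(coords, dtype=np.float64)
    avg_x, avg_y = coords.mean(axis=0)
    out = np.zeros_like(np.asarray(images[0], dtype=np.float64))
    for img, (x, y) in zip(images, coords):
        dx = (x - avg_x) / div
        dy = (y - avg_y) / div
        # Integer-pixel shift; np.roll wraps around the image edges,
        # which is fine away from the borders.
        shifted = np.roll(img, (int(round(dy)), int(round(dx))), axis=(0, 1))
        out += shifted
    return out / len(images)
```

Varying `div` (including negative values, as in the figures below) moves the plane of focus through the scene.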

Chess (unshifted)

Mid-Top (div=10)

Mid (div=6)

Mid (div=4)

Mid (div=2)

Chess

Bracelet (div=2,4)

Chess (div=-4,-2,2,4)

Changing the Aperture!

Changing the aperture comes down to making the blurry parts of the image more or less blurry. Averaging over a smaller grid of images does exactly that: with fewer camera positions, the out-of-focus regions differ less between images, so the average is less blurred, mimicking a smaller aperture.

For this project, I averaged over images in the 10 grid, the 12 grid, and the 14 grid. Here, I define the N grid to contain the images whose two indices are both less than N and greater than 16-N. So, the 10 grid contains the images at (9,9), (9,8), (9,7), (8,9), (8,8), (8,7), (7,9), (7,8), and (7,7).
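The grid selection above can be sketched as follows, assuming the lightfield is stored as a dict mapping (u, v) grid indices to images on a 17x17 grid (the function name and dict layout are my assumptions):

```python
import numpy as np

def aperture_average(images, n):
    """Average only the sub-aperture images whose grid indices (u, v)
    both lie strictly between 16 - n and n, i.e. a central block of the
    17x17 grid. Smaller n means fewer images, a smaller synthetic
    aperture, and less blur in out-of-focus regions.
    """
    selected = [img for (u, v), img in images.items()
                if 16 - n < u < n and 16 - n < v < n]
    return np.mean(np.stack(selected), axis=0)
```

With n=10 this picks exactly the nine images with indices in {7, 8, 9}, matching the list above.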

The 10 Grid

The 12 Grid

The 14 Grid

All Images

Bracelet

Chess

Reflections

This project was cool! For depth refocusing, it felt a bit odd to play around with shift values until I found ones that worked, which made this feel like the hackiest project so far. I learned a lot about the power of data and lightfields to see cool things and change images after the fact!