CS194-26 Project 5

Kunal Munshani

Depth Refocusing

The original image

What's going on

We have a light field dataset for a scene: many pictures of the same object, each taken from a different position on a regular grid of camera positions. If we just average all the images together, we get something like this

Before

Simply averaging the images
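The naive average above can be sketched in a few lines. This is a minimal sketch assuming the sub-aperture images have already been loaded as same-sized NumPy arrays; the function name is my own.

```python
import numpy as np

def naive_average(images):
    """Average a list of HxWx3 light-field images with no alignment.

    Objects at the plane the cameras converge on stay sharp;
    everything else blurs, since it moves between viewpoints.
    """
    return np.mean(np.stack(images), axis=0)
```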

The foreground is blurry because it shifts a relatively large amount in response to small movements of the observer (camera), whereas the background barely moves at all. See here

Instead, if we align the images and shift them based on the position from which they were taken, we get this.

Aligned

If we scale the shift each photo undergoes, so that the images line up at different points in the scene, we change which depth ends up in focus. In the results below the shift scale factor C ranges from -1 to 4.
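The shift-and-average idea can be sketched as follows. This is a minimal sketch, not the exact project code: it assumes each image comes with its (u, v) grid coordinates, and it uses `scipy.ndimage.shift` for sub-pixel shifts. The parameter `c` is the C from the results above.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(images, positions, c):
    """Shift each sub-aperture image toward the grid center, then average.

    images:    list of HxWx3 float arrays, one per camera position
    positions: list of (u, v) grid coordinates, one per image
    c:         focus parameter; varying c moves the plane that is in focus
    """
    center = np.mean(positions, axis=0)
    acc = np.zeros_like(images[0], dtype=float)
    for img, (u, v) in zip(images, positions):
        du, dv = center - (u, v)       # displacement from the grid center
        # shift proportionally to the camera's offset; channel axis unshifted
        acc += nd_shift(img, (c * dv, c * du, 0))
    return acc / len(images)
```

With `c = 0` this reduces to the simple average; increasing `c` refocuses at different depths.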

Aperture Adjustment

What's going on

Adjusting the aperture means adjusting how much light reaches the image. We simulate a change of aperture simply by changing how many images we include in the average.

As in Part 1, all the images are aligned around a certain point relative to a base image. As we include more images beyond this base image in our average, we simulate a wider aperture. A wider aperture blurs everything out of focus because it lets in more 'stray' rays of light; we are doing the same thing by adding in images that, because of their different original positions, captured different 'stray' rays of light.
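The subset selection can be sketched like this. This is a hedged sketch assuming the images are already aligned (e.g. by the Part 1 shift) and carry (u, v) grid coordinates; the `radius` threshold and function name are my own.

```python
import numpy as np

def adjust_aperture(images, positions, radius):
    """Average only the sub-aperture images within `radius` of the
    grid center; a larger radius simulates a wider aperture.

    radius = 0 keeps just the central (base) image: a pinhole, with
    everything sharp. Growing the radius adds off-center views and
    blurs whatever is away from the focal plane.
    """
    positions = np.asarray(positions, dtype=float)
    center = positions.mean(axis=0)
    selected = [img for img, p in zip(images, positions)
                if np.linalg.norm(p - center) <= radius]
    return np.mean(np.stack(selected), axis=0)
```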

The image on the left is aligned; the one on the right is not.

In conclusion, this project was a great introduction to the power of light field data. As always in this class, I'm amazed by how much we can do with a little math and a little CS.