Project 5: Lightfield Camera

Michelle Hwang, cs194-26-aaj

Part 1: Depth Refocusing

The Stanford Light Field Archive contains light fields: collections of images that we can use to simulate refocusing after the fact. Each "light field" actually contains 289 images taken by cameras on a 17 by 17 grid, approximately evenly spaced. Each image is labeled with its position in this grid as well as the absolute x, y position of its camera relative to a fixed point. A combination of shifting and averaging these images lets us simulate refocusing.

To see why this works, note that objects very far from the cameras appear at roughly the same absolute x, y position in all 289 images, while objects close to the cameras appear at very different positions from image to image. This means we can simulate focusing on very distant objects by simply averaging all 289 images: far objects are averaged "with themselves" and stay sharp, while nearby objects are averaged with shifted copies of themselves and therefore blur.

We simulate focusing on nearer objects by shifting all the images toward a fixed reference view before averaging. We select one image to stay fixed, in my case the center image (8, 8); call this C. Then, for each image S, we compute the difference in absolute camera position between C and S, i.e. dx = Cx - Sx and dy = Cy - Sy, where Cx is the x coordinate of the absolute camera position for the center image, and so on. We shift each image by a·dx in the x direction and a·dy in the y direction, where a is a constant controlling the focal depth (for the chess example, values ranging from 0 to 0.7 worked well), with larger values refocusing on closer objects. Finally, we average all the shifted images together.
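The procedure above can be sketched as follows. This is a minimal sketch, not the project's actual code: the `refocus` function name, the `images`/`positions` dictionary layout, and the use of `scipy.ndimage.shift` are all assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import shift

def refocus(images, positions, a, center=(8, 8)):
    """Shift-and-average refocusing over a light field.

    images    : dict mapping (row, col) grid index -> H x W x 3 float array
    positions : dict mapping (row, col) -> absolute (x, y) camera position
    a         : depth parameter; 0 focuses far away, larger values focus nearer
    """
    cx, cy = positions[center]
    total = np.zeros_like(next(iter(images.values())), dtype=float)
    for key, img in images.items():
        sx, sy = positions[key]
        dx, dy = cx - sx, cy - sy
        # Shift rows by a*dy and columns by a*dx toward the center view;
        # the third axis (color channels) is not shifted.
        total += shift(img, (a * dy, a * dx, 0), order=1)
    return total / len(images)
```

With a = 0 this reduces to a plain average of all views, which is exactly the far-focus case described above.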

Results

Input Images

Image at (8, 8) (coordinates are y, x):

chess 8, 8

Image at (8, 14):

chess 8, 14

Image at (11, 11):

chess 11, 11

These are just some example images from the 17 by 17 grid.

Final Results

Far focus (a = 0):

refocus 0

Mid focus (a = 0.2):

refocus .2

Near focus (a = 0.6):

refocus .6

Note that the artifacts (streaky lines) likely come from the interpolation used when shifting the images; they could be reduced with a higher-order interpolation method.
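The writeup doesn't specify which shifting routine was used, but if something like `scipy.ndimage.shift` is involved, its `order` parameter selects the interpolation spline. A purely illustrative sketch of the difference:

```python
import numpy as np
from scipy.ndimage import shift

# A single bright pixel makes interpolation behavior easy to see.
img = np.zeros((16, 16))
img[8, 8] = 1.0

# order=0 is nearest-neighbor: fast but blocky, prone to streaky artifacts.
nearest = shift(img, (0.25, 0.25), order=0)

# order=3 is cubic spline: smoother sub-pixel shifts, fewer artifacts.
cubic = shift(img, (0.25, 0.25), order=3)
```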

Gif'd results

refocus gif

Part 2: Aperture Adjustment

We can model aperture adjustment with a similar averaging technique as before, but instead of shifting images to move the focal plane, we vary the "radius" of the block of camera views we average to simulate aperture size.

We note that when averaging many images taken from a wide range of viewpoints, only a small region of the result is properly aligned across most views and appears sharp. If we instead average fewer images from a narrower range of viewpoints (i.e. the range of absolute x, y camera positions is smaller), a larger region of the result stays aligned and sharp. We can therefore model a large aperture by averaging images from cameras spanning a wide range of x, y positions, and decrease the aperture by reducing that range. We generated these images by averaging all 17 x 17 views, then shrinking the radius by one to average the center 15 x 15 views, then the center 13 x 13 views, and so on.
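This sub-grid averaging can be sketched as below. The function name and dictionary layout are assumptions for illustration, not the project's actual code.

```python
import numpy as np

def aperture_average(images, radius, center=(8, 8)):
    """Average the (2*radius + 1) x (2*radius + 1) block of views around
    the center camera to simulate aperture size.

    images : dict mapping (row, col) grid index -> H x W x 3 float array
    radius : 0 averages a single view (pinhole); 8 uses the full 17 x 17 grid
    """
    cr, cc = center
    selected = [images[(r, c)]
                for r in range(cr - radius, cr + radius + 1)
                for c in range(cc - radius, cc + radius + 1)]
    return np.mean(selected, axis=0)
```

Sweeping `radius` from 8 down to 0 reproduces the 17 x 17, 15 x 15, 13 x 13, ... sequence described above.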

Results

Final Results

Averaging 17 x 17 (large aperture):

aperture 17

Averaging 9 x 9 (medium aperture):

aperture 9

Averaging 3 x 3 (small aperture):

aperture 3

Gif'd results:

aperture gif

Summary

Light fields are very powerful! They let us create new images (e.g. change aperture and focus) without needing to physically retake a photo; we can do it in post-production instead. A program that lets photographers manipulate their images this way could be very powerful for perfecting their photos and deciding (possibly even years) later how they would like them to look.