Lightfield Camera

In this project, we reproduce some effects of a real lightfield camera in Python. We explore two parts: depth refocusing and aperture adjustment. The data used in this project is a set of images from Stanford's Light Field Archive, which can be found here: http://lightfield.stanford.edu/lfs.html.

Part 1: Depth Refocusing

Average of all images

Focus on floor

Focus on front 2 legos

Focus on back lego

Focus on background

Lego and Ball gifs for refocusing:

In this part, I used a collection of images taken by a camera positioned at different points on a grid. Both the ball and the lego image sets are 17x17 grids, consisting of 289 slightly shifted images. The goal of this part is to align the images by shifting each one based on a single parameter. Objects far from the camera stay in roughly the same position as the camera shifts, while closer objects "move" more relative to the background. I picked the center image and shifted every other image by its coordinate offset from the center, multiplied by a scale factor. Note that when the scale value is 0, we are simply averaging all the images, which focuses on the farthest objects. The scale values I used were -3 to 6 for the legos and -10 to 10 for the ball.
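The shift-and-average step above can be sketched as follows. This is a minimal illustration, assuming the sub-aperture images are loaded as a NumPy array alongside their (u, v) grid positions; the function name, array layout, and use of scipy's subpixel shift are my own choices, not necessarily the original implementation.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(images, positions, scale, center=(8, 8)):
    """Shift each grid view toward the center view and average.

    images:    float array of shape (N, H, W, 3)
    positions: list of N (u, v) grid coordinates, one per image
    scale:     refocus parameter; 0 gives the plain average
               (far focus), larger values focus on nearer objects
    """
    acc = np.zeros(images[0].shape, dtype=np.float64)
    for img, (u, v) in zip(images, positions):
        # shift proportional to this view's offset from the center view
        du = (center[0] - u) * scale
        dv = (center[1] - v) * scale
        # bilinear (order=1) subpixel shift of the image plane only
        acc += nd_shift(img, (du, dv, 0), order=1, mode="nearest")
    return acc / len(images)
```

Sweeping `scale` over a range (e.g. -3 to 6) and saving each result as a frame produces the refocusing gifs.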

Part 2: Aperture Adjustment

Aperture is the opening of the camera's lens, which dictates how much light is allowed into the camera. The larger the aperture, the smaller the depth of field, and thus the less of the scene is in focus; the smaller the aperture, the larger the depth of field, and the more of the scene is in focus. To mimic this, I defined a radius from the center of the grid that determines how many images I average. For example, a radius of 2 with a center at (8,8) means taking the images in the square from (6,6) to (10,10). The results are shown in the gifs below:
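A minimal sketch of this radius-based selection, under the same assumptions as before (a NumPy image stack with (u, v) grid positions; the function name is mine):

```python
import numpy as np

def aperture_average(images, positions, radius, center=(8, 8)):
    """Average only the views within `radius` grid steps of the center.

    A small radius acts like a small aperture (wide depth of field);
    growing the radius blurs everything off the focal plane.
    """
    selected = [img for img, (u, v) in zip(images, positions)
                if abs(u - center[0]) <= radius
                and abs(v - center[1]) <= radius]
    return np.mean(selected, axis=0)
```

With radius 0 this returns just the center image (a pinhole-like view); increasing the radius up to 8 uses the full 17x17 grid.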

lego aperture:

ball aperture:

In these gifs, I started with a small radius and slowly expanded it, which is similar to going from a small aperture to a large one. As mentioned before, with a very small aperture we expect a large depth of field, and this holds: a good portion of the image is in focus. As the radius, and thus the aperture, increases, more blur appears away from the focal plane.

Summary

I thought this project was pretty cool. Even though it didn't take too long, seeing how we can create refocusing and aperture adjustment simply from a dataset of pictures with slightly shifted perspectives was really interesting. I also checked out the iPhone app mentioned in the spec, and a new question I want to explore is: how can this refocusing effect be achieved when the dataset is small (i.e., for the app to work, you simply jiggle the camera while taking a picture)?
