CS 194
By Won Ryu
In this paper (http://graphics.stanford.edu/papers/lfcamera/lfcamera-150dpi.pdf) by UC Berkeley Professor Ren Ng, who also founded the Lytro camera company, capturing multiple images over a plane orthogonal to the optical axis is shown to be useful: with simple operations such as shifting and averaging, those images can be used to simulate certain camera properties after the pictures are taken. This project uses lightfield camera data, a set of images taken over a regularly spaced grid, to reproduce effects that a physical camera can produce.
Depending on the focal length of the lens, a camera can have a different depth of field. With lightfield camera data, we can achieve a similar effect by refocusing to a chosen depth after the images are taken. This works because objects far from the camera barely change position as the camera moves around (without changing the optical axis direction), while nearby objects shift much more across the lightfield images. As a result, averaging all the lightfield images without any shifting produces an image that is sharp for distant objects but blurry for close ones. Conversely, shifting each image toward the center lightfield image (the middle one in the grid) before averaging makes close objects sharp and distant ones blurry. The amount of shift determines which depth is in focus.
Concretely, the depth refocusing was done by taking the x and y positions of the center image, called midX and midY respectively. Then for each image with position X and Y, I shifted the image by c(X - midX) horizontally and c(Y - midY) vertically, and finally averaged the shifted images. The parameter c controls how much shift happens, which in turn determines at which depth the focus occurs.
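The shift-and-average procedure above can be sketched as follows. This is a minimal illustration, not the writeup's actual code: it assumes the images are given as a list of float arrays with matching (x, y) grid coordinates, and that the list is ordered so that its middle entry is the center image of the grid; `refocus` and its arguments are names chosen here for illustration.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(images, coords, c):
    """Shift-and-average refocusing over a lightfield grid.

    images: list of H x W x 3 float arrays taken over a regular grid.
    coords: list of (x, y) grid positions, one per image.
    c: refocusing parameter; varying c moves the plane of focus.
    """
    # Assumed: the middle entry of the ordered list is the center image.
    mid_x, mid_y = coords[len(coords) // 2]
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (x, y) in zip(images, coords):
        # Shift by c*(X - midX) horizontally and c*(Y - midY) vertically,
        # matching the formula in the text (rows = vertical, cols = horizontal).
        acc += nd_shift(img, (c * (y - mid_y), c * (x - mid_x), 0))
    return acc / len(images)
```

With c = 0 no shifting occurs and this reduces to a plain average of all the grid images, which is the "distant objects sharp, close objects blurry" case described above.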
The range of c where some part of the image was in focus was between -0.15 and 0.65. Here is a gif of images across that range:
This is the gif. If it appears to have stopped moving, it has finished playing; right-click, choose Open Image in New Tab, and then hit refresh to play it again.
Here is also a sample image when c = -0.05
Here is also a sample image when c = 0.45
Here is the gif (see the playback note above).
Here is also a sample image when c = 0.05
Here is also a sample image when c = -0.45
Just as a camera can adjust how blurry the out-of-focus regions are by changing the aperture size (a larger aperture is blurrier, a smaller one less so), we can adjust this blur with the lightfield camera image data. Including more images from the dataset in the average produces a more blurred result, because of the slight differences in object positions between views. So the more images we include in the shift-and-average process, the blurrier the out-of-focus parts will be. To simulate changing the aperture, we follow the same shift-and-average procedure as for depth refocusing, but this time we fix a constant c for all images, which keeps the in-focus object the same. The parameter we vary instead, which acts like the aperture, is r, which controls how many images are included in the shifting and averaging. When r is 0, only the middle image is used; each time r goes up by 1, we add two more images (one from the left and one from the right) to the subset used to make the processed image.
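One way to realize the selection described above can be sketched like this. This is an assumption-laden illustration, not the writeup's exact indexing: it takes the 2r+1 images centered on the middle of the ordered image list (r = 0 keeps only the center image; each increment of r adds one image on either side), and the function name `aperture_average` is chosen here for illustration.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def aperture_average(images, coords, c, r):
    """Simulate aperture size with lightfield grid data.

    Averages the 2r+1 images centered on the middle of the ordered list,
    each shifted toward the center view with a fixed refocus parameter c.
    r = 0 uses only the center image (everything sharp); larger r averages
    more views, blurring the out-of-focus depths more, like a wider aperture.
    """
    mid = len(images) // 2
    lo, hi = max(0, mid - r), min(len(images), mid + r + 1)
    mid_x, mid_y = coords[mid]
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (x, y) in zip(images[lo:hi], coords[lo:hi]):
        # Same shift as depth refocusing, but c is held fixed while r varies.
        acc += nd_shift(img, (c * (y - mid_y), c * (x - mid_x), 0))
    return acc / (hi - lo)
```

Because c is held fixed, the plane of focus stays put while growing r widens the synthetic aperture, which is exactly the behavior the gifs below sweep through.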
Here is the gif (see the playback note above).
Here is also a sample image when r = 2
Here is also a sample image when r = 90
Another example
Here is the gif (see the playback note above).
Here is also a sample image when r = 1
Here is also a sample image when r = 35
Now that this was done with data from a lightfield camera, I wanted to see whether I could collect my own data and reproduce these effects. I used an iPhone as my camera and drew a 5x5 grid of points, with neighboring points 5 cm apart in both width and height.
I then placed the corner of the iPhone on each of the points and took a photo from each of the 25 positions. With this data I performed the depth refocusing and aperture adjustment as before.
Here is the gif (see the playback note above).
Here is also a sample image when c = 117.5
Here is also a sample image when c = 159.5
Here is the gif (see the playback note above).
Here is also a sample image when r = 0
Here is also a sample image when r = 12
The pictures I took myself worked decently. The results are definitely not as sharp as with the lightfield camera data, likely because the images I took by hand were not all perfectly separated by 5 cm in width and height. It was also hard to keep the iPhone level between shots, so there were slight unintended rotations between the images. Because of these small rotations and shifts, the pixels did not align proportionally even after shifting, leaving even the parts that should be sharp a bit blurry. Also, since I took only 25 pictures in a 5x5 grid, as opposed to the 289 images in the lightfield camera dataset, the aperture adjustment could not be tuned nearly as finely.
I learned that effects normally achieved by adjusting settings on a camera can actually be reproduced with shifting and averaging, given a grid of images of the same scene.