In this project, we will be using light field data to create images with effects such as depth refocusing and aperture adjustment.
Given an array of images taken at slightly offset locations, we can merge them together to create an image focused at a chosen depth.
If we simply average all of the input images, points that shift little between views (those farther from the cameras) remain
in focus, while points closer to the cameras shift more drastically from view to view, so they come out blurry in the average.
We can control where the output image is focused by shifting each input image by a
certain factor, which we determine from the difference between its (u, v)
position and the center of the 17 x 17
grid used to capture the input images. We find the offset between each image and the center image at (8, 8),
and by shifting
by some multiple c
of this offset, we align different depth planes of the images to each other, bringing different parts of the scene into
focus.
c=-0.1 | c=0.0 | c=0.4 |
---|---|---|
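The shift-and-average refocusing described above can be sketched as follows. This is a minimal illustration, assuming the sub-aperture views are stored in a dictionary keyed by their (u, v) grid coordinates; the function name `refocus` and the use of integer shifts via `np.roll` (rather than sub-pixel interpolation) are simplifications, not the original implementation.

```python
import numpy as np

def refocus(images, c):
    """Shift-and-average refocusing over a 17 x 17 light field grid.

    images: dict mapping (u, v) grid coordinates (0..16) to H x W x C arrays.
    c: shift multiplier selecting which depth plane ends up in focus.
    """
    acc = None
    for (u, v), img in images.items():
        # Offset of this sub-aperture view from the grid center (8, 8).
        du, dv = u - 8, v - 8
        # Shift by a multiple c of the offset; rounding to integer pixels
        # keeps the sketch simple (sub-pixel shifts would look smoother).
        shifted = np.roll(img.astype(float),
                          (round(c * du), round(c * dv)), axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    # Average all shifted views.
    return acc / len(images)
```

With c = 0 this reduces to a plain average of all views; sweeping c moves the focal plane through the scene.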
In photography, a larger aperture leads to a shallower depth of field, meaning a smaller portion of the scene is in focus, while objects in the foreground and background are blurred. We can simulate different aperture sizes by averaging across more images in the given set, which introduces blur since the cameras are slightly offset from one another. We can reuse the shifting logic from the previous part to focus the image at the center of the scene, and then average across more and more pictures radiating outward from the center of the grid.
r=0 | r=4 | r=8 |
---|---|---|
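Averaging only the views within a radius r of the grid center can be sketched as below. This is a minimal, self-contained version assuming the same (u, v)-keyed dictionary of views; the name `simulate_aperture` is hypothetical, and for brevity the views are averaged unshifted (the full pipeline would first apply the refocusing shift from the previous part).

```python
import numpy as np

def simulate_aperture(images, r):
    """Average only the sub-aperture views within radius r of the
    grid center (8, 8); r = 0 keeps just the center view, and larger
    r mimics a larger physical aperture."""
    acc, n = None, 0
    for (u, v), img in images.items():
        # Include this view only if it lies within radius r of (8, 8).
        if (u - 8) ** 2 + (v - 8) ** 2 <= r ** 2:
            acc = img.astype(float) if acc is None else acc + img
            n += 1
    return acc / n
```

At r = 0 the result is just the central image (everything sharp, like a pinhole); as r grows, out-of-focus regions blur more strongly.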