Given the Stanford Lightfield Array images, we have a rich set of images taken slightly displaced from each other, giving us an ample amount of depth data. We will extract the relative positions of each image embedded in the file name and use that to focus depth at various positions.
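A sketch of the filename parsing might look like the following. The exact filename format is an assumption here (the rectified Stanford images embed grid indices and camera coordinates, taken to be `out_<row>_<col>_<y>_<x>.png`); adjust the regular expression to match the actual dataset files.

```python
import re
from pathlib import Path

# Assumed filename format: out_<row>_<col>_<y>_<x>.png
# where the last two fields are the camera's coordinates on the array plane.
FNAME_RE = re.compile(r"out_(\d+)_(\d+)_(-?\d+\.\d+)_(-?\d+\.\d+)\.png")

def parse_position(path):
    """Extract (row, col, y, x) from a lightfield image filename."""
    m = FNAME_RE.match(Path(path).name)
    if m is None:
        raise ValueError(f"unexpected filename: {path}")
    row, col = int(m.group(1)), int(m.group(2))
    y, x = float(m.group(3)), float(m.group(4))
    return row, col, y, x
```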
We start by taking the naive average of all of the images without any shifting. This should provide a depth effect with the foreground blurred and the background in focus.
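The naive average is a single reduction over the image stack. A minimal sketch, assuming the images have already been loaded as equally-sized float arrays:

```python
import numpy as np

def naive_average(images):
    """Average a stack of equally-sized images with no shifting.

    images: list of (H, W, 3) arrays. Returns the per-pixel mean,
    which keeps aligned (distant) content sharp and blurs the rest.
    """
    stack = np.stack(images, axis=0).astype(np.float64)
    return stack.mean(axis=0)
```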
For convenience, we compute the average camera position and use it to center each image's camera coordinates, so that every offset is measured relative to the middle of the array.
For our refocusing procedure, we use the grid structure of the lightfield array to calculate each camera's displacement from the center. By scaling this displacement with a depth parameter before shifting and averaging, we can align the images to focus on different sections of the scene. For this scene, a parameter range of $[3,1]$ sweeps the focal plane from the bottom of the frame to the top.
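This shift-and-average step can be sketched as follows. The parameter name `alpha` and the use of integer-pixel `np.roll` shifts are simplifying assumptions; a full implementation would use sub-pixel interpolation and crop the wrapped borders.

```python
import numpy as np

def refocus(images, offsets, alpha):
    """Shift each image by alpha times its centered camera offset, then average.

    images:  list of (H, W, 3) arrays.
    offsets: centered (dx, dy) camera offsets, one per image.
    alpha:   depth parameter; varying it moves the focal plane.
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (dx, dy) in zip(images, offsets):
        # Integer-pixel shift; np.roll wraps at the edges, which is
        # acceptable for a sketch but should be cropped in practice.
        shifted = np.roll(img, (round(alpha * dy), round(alpha * dx)), axis=(0, 1))
        acc += shifted
    return acc / len(images)
```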
Images refocused at different depths are shown below.
Here we simulate an artificial aperture by averaging only the images whose camera positions lie within a chosen radius of the array's center. As we increase the simulated aperture, we expect the image to blur away from the focal plane, just as content outside the depth of field becomes unfocused in a real camera.
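A sketch of this aperture filter, under the same assumptions as the refocusing sketch (centered offsets, integer-pixel `np.roll` shifts, a hypothetical `alpha` depth parameter):

```python
import numpy as np

def simulate_aperture(images, offsets, radius, alpha=0.0):
    """Average only the images inside a synthetic aperture of the given radius.

    images:  list of (H, W, 3) arrays.
    offsets: centered (dx, dy) camera offsets, one per image.
    radius:  cameras farther than this from the array center are excluded;
             a larger radius mimics a wider aperture and stronger blur.
    alpha:   focus depth parameter, as in the refocusing step.
    """
    acc = None
    count = 0
    for img, (dx, dy) in zip(images, offsets):
        if np.hypot(dx, dy) > radius:
            continue  # camera lies outside the synthetic aperture
        shifted = np.roll(img, (round(alpha * dy), round(alpha * dx)), axis=(0, 1))
        acc = shifted.astype(np.float64) if acc is None else acc + shifted
        count += 1
    return acc / count
```

Note that `radius` selects a circular subset of the camera grid, so small radii reproduce the single-image (pinhole-like) look and the full radius reproduces the all-image average.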
For demonstration purposes, we have set the depth of focus to be in the middle of the chessboard.
I learned that it is surprisingly straightforward to use an array of images to achieve remarkably cool effects: by combining many slightly displaced photographs, we can simulate camera features like focus depth and aperture entirely in software.