Using photos from the Stanford Light Field Archive, we mimic camera settings computationally, performing depth refocusing and aperture adjustment.
To refocus depth using all of the images in a grid, we first choose a reference image. For a 17x17 grid, I chose the image at [8, 8]. If we average all the images together without shifting, we produce an image that is sharper for far-away objects and blurrier for closer ones. If we instead compute each image's grid offset from the reference, scale it by a factor c, and shift the image by that amount before averaging, we can focus at different depths. For the images below I used c values from -0.1 to 0.6; smaller c values focus further back, while larger ones focus closer to the front.
Left to right: c= -0.1, 0, 0.1
Left to right: c= 0.2, 0.3, 0.4
Left to right: c= 0.5, 0.6
Gif from c=-0.1 to 0.6
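The shift-and-average step above can be sketched as follows. This is a minimal illustration, not the exact project code: the grid positions and the use of an integer `np.roll` shift (rather than sub-pixel interpolation, e.g. `scipy.ndimage.shift`) are simplifying assumptions.

```python
import numpy as np

def refocus(images, positions, c, center=(8, 8)):
    """Shift each sub-aperture image by c times its grid offset from the
    center reference image, then average.

    images:    list of HxWx3 float arrays (one per grid position)
    positions: list of (row, col) grid coordinates, one per image
    c:         refocusing factor (smaller -> focus far, larger -> near)
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (r, col) in zip(images, positions):
        dy = int(round(c * (r - center[0])))
        dx = int(round(c * (col - center[1])))
        # integer shift as a simple stand-in for sub-pixel interpolation
        acc += np.roll(img, (dy, dx), axis=(0, 1))
    return acc / len(images)
```

With c = 0 no image is shifted, so this reduces to the plain average described above.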
We can mimic adjusting a camera's aperture size by averaging fewer or more images from the lightfield data. Using fewer images produces a sharper image, similar to using a smaller aperture, while more images mimic a larger aperture that lets in more light and produces some blurring. To mimic this adjustment, we use a similar algorithm to the depth refocus, except we control how many images we use by specifying an allowed window around the center reference image. I chose the image at [8, 8], and we only average the images within a radius w of the reference, i.e. images within [8+/-w, 8+/-w]. For the images below I kept c = 0.2.
Left to right: w= 0, 1, 2
Left to right: w= 3, 4, 5
Left to right: w= 6, 7, 8
Gif from w=0 to 8
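The windowed averaging above can be sketched as a self-contained function. As before, the grid layout and the integer `np.roll` shift are illustrative assumptions, not the project's exact implementation.

```python
import numpy as np

def aperture_average(images, positions, w, c=0.2, center=(8, 8)):
    """Average only the sub-aperture images within a window of radius w
    around the center reference, shifting each by c times its grid
    offset (integer shift as a stand-in for sub-pixel interpolation).

    Small w mimics a small aperture (sharp everywhere); large w mimics
    a large aperture (blur away from the focal plane).
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    count = 0
    for img, (r, col) in zip(images, positions):
        if abs(r - center[0]) > w or abs(col - center[1]) > w:
            continue  # outside the simulated aperture window
        dy = int(round(c * (r - center[0])))
        dx = int(round(c * (col - center[1])))
        acc += np.roll(img, (dy, dx), axis=(0, 1))
        count += 1
    return acc / count
```

With w = 0 only the reference image survives the window test, so the result is just that single (unshifted) image, matching the pinhole-like sharpness seen above.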
Applying some simple operations to lightfield data can produce effects that mimic real camera settings.