Image Dataset


Let's take a look at some of the individual images from the dataset. Here are all the images from the first row of cameras -- as you can see, as the GIF progresses, we are viewing images taken increasingly from the right side of the camera grid.


Now, we can view the images taken from a particular "column" of cameras (in the GIF below, we look at column 8, from bottom to top):
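To make the later steps concrete, here is a minimal sketch of how the camera grid might be loaded. The filename pattern (grid indices followed by the camera's (x, y) position in millimeters) and the helper name `load_grid` are assumptions about the dataset download, not the actual convention -- adapt the regex to whatever the files are really called.

```python
import re
from pathlib import Path

import numpy as np
from skimage import io

def load_grid(data_dir):
    """Load the 17x17 grid: (row, col) -> image and (row, col) -> (x, y) in mm."""
    images, positions = {}, {}
    # Hypothetical filename pattern: "out_{row}_{col}_{x}_{y}.png".
    pattern = re.compile(r"out_(\d+)_(\d+)_(-?[\d.]+)_(-?[\d.]+)")
    for path in sorted(Path(data_dir).glob("*.png")):
        m = pattern.match(path.stem)
        if m is None:
            continue
        key = (int(m.group(1)), int(m.group(2)))
        images[key] = io.imread(path).astype(np.float64) / 255.0
        positions[key] = (float(m.group(3)), float(m.group(4)))
    return images, positions
```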

Depth Refocusing


Let's say we want to refocus the image -- i.e. show the foreground clearly while the background becomes blurry, or vice versa. With lightfield data, we can do this as a post-processing step!

Given our 17-by-17 camera grid, let's take a camera close to the center and use that as our fixed observation point. Because we have the camera positions, we can compute the relative offset between any two cameras. For example, a camera near the center (say camera (8, 7)) captured an image that is fairly similar to the image captured by the center camera (8, 8). However, the image from camera (0, 1) was taken at a very different position, and thus captured the scene from a different angle. To compensate, we can shift that camera's image: we use the millimeter difference in the two cameras' (x, y) positions, scaled by some alpha factor, as the shift amount. Because the scaling is multiplicative, the relative shifts are preserved -- cameras farther from the center are shifted proportionally more, compensating for their larger difference in perspective.
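As a sketch of that shift computation (using the `positions` dictionary from the loading sketch above; the function name and the sign conventions are my own assumptions and may need flipping for a given dataset):

```python
def shift_for(positions, cam, alpha, center=(8, 8)):
    """Shift (in pixels: rows, cols) that aligns `cam`'s image with the center camera."""
    cx, cy = positions[center]
    x, y = positions[cam]
    # Scale the millimeter offset by alpha, so cameras farther from the
    # center are shifted proportionally more.
    return alpha * (cy - y), alpha * (cx - x)
```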

Then, shifting all the images this way and averaging them produces an image focused at a particular depth. Which depth depends entirely on that alpha value mentioned above. A lower alpha (e.g. 0) brings the background into clear focus, which makes intuitive sense -- objects in the background shift less as the camera/observer moves. For example, if you took a portrait of someone with the Sun behind them, then moved a foot to the left and took another picture, the person would appear to have shifted a lot while the Sun would not move (in relation to other objects in the background). This is because the light rays from the Sun are essentially parallel.
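Putting the pieces together, a minimal sketch of the shift-and-average step could look like this (assuming `shift_for` from above; `scipy.ndimage.shift` handles the sub-pixel interpolation):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(images, positions, alpha, center=(8, 8)):
    """Shift every image toward the center camera's view, then average."""
    acc = np.zeros_like(next(iter(images.values())))
    for cam, img in images.items():
        dy, dx = shift_for(positions, cam, alpha, center)
        # Shift rows and columns only; leave the color channels alone.
        acc += nd_shift(img, (dy, dx, 0), order=1, mode="nearest")
    return acc / len(images)
```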

As we scale our alpha up -- all the way to around 0.5, in the GIF above -- we shift the area in focus closer to the observer. By varying this alpha value, we can choose which part of the scene we want in focus.
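Generating the frames for such a GIF is then just a sweep over alpha (the range here mirrors the 0-to-0.5 sweep described above):

```python
# Alpha = 0 focuses the background; larger alphas pull the plane of
# focus toward the foreground. Each result below is one GIF frame.
alphas = np.linspace(0.0, 0.5, 11)
frames = [refocus(images, positions, a) for a in alphas]
```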

Aperture Adjustment


Instead of using our entire set of 289 images from the 17-by-17 grid, let's consider what would happen if we only used images from the center 13-by-13 sub-grid (with all of these images focused on the same point in the scene). A larger area around the focal point will be sharp, while a smaller portion of the periphery will remain blurry. As we shrink the portion of the grid we consider, more of the area around the focal point comes into focus. We can therefore adjust the radius of the in-focus region by adjusting how much of the 17-by-17 grid we use to compute our focused image. This simulates a smaller aperture, which takes in less light and gains depth of field.
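A sketch of this, reusing the `refocus` function above (the function name and the square sub-grid cutoff are assumptions; any rule that selects a centered sub-grid works):

```python
def adjust_aperture(images, positions, alpha, radius, center=(8, 8)):
    """Average only cameras within `radius` grid steps of the center.

    radius=8 uses the full 17x17 grid; radius=6 is the 13x13 sub-grid;
    radius=0 is just the center camera (everything appears sharp).
    """
    subset = {cam: img for cam, img in images.items()
              if max(abs(cam[0] - center[0]), abs(cam[1] - center[1])) <= radius}
    return refocus(subset, positions, alpha, center)
```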

One note: we of course want to fix a focal point (i.e. fix our alpha parameter) when computing the different radii, so that the "center" of the in-focus region stays in the same location. The GIF above uses a fixed alpha of 0.3.
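The frames for such a GIF then come from sweeping the radius at that fixed alpha:

```python
# Fixed alpha = 0.3 keeps the plane of focus in place while the
# synthetic aperture shrinks from the full grid down to one camera.
frames = [adjust_aperture(images, positions, 0.3, r) for r in range(8, -1, -1)]
```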