Lightfield Cameras
Daniel Geng
Depth Refocusing
Camera arrays capture multiple images of a scene from different positions on a plane. Simply taking the average of all the images from a camera array gives us an image that is focused far away, because points far from the camera barely move between views, while points closer to the camera array shift noticeably as we change the camera position.
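A minimal sketch of this plain averaging, assuming the sub-aperture images have been loaded into a single array (the `images` layout and function name here are my own, not from the original writeup):

```python
import numpy as np

def average_images(images):
    """Average all sub-aperture images from the camera array.

    images: array of shape (num_cameras, H, W, 3), one image per
    camera position. The unshifted mean is focused far away, since
    distant points barely move between views while nearby points
    blur out.
    """
    images = np.asarray(images, dtype=np.float64)
    return images.mean(axis=0)
```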
Simple average of all images from the camera array
We can refocus a camera-array image by shifting each image by an amount proportional to how far the corresponding camera is from the center of the array, and then averaging the shifted images. For example, if we let (x̄, ȳ) denote the center camera position, we would shift the image from the camera at position (x, y) by α(x̄ − x) in the x direction and α(ȳ − y) in the y direction (where α is chosen empirically and controls the focal depth). This works because we are essentially shifting the images so that objects at a certain distance are all aligned.
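The shift-and-average procedure can be sketched as follows; this is an illustrative implementation under my own assumed data layout (one image and one (x, y) position per camera), using integer shifts with wrap-around at the borders for simplicity:

```python
import numpy as np

def refocus(images, positions, alpha):
    """Shift-and-average depth refocusing.

    images:    (N, H, W, 3) sub-aperture images.
    positions: (N, 2) camera (x, y) positions on the array plane.
    alpha:     empirically chosen scale; varying it moves the
               focal plane nearer or farther.

    Each image is shifted by alpha * (center - position) so that
    objects at one particular depth line up, then the shifted
    images are averaged.
    """
    images = np.asarray(images, dtype=np.float64)
    positions = np.asarray(positions, dtype=np.float64)
    center = positions.mean(axis=0)
    out = np.zeros_like(images[0])
    for img, (x, y) in zip(images, positions):
        dx = int(round(alpha * (center[0] - x)))
        dy = int(round(alpha * (center[1] - y)))
        # np.roll wraps pixels around the edges; fine for a sketch
        out += np.roll(img, shift=(dy, dx), axis=(0, 1))
    return out / len(images)
```

Sweeping α over a range of values produces the stack of refocused images shown below.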
Depth refocusing, full aperture
Depth refocusing, half aperture
Aperture Adjustment
To produce images with different aperture settings, we average only the images from cameras near the center of the array. The fewer images we average, the sharper the result and the smaller the effective aperture. This works because, with a real camera, a smaller aperture would have physically blocked the rays captured by the edge cameras.
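One way to sketch this selection, assuming the same hypothetical layout as above (one image and one (x, y) position per camera): keep only cameras within a chosen radius of the array center and average those.

```python
import numpy as np

def adjust_aperture(images, positions, radius):
    """Simulate a smaller aperture by averaging only central cameras.

    A smaller `radius` keeps fewer cameras, giving a smaller
    effective aperture and a sharper image; edge cameras are
    excluded, just as a physical aperture would block their rays.
    """
    images = np.asarray(images, dtype=np.float64)
    positions = np.asarray(positions, dtype=np.float64)
    center = positions.mean(axis=0)
    dists = np.linalg.norm(positions - center, axis=1)
    keep = dists <= radius
    return images[keep].mean(axis=0)
```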
Adjusting the aperture
Fisheye Camera
Because we are given the full light field data, we know essentially everything about the light in the scene. Therefore, we can simulate the effect of different lenses on the scene with a bit of physics. For example, we can simulate a fisheye lens at half aperture, refocused at different depths:
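The writeup does not show how its fisheye rendering works, so as an illustrative stand-in here is a simple post-hoc remap of a rendered image under an assumed equidistant fisheye model (r = f·θ); the author's physics-based simulation of the lens over the full light field would be more involved than this.

```python
import numpy as np

def fisheye_remap(image, fov_deg=150.0):
    """Remap an image under an equidistant fisheye model (r = f * theta).

    Illustrative assumption, not the original method: for each
    output pixel we compute the ray angle theta implied by its
    radius, then sample the corresponding source pixel with
    nearest-neighbor lookup. The center is magnified and the
    periphery compressed, as with a fisheye lens.
    """
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    rmax = np.hypot(cy, cx)
    # radius of each output pixel, normalized so the corners sit at r = 1
    r = np.hypot(ys - cy, xs - cx) / rmax
    theta_max = np.deg2rad(fov_deg) / 2.0
    theta = r * theta_max
    with np.errstate(invalid="ignore", divide="ignore"):
        # how far toward the center to sample: < 1 near the middle
        scale = np.where(r > 0, np.tan(theta) / (r * np.tan(theta_max)), 1.0)
    src_y = np.clip(np.round(cy + (ys - cy) * scale).astype(int), 0, h - 1)
    src_x = np.clip(np.round(cx + (xs - cx) * scale).astype(int), 0, w - 1)
    return image[src_y, src_x]
```

Applying this remap to each refocused, half-aperture image produces a fisheye-style focal stack.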
Depth refocusing using a fisheye lens