Project 5: Lightfields

By: Michael Gibbes

This project deals with capturing more data in a single shot than one sensor position can collect. In fact, pictures from this Stanford database contain data collected from 289 viewpoints (a 17x17 grid of subapertures), stored in 289 JPGs.

Part 1: Depth Refocusing

Here is an example of what can be done by averaging, in just the right way, the pictures from a single lightfield capture:


[Image: Averaged Lightfield]
[Image: Center-Shifted Over Range (-0.75, 0.25)]

The algorithm for generating one shift frame is as follows:

  1. Gather all the (u, v) coordinates corresponding to each image subaperture. These are stored in the filenames for this particular database.
  2. Compute the central point by averaging all the points to get (mean_u, mean_v).
  3. Using np.roll, shift each image's channels by (alpha * (u - mean_u), alpha * (v - mean_v)) pixels (rounded to integers, since np.roll takes integer shifts), where alpha is a float in (-1, 1) that controls the depth of focus. Larger alpha -> nearer viewpoint focus.
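The steps above can be sketched as follows. This is a minimal illustration, not the exact project code; the mapping of (u, v) onto image rows and columns is an assumption and may need swapping for a real dataset.

```python
import numpy as np

def refocus(images, coords, alpha):
    """Average sub-aperture images after shifting each toward the grid center.

    images: list of HxWx3 float arrays; coords: matching list of (u, v) pairs.
    alpha in (-1, 1) selects the focal depth (larger -> nearer focus).
    """
    coords = np.asarray(coords, dtype=float)
    mean_u, mean_v = coords.mean(axis=0)
    acc = np.zeros_like(images[0], dtype=float)
    for img, (u, v) in zip(images, coords):
        # np.roll requires integer shifts, so round the fractional offsets.
        # Assumption: u maps to rows (axis 0) and v to columns (axis 1).
        du = int(round(alpha * (u - mean_u)))
        dv = int(round(alpha * (v - mean_v)))
        acc += np.roll(img, shift=(du, dv), axis=(0, 1))
    return acc / len(images)
```

With alpha = 0 this reduces to a plain average of all sub-aperture images.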

Here are some other results generated by the same algorithm on images from the same database.


[Image: Chess (-0.25, 0.75)]
[Image: Amethyst (-0.5, 0.5)]

Part 2: Aperture Adjustment

This next part simulates increasing or decreasing the size of the aperture used to collect light. A larger aperture produces a narrower depth of field. Observe that emulating a larger aperture makes the picture approach the "average" jelly bean picture computed in Part 1.


[Image: Jelly Bean Aperture Variance]

To reproduce the effect, I "filter" the pictures by a constant beta between 0 and 1. beta represents the fraction of the maximum distance from the center (measured to the furthest subaperture) within which pictures are accepted. The smaller the beta, the fewer pictures averaged and the "smaller" the simulated aperture; a beta of 1 averages every picture, matching the full-aperture result from Part 1.
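A sketch of this filtering, again as an illustration rather than the exact project code, with the convention that a larger beta admits more views and thus emulates a larger synthetic aperture. The optional alpha parameter reuses the depth-refocusing shift from Part 1.

```python
import numpy as np

def adjust_aperture(images, coords, beta, alpha=0.0):
    """Average only sub-aperture images within beta * max_distance of the
    grid center. Larger beta -> more views -> larger simulated aperture."""
    coords = np.asarray(coords, dtype=float)
    center = coords.mean(axis=0)
    dists = np.linalg.norm(coords - center, axis=1)
    keep = dists <= beta * dists.max()
    acc = np.zeros_like(images[0], dtype=float)
    for img, (u, v), use in zip(images, coords, keep):
        if not use:
            continue
        # Same shift as in depth refocusing; alpha = 0 means no shift.
        du = int(round(alpha * (u - center[0])))
        dv = int(round(alpha * (v - center[1])))
        acc += np.roll(img, shift=(du, dv), axis=(0, 1))
    return acc / max(int(keep.sum()), 1)
```

With beta = 0 only the central subaperture survives the filter, which approximates a pinhole: everything in the scene stays sharp.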


[Image: Chess Aperture Variance]
[Image: Taro Aperture Variance]

In Conclusion...

You could combine both of these methods (focus AND aperture adjustment) to achieve many different styles of pictures. The beauty of lightfields is that artistic choices most photographers must make at capture time with normal cameras become mutable after the fact.

Another great takeaway is that you don't need fancy equipment to reproduce the effects seen with the Stanford database. You need only a collection of photos taken from slightly different angles, each with a small aperture so that every depth stays as sharp as possible. The only difficulty is obtaining the (u, v) coordinates of the custom subapertures.
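Since the database stores those coordinates in the filenames, extracting them is a small parsing job. The filename pattern below is hypothetical (two trailing floating-point fields before the extension) and would need to be adjusted to match the actual naming scheme of your dataset.

```python
import re

# Hypothetical format: the last two float fields hold the gantry (v, u)
# coordinates, e.g. "out_07_12_-876.54_321.00.png". Adjust as needed.
COORD_RE = re.compile(r"_(-?\d+\.\d+)_(-?\d+\.\d+)\.(?:png|jpg)$")

def parse_uv(filename):
    """Extract the (u, v) subaperture coordinates from a filename."""
    m = COORD_RE.search(filename)
    if m is None:
        raise ValueError(f"no (u, v) coordinates found in {filename!r}")
    v, u = map(float, m.groups())
    return u, v
```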