Proj 5: Lightfields

Vivian Liu, cs194-26-aaf

Learning about Lightfields

From Ren Ng's paper, Light Field Photography with a Hand-held Plenoptic Camera, I learned that operations as simple as translation and averaging can simulate digital refocusing.

For starters, Ng et al. built a lightfield camera that had a main lens and a microlens array. From a ray-model perspective, the main lens made up the u plane, and the microlens array made up the s plane. They collected data to create a lightfield that could be defined by L(u,v,s,t), and the goal was to create a model for synthetic photography, which involved L'(u',v',s',t'). Here u' and s' are the virtual lens and film planes of the synthetic camera, the counterparts of the real u and s planes.

From physics, there is an irradiance image equation that Ng et al. approximated in the paper. It states that the image value is a function of the aperture, the ray's angle of incidence, and where the ray crosses the two planes (aperture and film).
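In the (u,v,s,t) convention above, and hedging on the exact constants, that equation looks roughly like

E_F(s', t') = \frac{1}{F^2} \iint L(u, v, s', t') \, A(u, v) \, \cos^4\theta \; du \, dv

where F is the separation between the aperture and film planes, A(u,v) is the aperture function (one inside the opening, zero outside), and \theta is the ray's angle of incidence on the film; under the paper's paraxial assumptions the \cos^4\theta falloff gets folded away.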

Using the following diagram, they established a connection between the lightfield function L and the synthetic lightfield function L', and they rewrote the irradiance image equation with these relationships, giving the following equation for synthetic photography.
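Reproducing it from the paper (up to constants), specialized to the case where only the synthetic film plane moves, it comes out to

E_{\alpha F}(s', t') = \frac{1}{\alpha^2 F^2} \iint L\!\left(u, \, v, \, u + \frac{s' - u}{\alpha}, \, v + \frac{t' - v}{\alpha}\right) du \, dv

where \alpha = F'/F is the ratio of the synthetic film plane's depth to the real one's.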

In digital refocusing, because only the synthetic film plane moves, the first two terms in the L function simplify to just u' and v'. What this means is that by fixing u', v' and letting s', t' vary with α, the scalar ratio between the synthetic film plane's depth and the real one's, you can refocus digitally.

To implement that, all we need to do is sum over the subaperture images, each shifted by an offset that is a function of its (u,v) grid position and of α. In doing this, we are picking some u', v' as our reference (to keep in focus). We let the rest go out of focus through the blurring that results from averaging.
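As a concrete sketch of that loop (hedged: the function and variable names are my own, I assume the views are already loaded into a dict keyed by integer (u,v) grid coordinates, and the sign of the shift depends on how a given dataset orders its grid):

```python
import numpy as np
from scipy.ndimage import shift

def refocus(images, strength):
    """Shift-and-average digital refocusing.

    images   -- dict mapping grid coordinates (u, v) to H x W x 3 float arrays
                (e.g. a 17 x 17 grid of subaperture images)
    strength -- scalar refocusing parameter playing the role of alpha; 0
                averages the views as-is, and sweeping it moves the plane
                of focus through the scene
    """
    coords = list(images.keys())
    u0 = np.mean([u for u, _ in coords])  # center of the grid, used as the
    v0 = np.mean([v for _, v in coords])  # reference view (u', v')

    out = np.zeros_like(next(iter(images.values())), dtype=np.float64)
    for (u, v) in coords:
        # Translate each view in proportion to its offset from the reference;
        # order=1 gives bilinear interpolation for subpixel shifts.
        dy, dx = strength * (v - v0), strength * (u - u0)
        out += shift(images[(u, v)], (dy, dx, 0), order=1)
    return out / len(coords)
```

Sweeping strength over an interval then produces a focal sequence like the one shown next.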

Using the chess piece dataset with shift strengths from 0 to 1, this is the digital refocusing sequence that I created.



Aperture Adjustment

Adjusting the aperture was simulated by averaging over a subset of the subaperture images. We narrow or widen our synthetic aperture by changing a threshold, which I defined as the distance between the base/reference grid coordinates and each subaperture image's grid coordinates.

What this means is that we sum over more or fewer windows in our refocusing: fewer windows if we want a smaller radius/aperture, more windows if we want a larger radius/aperture.
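Building on the refocus sketch above (and again hedging: the Euclidean distance here is one reasonable reading of the threshold; a per-axis difference would work the same way):

```python
def adjust_aperture(images, strength, radius):
    """Simulate a synthetic aperture of the given radius.

    Averages only the subaperture images whose (u, v) grid position lies
    within `radius` of the center of the grid, then refocuses as before.
    """
    coords = list(images.keys())
    u0 = np.mean([u for u, _ in coords])
    v0 = np.mean([v for _, v in coords])
    kept = {(u, v): img for (u, v), img in images.items()
            if np.hypot(u - u0, v - v0) <= radius}
    return refocus(kept, strength)  # small radius -> narrow aperture
```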

Here is the aperture adjustment sequence that resulted from an increasing threshold. As you can see, the circle of confusion grows larger as we average over more windows/subaperture images, particularly in the foreground, which shifts more between viewpoints than the background.



What I Learned

Aside from all the technical details above, at a higher level I learned how light can be represented with calculus, and how that calculus can be approximated with some elementary math--that was very cool! I also learned about the plenoptic function, which seemed incredibly sci-fi until this project grounded it in some applications.