CS194-26 Proj5: Lightfields

Brian Aronowitz: 3032201719, cs194-26-aeh

Part 1: Depth Refocusing

In part 1 we implement depth refocusing. Because the gantry captures the same scene from a grid of camera positions, we can simulate a single camera focused at a chosen depth: shift each camera's image toward a common reference, then average the shifted images to obtain a simulated depth-of-field image. To compute the shifts, take the average position of the cameras (each camera has an offset relative to a reference point) and shift each image in proportion to how far its camera is from that average. More generally, the shift is t * (cam_center - cam_av), where t is a scalar controlling the focus depth. A sketch of this procedure is below, followed by some results.
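The shift-and-average idea can be sketched in a few lines of NumPy. The snippet below is a minimal illustration, assuming each sub-aperture image comes paired with its (u, v) gantry coordinate; the function name refocus, the use of scipy.ndimage.shift, and the axis/sign convention for the shift are my illustrative choices, not necessarily what the actual implementation used.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(images, cam_positions, t):
    """Average all sub-aperture images after shifting each one by
    t * (cam_center - cam_average).

    images:        list of HxWx3 float arrays, one per camera
    cam_positions: Nx2 array of (u, v) gantry coordinates
    t:             scalar controlling the focus depth
    """
    cam_positions = np.asarray(cam_positions, dtype=float)
    cam_average = cam_positions.mean(axis=0)

    accum = np.zeros_like(images[0], dtype=float)
    for img, cam_center in zip(images, cam_positions):
        du, dv = t * (cam_center - cam_average)
        # Shift rows by dv and columns by du, leaving the channel axis alone.
        # Depending on the dataset, the sign or axis order may need flipping.
        accum += nd_shift(img, shift=(dv, du, 0), order=1, mode='nearest')
    return accum / len(images)
```

Sweeping t over a range of values (e.g. -0.5 to 0.5, as in the results below) produces the focal stack used for the animations.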

Amethyst: t varying from -0.5 to 0.5
Legos: t varying from -0.5 to 0.5

Part 2: Aperture Adjustment

In part 2 we implement aperture adjustment. To simulate a smaller aperture, we average only the images from cameras that lie within a given radius of the center of the grid. The smaller the radius, the fewer camera images are averaged, so there is less variation between the averaged pixels and therefore less defocus blur; a larger radius simulates a wider aperture and a shallower depth of field. A sketch of this selection is below, followed by some results.
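Below is a minimal sketch of the aperture selection, assuming the same image list and (u, v) positions as in part 1. Measuring the radius from the average camera position and the helper name adjust_aperture are illustrative assumptions on my part.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def adjust_aperture(images, cam_positions, radius, t=0.0):
    """Average the shifted images of only those cameras whose (u, v)
    position lies within `radius` (in grid units) of the grid center."""
    cam_positions = np.asarray(cam_positions, dtype=float)
    cam_average = cam_positions.mean(axis=0)

    accum = np.zeros_like(images[0], dtype=float)
    count = 0
    for img, cam_center in zip(images, cam_positions):
        if np.linalg.norm(cam_center - cam_average) > radius:
            continue  # camera falls outside the simulated aperture
        du, dv = t * (cam_center - cam_average)
        accum += nd_shift(img, shift=(dv, du, 0), order=1, mode='nearest')
        count += 1
    return accum / max(count, 1)
```

Varying the radius while holding t fixed then gives the aperture sweep shown below.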

Lego: varying radius from 0 to 20 grid units
Tarot: varying radius from 0 to 20 grid units

Summary

This was an interesting foray into lightfields. I'm thinking about exploring them more for my final project, since my favorite feature of them (panning left and right through the scene) wasn't implemented in this project. Seeing refractions and reflections change in a lightfield when you move around is quite crazy to me. There's a lot of cool research being done on lightfield video and compression, as well as VR display of lightfields, and I'm excited to see how it pans out in the next decade or so.