Part 1: Depth Refocusing
In this part of the project, we want to simulate focusing on a captured scene at different depths using lightfield data from the Stanford Light Field Archive. To do so, we shift the images appropriately and average them to simulate focusing on an object at a chosen depth. The depth of the focused plane is a decreasing function of the image shift: the more we shift the images, the closer the objects in focus will be.
To do this, I first calculated a central coordinate (center_x, center_y) for our 17x17 image grid. Then, I shifted each image at grid position (x, y) by (center_x - x, center_y - y)*scale, where scale is a constant scaling factor that specifies the depth at which the refocused plane appears. When scale=0, no shift is applied, so the plain average is sharpest on the background. This is because objects in the background exhibit little parallax across the grid, while objects in the foreground move drastically from image to image; increasing the scaling factor aligns, and therefore focuses, progressively nearer objects.
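The shift-and-average procedure above can be sketched as follows. This is a minimal illustration, not the project's actual code: the function name `refocus`, the array layout, and the use of `np.roll` (which wraps around at the borders; a real implementation would crop or interpolate) are all assumptions for the sake of a runnable example.

```python
import numpy as np

def refocus(images, grid_coords, scale):
    """Shift-and-average refocusing over a light field grid.

    images: array of shape (N, H, W) or (N, H, W, C), one sub-aperture
        image per grid position.
    grid_coords: array of shape (N, 2), the (x, y) grid position of
        each image (e.g. 0..16 on each axis for a 17x17 grid).
    scale: scalar controlling the synthetic focal depth; larger values
        focus on nearer objects, scale=0 gives the plain average.
    """
    # Central coordinate of the grid, e.g. (8, 8) for 17x17.
    center = np.asarray(grid_coords, dtype=float).mean(axis=0)
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (x, y) in zip(images, grid_coords):
        # Shift each image by (center - position) * scale, rounded to
        # whole pixels for simplicity (subpixel shifts would be smoother).
        dx = int(round((center[0] - x) * scale))
        dy = int(round((center[1] - y) * scale))
        acc += np.roll(img, shift=(dy, dx), axis=(0, 1))
    return acc / len(images)
```

With scale=0 every shift is zero, so the result reduces to the plain average of all sub-aperture images, which is why the background (the zero-parallax plane) appears in focus.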