In this project, we use lightfield data taken by a 17x17 grid of parallel cameras to simulate depth refocusing and artificial changes in aperture size. Using basic linear algebra, we can combine all of these *slightly* different images to achieve cool effects.
We average shifted copies of all the images to sharpen or blur specific parts of the scene (foreground/background). To do this, we take each image's camera coordinates on the grid and shift the image toward the center camera at [8, 8], scaled by a constant alpha:
Shift each image by [alpha * (centerX - x), alpha * (centerY - y)] for all x, y in {0, ..., 16} and alpha in [-0.5, 0.5]
Refocusing results shown for alpha = 0, 0.1, 0.25, and 0.5.
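The shift-and-average step above can be sketched in numpy. This is a minimal illustration, not the project's actual code: the `images` dict keyed by `(x, y)` grid coordinates and the function name `refocus` are assumptions, and integer-rounded `np.roll` shifts stand in for proper subpixel interpolation.

```python
import numpy as np

def refocus(images, alpha, center=(8, 8)):
    """Average all sub-aperture images, each shifted toward the center view.

    `images` maps (x, y) camera grid coordinates to HxWxC float arrays
    (hypothetical layout; the real dataset's loading code is not shown).
    """
    cx, cy = center
    acc = np.zeros(next(iter(images.values())).shape, dtype=np.float64)
    for (x, y), img in images.items():
        # Integer-rounded shift toward the center camera; a real
        # implementation would interpolate for subpixel accuracy
        # (e.g. scipy.ndimage.shift).
        sx = round(alpha * (cx - x))
        sy = round(alpha * (cy - y))
        acc += np.roll(img, (sy, sx), axis=(0, 1))
    return acc / len(images)
```

With alpha = 0 no image moves, so the result is a plain average; larger |alpha| moves the plane of focus through the scene.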
We simulate different apertures by changing the radius of cameras included around the center camera. For each radius in [0, 8], we average all the images taken by cameras whose Euclidean distance from the center camera is at most that radius.
Aperture results shown for radius = 0, 2, 4, and 7.
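The aperture simulation can be sketched the same way. Again a minimal sketch under the same assumed `images` layout, with a hypothetical `aperture_average` name; radius = 0 keeps only the center camera (a small aperture, everything in focus), while larger radii average more views and blur out-of-focus regions.

```python
import numpy as np

def aperture_average(images, radius, center=(8, 8)):
    """Average only the images from cameras within `radius` of the center.

    `images` maps (x, y) grid coordinates to image arrays (assumed layout).
    """
    cx, cy = center
    # Select cameras by Euclidean distance from the center camera.
    selected = [np.asarray(img, dtype=np.float64)
                for (x, y), img in images.items()
                if np.hypot(x - cx, y - cy) <= radius]
    return sum(selected) / len(selected)
```

Note the comparison is `<=` so that radius = 0 still selects the center camera itself.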