Using a light field simulated by taking pictures of a scene with each camera in a 17x17 grid, we can refocus the image at different depths by combining these 289 images in different ways. Shifting each image toward the center by an amount proportional to its offset in the grid aligns objects at a particular depth, so when the shifted images are averaged, that depth appears sharp while the rest of the scene blurs. We can effectively change the focal depth by changing a scaling variable that multiplies each image's offset from the center.
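A minimal sketch of this shift-and-average refocusing, assuming the light field is stored as a numpy array of shape (rows, cols, H, W) and that shifts are rounded to whole pixels via `np.roll` (both assumptions for illustration, not the project's actual code):

```python
import numpy as np

def refocus(lightfield, alpha):
    """Refocus by shifting each sub-aperture image toward the grid
    center by alpha times its grid offset, then averaging.

    alpha is the scaling variable that selects the focal depth.
    Assumes lightfield has shape (rows, cols, H, W).
    """
    rows, cols, H, W = lightfield.shape
    cy, cx = rows // 2, cols // 2
    out = np.zeros((H, W), dtype=np.float64)
    for r in range(rows):
        for c in range(cols):
            # Offset of this camera from the center of the grid.
            dy, dx = r - cy, c - cx
            # Shift by alpha * offset (rounded to whole pixels here;
            # sub-pixel interpolation would give smoother results).
            out += np.roll(lightfield[r, c],
                           (round(alpha * dy), round(alpha * dx)),
                           axis=(0, 1))
    return out / (rows * cols)
```

Sweeping `alpha` over a range of values produces the stack of images focused at progressively different depths.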
Here, the idea is to recreate an increase in aperture (objects at the focal depth stay sharp while everything else grows blurrier). This is done by taking a subset of all the images in the light field grid (e.g. a square window of varying side length centered on the grid), shifting them to the center, and averaging them. The bigger our square window, the more we mimic an aperture that gathers light from a wider range of directions, which keeps the aligned focal plane sharp but produces a shallower depth of field.
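The aperture adjustment can be sketched the same way, restricting the average to a square window of cameras around the grid center. The `(rows, cols, H, W)` layout, the `radius` parameter, and whole-pixel shifts are assumptions for this sketch:

```python
import numpy as np

def adjust_aperture(lightfield, alpha, radius):
    """Average only the sub-aperture images within a square window of
    the given radius around the grid center, after the same
    depth-selecting shift used for refocusing.

    radius=0 uses just the central image (a pinhole-like aperture);
    larger radii mimic a wider aperture and a shallower depth of field.
    """
    rows, cols, H, W = lightfield.shape
    cy, cx = rows // 2, cols // 2
    out = np.zeros((H, W), dtype=np.float64)
    count = 0
    for r in range(rows):
        for c in range(cols):
            dy, dx = r - cy, c - cx
            # Skip cameras outside the square window.
            if abs(dy) > radius or abs(dx) > radius:
                continue
            out += np.roll(lightfield[r, c],
                           (round(alpha * dy), round(alpha * dx)),
                           axis=(0, 1))
            count += 1
    return out / count
```

With a fixed `alpha`, increasing `radius` from 0 up to half the grid size reproduces the progression from a small to a large aperture.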