CS 194-26: Image Manipulation and Computational Photography, Fall 2018

Project 5: Lightfield Camera

Joseph Reid, cs194-26-aes



Project Overview

Have you ever taken what seems like a great photo, only to realize later that the focus was off? While merely an annoyance for most of us, this is a serious problem for people who photograph fast-moving events, such as sports photographers: poor focus can ruin an otherwise perfect shot, with no chance to retake a moment that has already passed. One solution was the lightfield camera, developed in part by Berkeley professor Ren Ng. Rather than capturing a single image through one lens like a traditional camera, a lightfield camera uses an array of microlenses, each of which captures an image of the subject from a slightly different position. These slightly offset images can later be shifted, aligned, and averaged to produce photos focused on different areas of the scene.

In this project, we are given a 17x17 grid of images of a single subject, each taken from a slightly different position by a traditional camera, and use them to mimic the capabilities of a lightfield camera. Because points at greater depths move less than points at closer depths when the camera shifts, aligning the images on one point puts points at other depths out of focus.

Part 1: Depth Refocusing

Overview


    To put a chosen point in focus, each image must be shifted so that its copy of that point lines up with the others. This is hard to compute directly without extra data, such as the depth of the subject when the photo was taken, or image-recognition tools. Instead, we can shift each image in proportion to its distance from the center of the grid, which aligns (and therefore focuses) different depths of the scene. Defining the center image of the array as index (0, 0), with the remaining images having relative x- and y-indices in the range [-8, 8], I choose a multiplier c and shift each image by c * (x, y), where (x, y) are that image's indices relative to the center image. Choosing different values of c (values in the range [-5, 5] gave good results) lets the user bring different areas of the image into focus.
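
Below is a minimal sketch of this refocusing step, assuming the 289 grid images have already been loaded into a dictionary keyed by their (x, y) indices. The function name, the dictionary layout, and the use of scipy's shift are my own illustrative choices, not the actual internals of depth_refocusing.py.

    import numpy as np
    from scipy.ndimage import shift

    def refocus(images, c):
        """Shift each grid image by c * (x, y) and average the results.

        images: dict mapping (x, y) indices in [-8, 8] to H x W x 3 float arrays.
        c: depth multiplier; different values bring different depths into focus.
        """
        result = np.zeros_like(next(iter(images.values())), dtype=np.float64)
        for (x, y), img in images.items():
            # Shift rows by c * y and columns by c * x (the trailing 0 leaves
            # the color channels alone); order=1 gives bilinear interpolation,
            # so fractional shifts are handled too.
            result += shift(img, (c * y, c * x, 0), order=1)
        return result / len(images)

Note that c = 0 averages the images unshifted, so the focus stays wherever the grid of images was already aligned.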

Usage

    python depth_refocusing.py [c] [r=8]
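
For example, under this interface, python depth_refocusing.py 2.0 would refocus with multiplier c = 2.0 using the full 17x17 grid, since r defaults to 8.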

Results

Chess

[Refocused results for c = 3.5 down to c = -0.5 in steps of 0.5, plus a GIF cycling through the depth focuses]

Lego Knights

[Refocused results for c = -4.5 through c = 4.5 in steps of 0.5, plus a GIF cycling through the depth focuses]

Part 2: Aperture Adjustment

Overview


    We can also simulate cameras with different aperture sizes by averaging different numbers of the images. As the results below show, smaller apertures produce larger depths of field than larger apertures. The more images we average together, and the farther apart those images are, the blurrier the out-of-focus regions become. Averaging only the center image (r = 0) therefore gives a large depth of field, mimicking a small aperture, while averaging more images together (r > 0) blurs everything away from the focus plane, giving a shallower depth of field that mimics a larger aperture.
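
Aperture adjustment is then a small change to the same sketch (again under the assumed dictionary layout from Part 1, with illustrative names): keep only the images whose indices fall within radius r of the center, refocus them, and average.

    import numpy as np
    from scipy.ndimage import shift

    def adjust_aperture(images, r, c=0.0):
        """Average the grid images with max(|x|, |y|) <= r.

        r = 0 keeps only the center image (small aperture, large depth of
        field); r = 8 uses all 289 images (large aperture, shallow depth
        of field). c sets the focus plane as in refocus().
        """
        subset = {(x, y): img for (x, y), img in images.items()
                  if max(abs(x), abs(y)) <= r}
        result = np.zeros_like(next(iter(subset.values())), dtype=np.float64)
        for (x, y), img in subset.items():
            result += shift(img, (c * y, c * x, 0), order=1)
        return result / len(subset)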

Usage

    python depth_refocusing.py [c=0] [r]
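
For example, python depth_refocusing.py 0 4 would average the 9x9 block of images nearest the center (r = 4) at the default focus c = 0.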

Results

Treasure Chest

[Results for aperture radii r = 0 through r = 8, plus a GIF cycling through the apertures]