CS 194-26 Final Project 1: Lightfield Camera

Angela Xu
Fall 2021

Overview

This project uses photos from Stanford's Light Field Archive to produce depth refocusing and aperture adjustment effects! It is based on Ren Ng's paper on lightfield photography. Each set consists of 289 images taken on a 17x17 grid, captured from different positions over a plane orthogonal to the optical axis. I chose to work with two sets of images: the jellybeans and the bunny.

Here are individual images of each set, as well as the average of each set.


Part 1: Depth Refocusing

The initial focus of the averaged image depends on the position of the camera relative to the objects. As the camera moves around the grid, farther objects stay relatively stationary in the frame while closer objects shift more, so averaging the images blurs the nearby objects and keeps the distant ones sharp, creating an effect of depth.

To simulate depth refocusing, we shift the images based on the position each was taken from relative to the center of the grid. I compute this offset by subtracting each image's position from the center position, using the (u,v) coordinates provided in the filenames. I then multiply the offset by an alpha value provided as input, which controls whether the focus moves farther or closer (depending on its sign) and by how much. An alpha value of 0 leaves the images in their original positions.
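A minimal sketch of this shift-and-average step is below, assuming the images and their (u,v) filename coordinates have already been loaded into parallel lists; the function name and the use of scipy.ndimage.shift for sub-pixel shifting are choices of this sketch, and the axis/sign convention may need flipping for a given dataset.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(images, positions, alpha):
    """Shift each sub-aperture view toward the grid center by alpha times
    its (u, v) offset, then average the shifted stack."""
    positions = np.asarray(positions, dtype=np.float64)
    center = positions.mean(axis=0)                  # (u, v) of the grid center
    result = np.zeros_like(images[0], dtype=np.float64)
    for img, (u, v) in zip(images, positions):
        du, dv = alpha * (center - np.array([u, v]))
        # shift rows by dv and columns by du (sub-pixel, via interpolation);
        # alpha = 0 leaves every image in place, reproducing the plain average
        shifted = np.stack(
            [nd_shift(img[..., c], (dv, du)) for c in range(img.shape[-1])],
            axis=-1)
        result += shifted
    return result / len(images)
```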

Here are the jellybeans being refocused with different alpha values! As the alpha value increases, the focus moves backwards.

[Jellybean refocusing results at a = 0.0 through a = 0.55, in increments of 0.05]


Here are the bunnies. Unlike the jellybeans, the focus starts at the center rather than the front, so I want to refocus both forwards and backwards (both negative and positive alpha values).

[Bunny refocusing results at a = -0.2 through a = 0.25, in increments of 0.05]


Part 2: Aperture Adjustment

For aperture adjustment, we choose a radius that determines how many photos around the center of the image grid are averaged. A larger radius averages more images, which simulates a larger aperture and a blurrier background (and vice versa). Here are the resulting images for various radii! A rough sketch of the selection-and-average step follows the results.

[Aperture adjustment results for both image sets at r = 1, 3, 5, 7]
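The sketch below assumes each image carries its 0-16 (row, col) grid index parsed from its filename; the function name, the (8, 8) center, and the square (Chebyshev) neighborhood are assumptions of this sketch rather than a fixed convention (a Euclidean radius would work just as well).

```python
import numpy as np

def aperture_average(images, grid_indices, radius, center=(8, 8)):
    """Average only the views whose 17x17 grid index falls within `radius`
    of the central view; a larger radius includes more images and so
    simulates a larger aperture."""
    selected = [img for img, (r, c) in zip(images, grid_indices)
                if max(abs(r - center[0]), abs(c - center[1])) <= radius]
    return np.mean(selected, axis=0)
```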


What I learned

This was a super cool project! Before this class, I'd never heard of light field cameras, and I thought refocusing and aperture adjustment could only be done while capturing photos, not afterwards. Implementing them by averaging multiple images was very interesting.


Final Project 2: Image Quilting

Overview

For this project, I used three of the provided textures and two of my own to do some image quilting! Below are bricks, text, clouds, flowers, and fur.


Part 1: Randomly Sampled Texture

The simplest way to do image quilting is to tile the output with randomly chosen patches until it reaches a specified size. I used a patch size of 50px for most textures and 20px for the clouds (since the cloud image's dimensions are smaller), filling an output image of 300px.
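A minimal sketch of the random-patch fill, assuming a numpy image in `texture`; the function name is illustrative.

```python
import numpy as np

def quilt_random(texture, out_size, patch_size):
    """Tile an out_size x out_size output with patches sampled uniformly
    at random from the source texture."""
    h, w = texture.shape[:2]
    out = np.zeros((out_size, out_size) + texture.shape[2:], dtype=texture.dtype)
    for i in range(0, out_size, patch_size):
        for j in range(0, out_size, patch_size):
            y = np.random.randint(h - patch_size + 1)
            x = np.random.randint(w - patch_size + 1)
            ph = min(patch_size, out_size - i)   # clip at the bottom/right edge
            pw = min(patch_size, out_size - j)
            out[i:i + ph, j:j + pw] = texture[y:y + ph, x:x + pw]
    return out
```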

As we can observe, there are pretty distinct discontinuities and it is obvious where each patch's borders are.

Part 2: Overlapping Patches

An improvement over randomly sampling patches is to overlap a small region of each new patch with what has already been placed, and to pick the candidate patch that minimizes the SSD over that overlap. Depending on where the patch sits, we may need to compute both the horizontal and the vertical overlap costs and subtract the double-counted intersection. I used an overlap of 10px with a patch size of 50px (and a 5px overlap with a patch size of 20px) to fill an image of 290px.
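The overlap cost can be sketched roughly as below; the helper name and the convention that patches are placed left-to-right, top-to-bottom at strides of patch_size - overlap are assumptions of this sketch, not the exact code.

```python
import numpy as np

def overlap_ssd(patch, out, i, j, patch_size, overlap):
    """SSD between a candidate patch and the strips of the partially filled
    output it would overlap at position (i, j): left (vertical) strip plus
    top (horizontal) strip, minus the double-counted corner."""
    cost = 0.0
    if j > 0:   # vertical overlap with the patch to the left
        cost += np.sum((patch[:, :overlap] -
                        out[i:i + patch_size, j:j + overlap]) ** 2)
    if i > 0:   # horizontal overlap with the patch above
        cost += np.sum((patch[:overlap, :] -
                        out[i:i + overlap, j:j + patch_size]) ** 2)
    if i > 0 and j > 0:   # the corner strip was counted in both terms
        cost -= np.sum((patch[:overlap, :overlap] -
                        out[i:i + overlap, j:j + overlap]) ** 2)
    return cost
```

Each output location then takes a candidate patch with a low cost under this measure (for example the minimum, or a random choice among near-minimal candidates).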

It's slightly better! The brick and fur look fairly realistic. There is some misalignment in the text and flower textures, and the cloud texture is still quite boxy, but it's still an improvement.

Part 3: Seam Finding

Now we can use the same overlapping technique, but carve out seams that provide a better blend! Using the cut function, we find a minimum-error seam through each overlap region, so patches join along an irregular boundary instead of the straight edge of a full block.
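Below is a sketch of the kind of dynamic-programming seam the cut function computes, written for a vertical overlap (the horizontal case can be handled by transposing the error surface and the resulting mask); the function name and mask convention are choices of this sketch.

```python
import numpy as np

def min_cut_mask(err):
    """Find the minimum-cost vertical seam through an overlap error surface
    `err` (H x overlap, e.g. squared differences summed over channels) with
    dynamic programming. Returns a boolean mask that is True left of the
    seam (keep the existing output) and False right of it (take the new
    patch)."""
    h, w = err.shape
    cost = err.astype(np.float64).copy()
    for i in range(1, h):                        # accumulate path costs downward
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):               # backtrack from the bottom row
        lo, hi = max(seam[i + 1] - 1, 0), min(seam[i + 1] + 2, w)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        mask[i, :seam[i]] = True
    return mask
```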

Woohoo!!! Even better. The fur ended up a bit weird, though.

I couldn't figure out texture transfer... but that's okay; I still learned a lot implementing image quilting. I enjoyed this class a lot!!! Super cool content :) Happy holidays :D <3 !!