CS 194-26: Computational Photography, Fall 2020

Final Project: Lightfield Camera and Image Quilting

Michelle Fong, CS194-26-ael



Project 1: Lightfield Camera

Overview

In this project, I followed the paper "Light Field Photography with a Hand-held Plenoptic Camera" by Prof. Ren Ng in order to refocus images at different depths and adjust the aperture. The data used in this project comes from the Stanford Light Field Archive; I decided to use the "chess" dataset here. The many images taken from slightly different angles allowed me to do selective averaging in order to achieve my desired effect.

Part 1: Depth Refocusing

The idea behind the depth refocusing part of the project is to average all the images in the dataset, each shifted by an amount proportional to its offset from the center image. I was able to use the numbers encoded in the image file names to calculate the correct shift. I also played around with a multiplier on the shift, passing it as a parameter into the depth refocusing function. Multiplying the shifts by 0 resulted in an image focused on the far side of the chess board, which makes intuitive sense because that is the part of the scene that moves the least from camera angle to camera angle.
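
The shift-and-average step can be sketched as follows. This is a minimal sketch, not my exact notebook code: the function name `refocus` is illustrative, it assumes the (u, v) grid coordinates have already been parsed out of the file names, and it rounds to integer shifts with `np.roll` where a full implementation might interpolate sub-pixel shifts.

```python
import numpy as np

def refocus(images, positions, alpha):
    """Shift-and-average refocusing (sketch with integer shifts).

    images    -- list of H x W x 3 float arrays (sub-aperture views)
    positions -- (u, v) camera-grid coordinates parsed from file names
    alpha     -- multiplier on the shifts, controlling the focal depth
    """
    positions = np.asarray(positions, dtype=np.float64)
    center = positions.mean(axis=0)
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (u, v) in zip(images, positions):
        # shift each view toward the center view, scaled by alpha
        dy = int(round(alpha * (center[1] - v)))
        dx = int(round(alpha * (center[0] - u)))
        acc += np.roll(img, (dy, dx), axis=(0, 1))
    return acc / len(images)
```

With alpha = 0 no view is shifted, so this reduces to a plain average, which is why the far side of the board (the region with the least parallax) comes into focus.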

"Chess" shift = 0
"Chess" shift = 1
"Chess" shift = 2
"Chess" shift = 3

Part 2: Aperture Adjustment

In order to adjust the aperture of the image, I employed a technique similar to the one above. However, instead of shifting and averaging every image in the set, I selected only the images within a specific radius of the focus image, which here is the center image. For a radius of 0, the result is just the center image and thus has no aperture widening or blurring effect; everything is in sharp focus. The wider the radius, the larger the aperture we are mimicking and the more blurred out the surroundings become.
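
The selection step can be sketched as below. This is a simplified sketch (the name `adjust_aperture` is illustrative): it assumes the same (u, v) grid coordinates as in refocusing and averages the selected views directly, i.e. it takes the views as already aligned at the chosen focal depth.

```python
import numpy as np

def adjust_aperture(images, positions, radius):
    """Average only the views within `radius` of the central camera
    position, mimicking a larger or smaller aperture."""
    positions = np.asarray(positions, dtype=np.float64)
    center = positions.mean(axis=0)
    # keep only the views whose grid position is close to the center
    selected = [img for img, p in zip(images, positions)
                if np.linalg.norm(p - center) <= radius]
    return np.mean(selected, axis=0)
```

A radius of 0 keeps only the central view (no blur), and a radius larger than the grid keeps every view, matching the full-average case.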

"Chess" aperture radius = 0
"Chess" aperture radius = 1
"Chess" aperture radius = 3
"Chess" aperture radius = 5

Part 3: Summary

I learned from this project how simply and elegantly we are able to do post-production on images (given sufficiently well collected data) so as to alter the appearance and mimic different camera specifications. It gave me a greater appreciation for the potential of this particular area of computational photography.

Part 4: Bells and Whistles

I implemented the bells and whistles option of using real data. To do this, I used my phone camera to take images of a dynamic scene from varying angles (up / down, left / right), trying to mimic the camera grid of the Stanford chess dataset. I then loaded these images into my iPython notebook and ran the same code to refocus depth and adjust the aperture. However, as one might expect, the freehanded picture-taking of a computer science major was not a great basis for averaging. Because the differences between images were so large and so inconsistent, I achieved some rather poor results, which I have included below.

Reference
Refocused with shift = 0
Aperture radius = 3

Project 2: Image Quilting

Overview

In this project, I followed the SIGGRAPH 2001 paper "Image Quilting for Texture Synthesis and Transfer" by Profs. Efros and Freeman in order to synthesize larger images from small sample textures.

Random, Overlapping, and Seam-Finding Methods for Quilting

The first step of this project was to fill an output image with randomly sampled patches of a texture. While this very simple method gives a fun result, there is much room for improvement. The first improvement was the simple quilting method, which overlaps each new patch with the right and bottom edges of the patches already placed. It improves upon the random algorithm by taking the SSD between the existing overlap region of the output image and each possible patch from the sample texture. From this, I constructed a list of patches whose SSD against the output template was under a specific tolerance threshold. To keep some randomness, I took a random sample from that list and layered it on top of the overlap, building the output image patch by patch. The final improvement was seam finding: to create a better blended image, instead of a strictly straight seam between patches, we take the path of minimum cost through the overlap. I implemented this using dynamic programming. The results from each of the three methods are below.
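
The overlap-SSD selection at the heart of the simple quilting method can be sketched as follows. This is an illustrative sketch on a grayscale texture, not my exact code: the function name `choose_patch` and the `tol` parameterization (keep patches within `tol` times the best SSD) are assumptions.

```python
import numpy as np

def choose_patch(template, mask, texture, patchsize, tol=1.1):
    """Pick a patch whose overlap region matches the output so far.

    template  -- patchsize x patchsize region of the output built so far
    mask      -- boolean array marking the overlap pixels to compare
    texture   -- the full sample texture (grayscale for brevity)
    tol       -- keep patches with SSD <= tol * best SSD, then sample
                 one at random to preserve some variety
    """
    h, w = texture.shape
    ssd = np.full((h - patchsize + 1, w - patchsize + 1), np.inf)
    for i in range(ssd.shape[0]):
        for j in range(ssd.shape[1]):
            patch = texture[i:i + patchsize, j:j + patchsize]
            # only the overlap pixels (mask == True) contribute
            ssd[i, j] = np.sum(((patch - template) * mask) ** 2)
    best = ssd.min()
    ys, xs = np.where(ssd <= tol * best + 1e-8)
    k = np.random.randint(len(ys))
    i, j = ys[k], xs[k]
    return texture[i:i + patchsize, j:j + patchsize]
```

Random sampling then reduces to the special case where the mask is empty and every patch qualifies.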

Random sampling
Quilting simple
Quilting with seam finding

Below is an illustration of seam finding. There are two patches being compared: one from the current output image, and the randomly selected patch that will overlap it. I have also included the seam found by the minimum-cost algorithm, which runs through the overlap along the least costly, and therefore best blended, path. While this example shows the overlap on the right side of the patch, I was able to apply the same logic to the bottom overlap by simply transposing certain x, y coordinates.

Template patch
Sample patch
Seam overlaid on cost image

As explained above, the main idea behind image quilting texture synthesis is building up an output image by continually adding the least costly patch from the sample texture. The way I did that in this project was to define some number of overlap pixels and find the SSD between the output and each texture patch over that overlap. Seam finding helps decrease the harshness of the lines in the final images below.

Ice texture
Wood texture
Another ice texture
Text texture

Finally, I was able to apply the seam-cut quilting method to perform texture transfer. A texture transfer is basically rendering some sample texture in the shape of some other image. The modifications needed were to introduce some sort of goal image and remove the tolerance level that I described earlier. We do not want randomness here; rather, we want the texture patch that most closely resembles the corresponding patch of our goal image. Therefore, we take the SSD between the goal image template and each texture patch to select the one to seam-cut and add to our output image. I have included some of my results below.
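
The modified selection step can be sketched like this. It is a grayscale sketch with an illustrative name (`transfer_patch`); as described above, it drops the tolerance and the randomness and simply returns the single patch with the lowest SSD against the goal template.

```python
import numpy as np

def transfer_patch(goal_template, texture, patchsize):
    """Pick the texture patch that most closely matches the
    corresponding patch of the goal image (no tolerance, no
    randomness)."""
    h, w = texture.shape
    best, best_cost = None, np.inf
    for i in range(h - patchsize + 1):
        for j in range(w - patchsize + 1):
            patch = texture[i:i + patchsize, j:j + patchsize]
            cost = np.sum((patch - goal_template) ** 2)
            if cost < best_cost:
                best, best_cost = patch, cost
    return best
```

The chosen patch is then seam-cut into the output exactly as in synthesis.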

Feynman
White rice texture
Transferred rice texture
Feynman
Ice texture
Transferred ice texture

What I Learned

The most important thing I learned in this part is that even though some technical ideas may appear simple, it takes a deep understanding to conceptualize an implementation and carry it out in a way that makes sense not only to oneself but also to an objective viewer trying to appreciate the beauty of such computational photography methods.

Bells and Whistles

I implemented the bells and whistles option of writing my own cut function. Although I was able to use the provided MATLAB code as a reference, I wrote a Python version in order to implement seam finding. My version uses dynamic programming to build up the cumulative cost of every possible seam, at each step taking the least costly option among the upper-left, upper-middle, and upper-right pixels.
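
The dynamic program can be sketched as below. This is a sketch rather than my exact notebook code (the name `min_cut_path` is illustrative); it finds a vertical seam through a cost image, such as the squared difference between the two overlapping patches, and the bottom-edge case follows by transposing.

```python
import numpy as np

def min_cut_path(cost):
    """Minimum-cost vertical seam through a cost image via
    dynamic programming.

    cost -- H x W array, e.g. squared differences in the overlap.
    Returns the seam's column index for each row.
    """
    h, w = cost.shape
    dp = cost.astype(np.float64).copy()
    for i in range(1, h):
        for j in range(w):
            # cheapest of the left, middle, and right parents above
            lo, hi = max(j - 1, 0), min(j + 2, w)
            dp[i, j] += dp[i - 1, lo:hi].min()
    # backtrack from the cheapest entry in the last row
    path = np.empty(h, dtype=int)
    path[-1] = int(np.argmin(dp[-1]))
    for i in range(h - 2, -1, -1):
        j = path[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        path[i] = lo + int(np.argmin(dp[i, lo:hi]))
    return path
```

Pixels on one side of the returned path are kept from the existing output, and pixels on the other side come from the new patch.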