Final Project

Roshni Rawal
CS 194-26 Fall 2020
December 18, 2020

Project 1: Light Field Camera

This project uses sets of rectified images from The Stanford Light Field Archive to explore how shifting the images by scaled offsets and averaging them can refocus the scene at different depths and simulate different aperture sizes. Each set contains a 17x17 grid of images, 289 in total. I used the chess set and the jellybean set in this project.

Part 1: Depth Refocusing

When we average all of the images without any shifting, we get an image that is blurry up close and sharp far away. To change the focus, we compute each image's shift with respect to the center image (which we take to be the image at grid position (8, 8)) and average the shifted images. More precisely, given our 17x17 grid, for the image at grid coordinates (x, y) we take the shift to be x_shifted = alpha*(x-8) and y_shifted = alpha*(y-8). We apply this shift to each image using np.roll and then average all of the images together. As we increase or decrease alpha, the focus depth changes, as depicted in the gifs below.
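A minimal sketch of this refocusing loop, assuming the 289 views have been loaded into a dict keyed by their (x, y) grid coordinates (a hypothetical layout; the archive encodes the coordinates in the filenames):

```python
import numpy as np

def refocus(images, alpha, center=8):
    """Shift each view by alpha times its grid offset from the center
    view, then average. images maps (x, y) grid coords to HxWx3 arrays."""
    total = np.zeros(next(iter(images.values())).shape, dtype=np.float64)
    for (x, y), img in images.items():
        dx = int(round(alpha * (x - center)))
        dy = int(round(alpha * (y - center)))
        # np.roll wraps pixels around the image border, which is
        # acceptable here because the shifts are small
        total += np.roll(img, shift=(dy, dx), axis=(0, 1))
    return total / len(images)
```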

average of all chess pictures, no shifting

gif of shifts from alpha = -3 to alpha = 2 with a step of 0.5



gif of shifts from alpha = -5 to alpha = 0 with a step of 0.25



Part 2: Aperture Adjustment

For this part, we average only a subset of the images to change the effective aperture. Once again using our 17x17 grid, we average the images within a radius d of the center image, for d from 0 to 7. For example, radius 3 includes every image within 3 grid positions of the center image, a 7x7 block. The more images we average, the larger the synthetic aperture and the shallower the depth of field, so this lets us keep the focus on one point while adjusting the aperture. For this part we keep alpha constant; I set mine to 1.
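A sketch of the aperture selection, reusing the hypothetical refocus function above and measuring the radius as the larger of the two grid offsets:

```python
def adjust_aperture(images, d, alpha=1.0, center=8):
    """Average only the views within grid radius d of the center view,
    mimicking a (2d+1) x (2d+1) synthetic aperture."""
    subset = {(x, y): img for (x, y), img in images.items()
              if abs(x - center) <= d and abs(y - center) <= d}
    return refocus(subset, alpha, center)
```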

d = 0

d = 7

gif of d going from 0 to 7 with the center of the board as the focus point



gif of d going from 0 to 7 with the front of the image as the focus point

Learnings

The gifs created by this project are so beautiful. I thought it was incredible how we can create this video-like effect using the rectified images and a combination of shifting and scaling. Amazing how simple transformations can create such interesting effects that mimic real cameras!

Project 2: Image Quilting

In this project we stitch together patches sampled from a source texture in order to "quilt" a larger image that looks as much like the original texture as possible.

Part 1: Randomly Sampled Textures

We first tried randomly sampling patches and stitching them together, left to right, in a grid. As you can see in the results section, the result is not very convincing: there are clearly visible seams between the blocks, and adjacent blocks do not match each other.
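A minimal sketch of this random-sampling baseline, with illustrative names (quilt_random, patch_size) that stand in for my actual code:

```python
import numpy as np

def quilt_random(texture, out_size, patch_size, rng=None):
    """Tile an output image with patches drawn uniformly at random
    from the source texture: no overlap and no matching."""
    rng = rng or np.random.default_rng()
    H, W = texture.shape[:2]
    n = out_size // patch_size  # number of patches per row and column
    out = np.zeros((n * patch_size, n * patch_size) + texture.shape[2:],
                   dtype=texture.dtype)
    for i in range(n):
        for j in range(n):
            y = rng.integers(0, H - patch_size + 1)
            x = rng.integers(0, W - patch_size + 1)
            out[i*patch_size:(i+1)*patch_size,
                j*patch_size:(j+1)*patch_size] = \
                texture[y:y+patch_size, x:x+patch_size]
    return out
```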

Part 2: Overlapping Patches

For the overlapping patches part, our strategy becomes a little smarter. Instead of randomly stitching patches together, we choose a random patch for the top left corner of the image, and then for each subsequent position we 1) identify the overlap region between the already-placed patches and multiple candidate patches, 2) compute the SSD of these overlap regions, 3) choose a patch whose overlap SSD is within some tolerance (in my case I simply chose the patch with the lowest SSD), and 4) overlay the chosen patch. I had to handle three cases: overlap with the patch to the left, overlap with the patch above, and overlap with both, as shown in the sketch below. These results show a noticeable improvement over the original method. I think this is particularly apparent in the text image in the results section: we go from having random words everywhere to words in lines. Although the seams are less prominent with this method, they are still quite noticeable. We will address this in the next section.
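A sketch of the three-case overlap cost, with hypothetical names (ssd_overlap, out, overlap) standing in for my actual implementation:

```python
import numpy as np

def ssd_overlap(candidate, out, y, x, patch_size, overlap):
    """Sum of squared differences between a candidate patch and the
    pixels already placed in the output, over the overlap strips."""
    c = candidate.astype(np.float64)
    cost = 0.0
    if x > 0:  # overlap with the patch to the left
        left = out[y:y+patch_size, x:x+overlap].astype(np.float64)
        cost += np.sum((c[:, :overlap] - left) ** 2)
    if y > 0:  # overlap with the patch above
        top = out[y:y+overlap, x:x+patch_size].astype(np.float64)
        cost += np.sum((c[:overlap, :] - top) ** 2)
    if x > 0 and y > 0:
        # the corner strip was counted in both terms; subtract it once
        corner = out[y:y+overlap, x:x+overlap].astype(np.float64)
        cost -= np.sum((c[:overlap, :overlap] - corner) ** 2)
    return cost
```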

Part 3: Seam Finding

Although we have chosen the patches with the lowest SSD, visible seams remain when we overlay them. To reduce their appearance we use seam finding, which finds the minimum boundary cut between two overlapping patches. This minimum boundary cut is returned as a mask from the "cut" function provided to us: the mask is 0 at the places where an image should not show up and 1 at the places it should. We stitch our patches together using this seam finding method, and the results are much, much improved. Seams are no longer visible, and for most of the results it is difficult to tell how the quilted image differs from the original texture. Below is the result of seam finding on two patches.
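A sketch of how the mask blends two overlapping strips; I am assuming the provided cut function takes the per-pixel squared-error surface of the overlap and returns the binary mask described above:

```python
import numpy as np

def blend_overlap(left_overlap, right_overlap, cut):
    """Blend two overlap strips along the minimum boundary cut."""
    err = (left_overlap.astype(np.float64)
           - right_overlap.astype(np.float64)) ** 2
    if err.ndim == 3:
        err = err.sum(axis=2)  # collapse color channels to one error map
    mask = cut(err)  # provided helper: 1 where the left patch shows
    if mask.ndim < left_overlap.ndim:
        mask = mask[..., None]  # broadcast the mask over color channels
    return mask * left_overlap + (1 - mask) * right_overlap
```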

Seam Finding Results

left patch

right patch

left overlap

right overlap



min boundary cut left

min boundary cut right

patches put together

patch in image



Results

My results are below. I am super impressed with the seam finding technique. It looks absolutely amazing and is able to recreate textures, a big difference from the overlapping-patch method. Especially in the blue rock and the alien brain images, I think the seam finding works exceptionally well. I have to look really closely at the images to find any imperfections.

Text

random

overlapping

seam finding



Bricks

random

overlapping

seam finding



Grapes

original

random

overlapping

seam finding




Blue Rock

original

random

overlapping

seam finding



Alien Brain

original

random

overlapping

seam finding



Learnings

This project was super interesting conceptually but took a lot of time to implement due to multiple bugs. I really enjoyed seeing my final results using the seam finding. To me, the coolest image was the text, because I thought it was so interesting that we could actually recreate something that looks like text someone would read. I thought the concepts behind the overlapping patches and the seam finding were really interesting as well. It's amazing how much SSD can do.