COMPSCI 194-26: Final Project (Assigned)

Fall 2021, Kevin Mo (kevmo@berkeley.edu), cs194-26-aaf

Introduction

Two assigned projects are presented below as my final project in CS194-26. The first explores light field cameras, where light field information is used to simulate aperture adjustment and depth refocusing in post-processing. The second is image texture quilting, where pre-existing textures are sampled and tiled using smart synthesis algorithms to produce natural-looking expanded textures.

Light Field Cameras

Depth Refocusing

In the renders below, we simulate depth refocusing by shifting each image toward the center of the grid according to its position data before averaging. An arbitrary parameter t, ranging from 0 to 1, determines the degree of shifting: 0 applies no shift, and 1 shifts every image position fully to the center.
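The shift-and-average step above can be sketched as follows. This is a minimal sketch, not the project's actual code: the function name `refocus` and the `(N, H, W)` / `(N, 2)` array layout are assumptions, and integer `np.roll` shifts stand in for proper sub-pixel interpolation.

```python
import numpy as np

def refocus(images, positions, t):
    """Shift each sub-aperture image toward the grid center by fraction t,
    then average. t=0 averages in place; t=1 shifts fully to the center.
    images: (N, H, W) array; positions: (N, 2) grid coordinates."""
    center = positions.mean(axis=0)
    out = np.zeros_like(images[0], dtype=float)
    for img, pos in zip(images, positions):
        dy, dx = t * (center - pos)
        # Integer shift via np.roll; a real implementation would
        # interpolate for sub-pixel accuracy.
        shifted = np.roll(np.roll(img, int(round(dy)), axis=0),
                          int(round(dx)), axis=1)
        out += shifted
    return out / len(images)
```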

Render of averaging all images in the dataset (t=0)

Render of a “refocused” image, created by shifting images according to positional data (t is around 0.5)

A GIF rendering of t values from 0 to 1, using step 0.05.

Aperture Adjustment

In addition, we can simulate adjusting the camera aperture by averaging only a subset of the images, selected according to their position in the light field grid. The method I employed is to pick the center position and choose images by their radial distance from that center point. With a single image sample, we get a clear, unblurred picture of the scene, corresponding to a small aperture. As we increase the radial distance, we increase the simulated aperture and get a blurrier render as a result. Below are some results of this approach in action.
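The radial selection described above can be sketched as below. Again a hedged sketch: `simulate_aperture` and the array layout are hypothetical names, not the project's actual code.

```python
import numpy as np

def simulate_aperture(images, positions, radius):
    """Average only the sub-aperture images within `radius` of the grid
    center; radius=0 keeps just the center image (small aperture, sharp),
    larger radii include more views (large aperture, blurrier)."""
    center = positions.mean(axis=0)
    dists = np.linalg.norm(positions - center, axis=1)
    return images[dists <= radius].mean(axis=0)
```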

A GIF rendering of aperture simulation using radius (R) values from 0 to 60.

Summary

In the end, I learned a lot about light field cameras and their potential for simulating various camera parameters (depth refocusing and aperture, for example) in post-processing, after the scene has already been captured. I only hope that more modern cameras incorporate this type of technology in the future!

Image Quilting

Randomly Sampled Texture

To start off our texture synthesis endeavor, we use the naive method of tiling random samples of the texture against each other: a random patch of the original texture is taken and tiled to create an expanded texture. For textures whose features are continuous throughout, such as the brick texture, the randomly sampled result works well enough, but seams are clearly visible for many textures (for example, the text texture).
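The naive tiling above amounts to the following sketch (grayscale for brevity; `quilt_random` is a hypothetical name, and the out/patch parameters mirror the 560/80 values used in the renders):

```python
import numpy as np

def quilt_random(texture, out_size, patch):
    """Fill an out_size x out_size image with randomly sampled square
    patches from `texture` (the naive, seam-prone method)."""
    rng = np.random.default_rng(0)
    h, w = texture.shape
    out = np.zeros((out_size, out_size), dtype=texture.dtype)
    for y in range(0, out_size, patch):
        for x in range(0, out_size, patch):
            ty = rng.integers(0, h - patch + 1)
            tx = rng.integers(0, w - patch + 1)
            tile = texture[ty:ty + patch, tx:tx + patch]
            # Crop the tile at the right/bottom edges of the output.
            ph = min(patch, out_size - y)
            pw = min(patch, out_size - x)
            out[y:y + ph, x:x + pw] = tile[:ph, :pw]
    return out
```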

Randomly sampled textures (out = 560, patch = 80)

Overlapping Patches

Below, we use a smarter approach that places patches so that their overlapping regions agree. We use the sum of squared differences (SSD) to measure similarity between overlap regions, and a high-similarity (low-SSD) patch is randomly chosen at each step of the tiling process. The overlap region is overwritten by the newly selected patch, and the process continues until the expanded texture is fully tiled.
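The SSD scoring and random low-cost selection above can be sketched as follows. This is a simplified illustration (left-overlap only, grayscale); `ssd_overlap`, `pick_low_cost`, and the `tol` tolerance are assumed names, not the project's actual code.

```python
import numpy as np

def ssd_overlap(out_strip, texture, patch, overlap):
    """SSD between the existing left-overlap strip and the left strip of
    every candidate patch; returns a cost map over top-left positions."""
    h, w = texture.shape
    costs = np.empty((h - patch + 1, w - patch + 1))
    for y in range(h - patch + 1):
        for x in range(w - patch + 1):
            cand = texture[y:y + patch, x:x + overlap]
            costs[y, x] = np.sum((cand - out_strip) ** 2)
    return costs

def pick_low_cost(costs, tol=1.1):
    """Randomly pick among candidate positions within tol * minimum cost,
    so tiling stays varied rather than always taking the single best match."""
    rng = np.random.default_rng(0)
    ys, xs = np.where(costs <= costs.min() * tol)
    i = rng.integers(len(ys))
    return ys[i], xs[i]
```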

SSD of the full brick texture with a random brick patch, including 1e3 padding to prevent OOB patches

Below are some results using this technique: seams are still present but slightly less visible in the text example, and the technique also works fairly well on the brick texture.

Results using overlapping patch textures (out = 560, patch = 80, overlay = 20)

Seam Finding

The seam finding approach enhances the overlapping approach from before: instead of replacing the full overlap region with each new patch, we find a minimum-cost seam through the overlap's error surface and use it as a mask between the two textures. The patches are then blended along this seam, producing a far less visible transition. The intermediate steps of the process are shown below.
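The minimum-cost seam can be found with a simple dynamic program over the overlap's squared-error surface. A minimal sketch for a vertical seam (one column index per row); `min_cost_seam` is a hypothetical name:

```python
import numpy as np

def min_cost_seam(err):
    """Vertical minimum-cost path through an (H, W) error surface.
    Each row's seam pixel must be within one column of the row above.
    Returns one column index per row."""
    h, w = err.shape
    cost = err.astype(float).copy()
    # Forward pass: accumulate the cheapest way to reach each pixel.
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            cost[y, x] += cost[y - 1, lo:hi].min()
    # Backward pass: trace the cheapest path from the bottom row up.
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam
```

Pixels left of the seam are kept from the existing texture and pixels right of it come from the new patch, which is exactly the mask shown in the intermediate images.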

Source textures (with mask applied on left)

Mask result after determining top-bottom seam between overlaps

Masked version of the source textures

Masked version of each image and combined result

Below are the text and brick textures synthesized using the overlap-aware seam finding approach, where we can immediately observe fewer seams and a more natural-looking render.

Texture Transfer

Below, we apply our previous approaches (up to seam finding), with one modification: the SSD comparison now also measures similarity against a separate target image to “transfer” onto, rather than against the resampled texture alone. The algorithm samples and tiles texture patches so that the result follows the appearance of the target image as closely as possible, while still applying the smarter algorithms (overlapping, seam finding) from before. The results can be seen below!
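The modified patch cost can be sketched as a weighted blend of the usual overlap SSD and a correspondence SSD against the target image. This is an assumed formulation (the function name `transfer_cost` and the `alpha` weight are illustrative, not values from the project):

```python
import numpy as np

def transfer_cost(tex_patch, overlap_err, target_patch, alpha=0.8):
    """Blend texture coherence (the precomputed overlap SSD) with
    fidelity to the target image at this tile's location.
    alpha=1 reduces to plain quilting; lower alpha follows the target."""
    corr_err = np.sum((tex_patch - target_patch) ** 2)
    return alpha * overlap_err + (1 - alpha) * corr_err
```

At each tile location, the candidate patch minimizing this combined cost (among a low-cost pool, as before) is quilted in with seam finding.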