CS 194-26 Final Project

Deepshika Dhanasekar & Jared Rosner

Table of Contents

1. Image Quilting

2. Gradient Domain Fusion

3. Seam Carving

Image Quilting

The goal of this assignment is to implement the image quilting algorithm for texture synthesis and transfer. This project is based on the SIGGRAPH 2001 paper by Efros and Freeman. These techniques have multiple important applications including image stitching, image completion, image retargeting, and blending.

Texture Synthesis

The first part of this project consisted of implementing texture synthesis: creating a natural-looking texture block from a smaller texture sample. The naive alternatives both fail: simply tiling copies of the sample produces harsh boundaries, while resizing the sample produces blurriness. The paper suggests three different approaches to texture synthesis, each performing significantly better than its predecessor:


  • Randomly Sampled Texture: this approach creates a blank image in the requested output size and fills it by randomly sampling patches from the sample texture image. While simple to implement, this approach does not have favorable results because the boundaries are very harsh and do not look natural at all.

  • Overlapping Patches: this approach attempts to pick neighboring patches that are similar along their shared edges, to reduce the appearance of harsh boundaries. For each patch that needed to be filled, I sampled candidate patches at 10% of the pixel locations in the sample texture image. I then calculated the sum of squared differences (SSD) between each candidate's overlapping regions and the patches above and to the left, and filled the output image with the patch that yielded the smallest SSD; in other words, the patch whose border was most similar to its neighboring patches.

  • Overlapping Patches with Seam Finding: this approach takes the basis of the overlapping patches approach. However, instead of completely replacing the overlapping region with the new patch, the algorithm finds the minimum cut path between the two patches in the overlapping region. This makes the border between the two patches seem much more organic because it is no longer a straight line.
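As a rough sketch of the overlap scoring used by the two overlapping-patch approaches (the function names and the fixed overlap width are illustrative, not the project's actual code):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equally sized arrays."""
    return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

def score_patch(candidate, left_patch, top_patch, overlap):
    """Score a candidate patch by the SSD of its overlap regions
    against the already placed left and top neighbors."""
    cost = 0.0
    if left_patch is not None:
        # left edge of the candidate vs. right edge of the left neighbor
        cost += ssd(candidate[:, :overlap], left_patch[:, -overlap:])
    if top_patch is not None:
        # top edge of the candidate vs. bottom edge of the top neighbor
        cost += ssd(candidate[:overlap, :], top_patch[-overlap:, :])
    return cost
```

The patch with the lowest score among the sampled candidates is then pasted directly (or, in the seam-finding variant, merged with its neighbors along the min-cut path).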

Comparisons of Texture Synthesis Techniques


Five texture samples are compared below. For each sample, the panels show: the original texture sample, the randomly sampled result, the overlapping-patches result, and, for the seam-finding approach, the overlapping patches, the min-cut mask, and the final overlapping patches with the seam cut.

Texture Transfer

The second part of this project consists of transferring textures from sample images onto new images. This is achieved by sampling patches that are most similar to the corresponding patch in the target image while still blending in with the texture patches already chosen. I used a similar algorithm to overlapping patches with seam finding, but added an additional term to the overlap SSD cost: the SSD between the candidate patch and the corresponding patch in the target image. By doing that, I achieved the following results.


Original Texture Sample
Original Image
Texture Transferred Image

Original Texture Sample
Original Image
Texture Transferred Image

Original Texture Sample
Original Image
Texture Transferred Image

This Starry Night example did not perform well because the town contains too much detail to recreate in a single iteration of texture transfer. I suspect it would perform much better with multiple iterations or a neural-network approach to style transfer.


Original Texture Sample
Original Image
Texture Transferred Image

The texture transfer in the examples above does a good job of capturing general shape and shadow information, but does not do great with capturing actual details of the target image. The captured detail improves as the patch size decreases, but smaller patches mean many more patches to place, which sharply increases runtime.
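The combined cost used to rank candidate patches during transfer can be sketched as follows (the weight alpha = 0.8 is illustrative, not the project's tuned value):

```python
import numpy as np

def transfer_cost(overlap_cost, candidate, target_patch, alpha=0.8):
    """Total cost for a candidate patch in texture transfer: a weighted
    sum of the usual overlap SSD and a 'correspondence' SSD between the
    candidate and the co-located patch of the target image."""
    corr = np.sum((candidate.astype(float) - target_patch.astype(float)) ** 2)
    return alpha * overlap_cost + (1.0 - alpha) * corr
```

Lowering alpha pulls the synthesized output toward the target image's appearance; raising it favors seamless texture.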

Bells and Whistles

Min Cut Function: I implemented my own version of the min-cut function in Python based on the algorithm defined in the SIGGRAPH 2001 paper by Efros and Freeman. Please refer to the function copied into the README.txt submitted with the code for this project.
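The README version of the function is not reproduced here, but a minimal sketch of a dynamic-programming min-cut over a vertical overlap (assuming grayscale patches; names are illustrative) looks like:

```python
import numpy as np

def min_cut_mask(overlap_a, overlap_b):
    """Minimum-error vertical cut through the overlap between two
    grayscale patches. Returns a boolean mask that is True where
    patch A is kept (left of the cut)."""
    err = (overlap_a.astype(float) - overlap_b.astype(float)) ** 2
    h, w = err.shape
    # cumulative cost of the cheapest path ending at each pixel
    cost = err.copy()
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    # backtrack the cheapest path from the bottom row up
    mask = np.zeros((h, w), dtype=bool)
    j = int(np.argmin(cost[-1]))
    for i in range(h - 1, -1, -1):
        mask[i, :j] = True
        if i > 0:
            lo, hi = max(j - 1, 0), min(j + 2, w)
            j = lo + int(np.argmin(cost[i - 1, lo:hi]))
    return mask
```

The final patch can then be composited as `np.where(mask, patch_a, patch_b)`, with an analogous transposed cut for horizontal overlaps.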


Gradient Domain Fusion

The goal of this assignment is to explore gradient-domain processing. Using Poisson blending, I was able to blend objects from a source image into a target image without a harsh seam. This is a different approach from the Laplacian blending implemented in an earlier project this semester.

Poisson Blending

The idea behind Poisson blending follows this equation:

v = argmin_v  Sum_{i in S, j in N_i, j in S} ((v_i - v_j) - (s_i - s_j))^2  +  Sum_{i in S, j in N_i, j not in S} ((v_i - t_j) - (s_i - s_j))^2

where S is the masked region, N_i is the set of four neighbors of pixel i, s is the source image, and t is the target image. The first summation minimizes the difference between the gradients of v and the gradients of the source image for neighbor pairs inside the mask. The second summation does the same for pairs that straddle the mask boundary, where the neighbor's value is fixed to the target image. Let v(x, y) be the RGB value that will replace pixel (x, y) in the target image. We can solve for v(x, y) for all x and y using least squares. I followed this procedure:

1. For each pixel i inside the mask and each of its four neighbors j, add one equation.
2. If j is also inside the mask, the equation is v_i - v_j = s_i - s_j.
3. If j is outside the mask, the equation is v_i = t_j + (s_i - s_j).

Once we had all these equations, we converted them into matrix-vector form, with all the v(x, y) variables stored in a vector. We used a sparse matrix formulation to store the data efficiently and speed up computation. Then we used least squares to solve the matrix-vector equation for all the v(x, y) variables. These values directly replaced the corresponding values in the target image to produce the Poisson-blended image.
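As a sketch of this least-squares setup, reduced to a toy 1D signal so it stays self-contained (the image version builds the same kinds of equations over 2D neighbors, and uses a scipy sparse matrix rather than the dense one shown here):

```python
import numpy as np

def reconstruct_1d(signal):
    """Rebuild a 1D signal from its gradients: one equation per neighbor
    pair, v[i+1] - v[i] = s[i+1] - s[i], plus one equation pinning
    v[0] = s[0] so the solution is unique."""
    n = len(signal)
    num_eq = n  # (n - 1) gradient equations + 1 anchor equation
    A = np.zeros((num_eq, n))
    b = np.zeros(num_eq)
    for i in range(n - 1):
        A[i, i + 1], A[i, i] = 1.0, -1.0
        b[i] = signal[i + 1] - signal[i]
    A[n - 1, 0] = 1.0       # anchor the first value
    b[n - 1] = signal[0]
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v
```

Because the gradient equations plus the single anchor fully determine the signal, least squares recovers the input exactly in this toy case.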

Toy Reconstruct

To test the Poisson blending algorithm, we first tried to reconstruct an image with it. In this case there is no separate target image: the mask covers the entire source image, so the gradient equations (plus a single pinned pixel) determine the result. Here are the results of the reconstruction:

Original Image
Reconstructed Image

Poisson Blending Examples

Here are some examples of actually blending an object from one source image into another target image.


Example 1: Penguin & Hikers (Sample Image):


Source Image
Mask
Target Image
Blended Image

Example 2: Lion & Tree:


Source Image
Mask
Target Image
Blended Image

Example 3: Moon & Night Sky:


Source Image
Mask
Target Image
Blended Image

Example 4: Floatie & Pool:


Source Image 1
Source Image 2
Target Image
Blended Image

Example 5: Jellyfish & Sky:


Source Image 1
Mask
Target Image
Blended Image
Source Image 1
Mask
Target Image
Blended Image

Most of the examples shown here have similar backgrounds between the source and target images. However, you can see some failure cases where the target background has a similar color but a different texture, leading to obviously blurred boundaries. The colors mesh well, but because the textures differ, there are some artifacts. This is fixed with the mixed-gradients implementation in the Bells & Whistles below.

Bells and Whistles

Mixed Gradients: To improve the blending and incorporate elements that are in the background of the target image but not in the background of the source image, we implemented the mixed-gradients approach. Instead of always using the source-image gradient as the "b" value in the least-squares system, we picked either the source gradient or the target gradient, whichever had the larger absolute value. By doing this, we achieved much better results as shown below:


Example 1: Greek Letters and Bricks:


Original Poisson Blending
Mixed Gradients Poisson Blending

Example 2: Jellyfish and Sky:


Original Poisson Blending
Mixed Gradients Poisson Blending
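The per-neighbor-pair choice behind mixed gradients can be sketched as (a hypothetical helper, not the project's code):

```python
def mixed_gradient(s_i, s_j, t_i, t_j):
    """Guidance value for one neighbor pair in mixed-gradient Poisson
    blending: keep whichever of the source or target gradients has the
    larger magnitude."""
    ds = float(s_i) - float(s_j)  # source gradient
    dt = float(t_i) - float(t_j)  # target gradient
    return ds if abs(ds) > abs(dt) else dt
```

Strong edges in the target background (e.g. brick seams behind transparent letters) therefore survive the blend instead of being smoothed over by flat source gradients.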

Improved Color2Gray: One of the issues with converting to grayscale is the loss of contrast information. Using Poisson Blending, we can create a new Color2Gray function that preserves contrast information, essentially using the algorithm for the toy reconstruction problem. Here are the results of this function:


Original Image
Normal Grayscale
Poisson Blending Grayscale
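The exact gradient choice used for Color2Gray above is not reproduced here; one plausible contrast-preserving rule, sketched below as an assumption, is to take, for each neighbor pair, the color-channel gradient with the largest magnitude (shown for horizontal neighbors only):

```python
import numpy as np

def gray_gradients(rgb):
    """For each horizontal neighbor pair, pick the per-channel gradient
    with the largest magnitude as the guidance gradient for the gray
    image; the resulting field is then integrated with least squares."""
    diffs = rgb[:, 1:, :].astype(float) - rgb[:, :-1, :].astype(float)
    idx = np.argmax(np.abs(diffs), axis=-1)        # strongest channel per pair
    return np.take_along_axis(diffs, idx[..., None], axis=-1)[..., 0]
```

Equiluminant but differently colored regions then keep a nonzero gradient in the gray result, which a plain channel average would lose.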

Seam Carving

The goal of this assignment was to implement the basic seam carving algorithm presented in Seam Carving for Content-Aware Image Resizing. We were tasked with using this algorithm to design a program which can shrink an image (either horizontally or vertically) to a given dimension. The algorithm greedily removes one seam at a time and can be summarized by two main steps:

1. Calculate the importance of each pixel in an image, creating an energy map.
2. Identify and remove the minimum energy seam (continuous path of pixels from one edge of the image to the opposite edge) from the image across the specified dimension. Repeat until the image has reached the desired dimension.

The best seam was identified using a dynamic programming algorithm, sketched below for vertical seams (swap W and H for horizontal seams):

    # base case: the first row's cumulative cost is just its energy
    for j in range(W): table[0, j] = energy_map[0, j]
    # each pixel extends the cheapest of its three upper neighbors
    for i in range(1, H):
        for j in range(W):
            table[i, j] = energy_map[i, j] + min(table[i-1, max(j-1, 0) : min(j+2, W)])

The most important learning from this project was actually applying dynamic programming to calculate the best seam.
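Putting the table fill together with the backtracking it enables, a minimal sketch for removing one vertical seam (assuming a grayscale image and a precomputed energy map; names are illustrative) is:

```python
import numpy as np

def remove_vertical_seam(img, energy):
    """Fill the DP table, backtrack the minimum-energy vertical seam,
    and return the image with that seam removed (one pixel per row)."""
    h, w = energy.shape
    table = energy.astype(float).copy()
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            table[i, j] += table[i - 1, lo:hi].min()
    # backtrack from the cheapest bottom-row entry
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(table[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(table[i, lo:hi]))
    # drop exactly one pixel per row
    keep = np.ones((h, w), dtype=bool)
    keep[np.arange(h), seam] = False
    return img[keep].reshape(h, w - 1)
```

Shrinking by k columns just repeats this (recomputing the energy map each time), and carving rows is the same procedure on the transposed image.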

Seam Carving in Action

In this section, we determined the 'importance' of each pixel using an energy function. The energy function we chose was gradient magnitude, which sums the absolute values of the gradients in the x and y directions: E(I) = |d/dx(I)| + |d/dy(I)|.
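A minimal sketch of this energy function, using forward differences with edge padding (an implementation detail we are assuming here) so the map keeps the image's shape:

```python
import numpy as np

def gradient_energy(gray):
    """E(I) = |dI/dx| + |dI/dy| for a grayscale image, via forward
    differences; the last row/column is padded by repetition."""
    gray = gray.astype(float)
    dx = np.abs(np.diff(gray, axis=1, append=gray[:, -1:]))
    dy = np.abs(np.diff(gray, axis=0, append=gray[-1:, :]))
    return dx + dy
```

High-energy pixels sit on edges and textured regions, so minimum-energy seams naturally route through flat areas like sky or pavement.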


Identify the minimum energy seam:


Original Image
Energy Map
Lowest Energy Seam on Map (White)
Lowest Energy Seam on Image (Black)

Remove minimum energy seams until picture assumes the desired dimensions:


Original Image
Carved Horizontally

More Examples


Fridge:


Original Image
Carved Vertically
Carved Horizontally

Pavement:


Original Image
Carved Vertically

Mosaic:


Original Image
Carved Vertically

Mountains:


Original Image
Carved Vertically

Pork:


Original Image
Carved Vertically

Failure Cases

Images that do not turn out well tend to be those where the subject dominates the photo against a homogeneous background, or those where there is no single subject but rather an assembly of subjects.


One Subject:


Original Image
Carved Vertically (notice the face)

Original Image
Carved Horizontally (notice the pompom doggo face)

Assembly of Subjects:


Original Image
Carved Horizontally (notice the merged "r" and "y")

Original Image
Carved Vertically (notice the left puppies)