The main idea of this project is to implement the Image Quilting algorithm introduced in the paper
Image Quilting for Texture Synthesis and Transfer
by Efros and Freeman. The Image Quilting algorithm is used for texture synthesis and texture transfer.
For texture synthesis, following the paper, I will compare the results of 3 different stitching strategies:
1. Randomly select patches from the sample to stitch;
2. Select patches with minimum Sum of Squared Differences (SSD) between partially overlapped regions;
3. On top of (2), stitch 2 patches together along a min-SSD-cost path.
Although strategy 3 is not complicated, it greatly improves the stitching result, and we will use this strategy for texture transfer.
For texture transfer, I will try to apply different painting styles (Van Gogh, Picasso, Xu Beihong) to one painting (a traditional Japanese painting). Choosing parameters is important, including: the relative weights of the patch-patch stitching SSD and the patch-target matching SSD in the total SSD; the tolerance range for randomly choosing a patch; and, most importantly, the patch size and overlap size used when computing the patch-patch stitching SSD. I will compare the results of different patch sizes.
To see the difference between the 3 stitching strategies, I chose some sample textures as examples. The sample textures all have size roughly 200*133 pixels, and the patch size is 50 pixels. The output image contains 5*5 patches and has size 250*250 pixels. The implementations of the 3 strategies are described below.
1. Randomly select patches from the sample to stitch
For every patch position in the output, randomly select a sub-image (patch) of size 50*50 from the sample.
In this way, you will see some obvious color/shape discontinuities.
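A minimal sketch of strategy 1 in NumPy (function and variable names are mine, not from the report; sizes follow the text):

```python
import numpy as np

def quilt_random(sample, patch_size=50, n_patches=5, rng=None):
    """Strategy 1: tile the output with patches drawn at random
    from the sample texture, with no matching at the seams."""
    rng = np.random.default_rng(rng)
    h, w = sample.shape[:2]
    out = np.zeros((n_patches * patch_size, n_patches * patch_size) + sample.shape[2:],
                   dtype=sample.dtype)
    for i in range(n_patches):
        for j in range(n_patches):
            # pick an arbitrary top-left corner inside the sample
            y = rng.integers(0, h - patch_size + 1)
            x = rng.integers(0, w - patch_size + 1)
            out[i*patch_size:(i+1)*patch_size,
                j*patch_size:(j+1)*patch_size] = sample[y:y+patch_size, x:x+patch_size]
    return out
```

With a 133*200 sample this produces the 250*250 output described above; the discontinuities appear because neighboring patches are chosen independently.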
2. Select patches with min Sum of Squared Differences (SSD) between partially overlapped regions
Randomly select a patch (patch1) from the central part of the sample (with offset = overlap length from every edge) for the upper-left corner of the output. To select the patch to its right, compute the Sum of Squared Differences (SSD) between the overlapped regions of patch1 and every candidate patch in the central part of the sample (columns 40-60 of the left patch against columns 0-20 of the right patch), and find the minimum SSD. Then randomly choose a patch with SSD <= (1+tolerance_coef) * min_SSD. Similarly, choose patches for the whole first row of the output, and then for the whole first column by computing the SSD between the overlapped regions on the bottom of the top patch and the top of the bottom patch. Finally, choose the remaining patches from left to right and top to bottom by summing the SSD of the top/bottom overlapped region and the SSD of the left/right overlapped region.
This strategy ensures that neighboring patches have similar colors, but there will still be clear edges between patches.
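The SSD scoring and tolerance-based random choice can be sketched as follows (names and the exhaustive scan over all candidate positions are my assumptions; the report restricts candidates to the central part of the sample):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equally sized regions."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return np.sum(d * d)

def pick_patch(sample, patch_size, overlap, left_patch=None, top_patch=None,
               tolerance_coef=0.1, rng=None):
    """Strategy 2: score every candidate patch by the SSD of its overlap
    with already placed neighbors, then pick at random among candidates
    with SSD <= (1 + tolerance_coef) * min_SSD."""
    rng = np.random.default_rng(rng)
    h, w = sample.shape[:2]
    errors, coords = [], []
    for y in range(h - patch_size + 1):
        for x in range(w - patch_size + 1):
            cand = sample[y:y+patch_size, x:x+patch_size]
            e = 0.0
            if left_patch is not None:   # left/right overlap
                e += ssd(left_patch[:, -overlap:], cand[:, :overlap])
            if top_patch is not None:    # top/bottom overlap
                e += ssd(top_patch[-overlap:, :], cand[:overlap, :])
            errors.append(e)
            coords.append((y, x))
    errors = np.asarray(errors)
    ok = np.flatnonzero(errors <= (1 + tolerance_coef) * errors.min())
    y, x = coords[rng.choice(ok)]
    return sample[y:y+patch_size, x:x+patch_size]
```

For interior patches both `left_patch` and `top_patch` are given, so the two overlap SSDs are summed, exactly as in the scan order described above.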
3. On top of (2), stitch 2 patches together along a min-SSD-cost path
Instead of stitching 50*50 patches directly, we stitch the sides along a min-cost cut path in the overlapped regions. The min-cost cut path in an overlapped region is a continuous path with minimum total SSD on the SSD map of that region.
In this way, we can eliminate the clear edges between patches seen in strategy 2.
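Finding the min-cost cut through a vertical overlap is a small dynamic program over the per-pixel error map; a sketch (my names, for a vertical seam; the horizontal case is the transpose):

```python
import numpy as np

def min_cost_path(err):
    """Strategy 3: dynamic programming over the per-pixel squared-error
    map of an overlap. cost[i, j] is the cheapest top-to-bottom path
    ending at (i, j); each step may move to the 3 pixels below it.
    Returns the seam's column index for every row."""
    h, w = err.shape
    cost = err.astype(np.float64).copy()
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    # backtrack from the cheapest cell in the bottom row
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    return seam
```

Pixels left of the seam are then taken from the left patch and pixels right of it from the new patch, so the boundary follows wherever the two patches already agree.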
Left patch to stitch.
Size: 50 * (50 (patch size) + 10 (extended on each side for overlapping)) pixels
Right patch to stitch.
Area between the left edge and the rightmost dotted line: the overlapped region used for computing the SSD when choosing the patch to stitch, and the error patch for the min-cost cut path.
Left: Error Patch
Right: Error patch with min-cost cut path
Final stitched image
Below are some examples applying strategies 1, 2, and 3.
For texture transfer, we need another target image which we want to transfer the sample texture to. To apply texture transfer, I made some improvements to strategy 3. Instead of choosing patches based only on the SSD between neighboring patches, I added another error term: the SSD between the patch and the sub-image at the corresponding position on the target image. This way, the chosen patches will mimic the shape of the target image. The weight of each error is determined by a parameter alpha: total error for a patch = (1-alpha) * patch-patch stitching SSD + alpha * patch-targetPatch matching SSD.
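The combined error can be written as a small scoring function (a sketch; the names are mine, the formula is the one above):

```python
import numpy as np

def transfer_error(cand, left_patch, top_patch, target_patch, overlap, alpha=0.1):
    """Texture-transfer score: a convex combination of the patch-patch
    stitching SSD and the patch-target matching SSD,
    total = (1 - alpha) * stitch_SSD + alpha * target_SSD."""
    def ssd(a, b):
        d = a.astype(np.float64) - b.astype(np.float64)
        return np.sum(d * d)
    stitch = 0.0
    if left_patch is not None:   # left/right overlap with placed neighbor
        stitch += ssd(left_patch[:, -overlap:], cand[:, :overlap])
    if top_patch is not None:    # top/bottom overlap with placed neighbor
        stitch += ssd(top_patch[-overlap:, :], cand[:overlap, :])
    return (1 - alpha) * stitch + alpha * ssd(cand, target_patch)
```

A small alpha keeps seams smooth; a large alpha makes the output track the target's shape more closely at the cost of visible seams.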
Therefore, there are 4 parameters in total that I need to choose:
1. Alpha for relative importance of patch-patch stitching SSD and patch-targetPatch matching SSD.
2. Tolerance_coef, which determines the range of total error within which patches can be randomly chosen.
3. Patch size in pixels.
4. Overlap length in pixels (per patch; the total overlap area will have length 2*overlap_length).
After some experiments, I fixed alpha = 0.1, tolerance_coef = 0.05, and overlap_length = patch_size/5. For the patch size, I compared results for several options. Below are some examples of transferring a traditional Japanese painting into different artists' styles (and a rice-like texture).
Target image
300 * 202 (pixel, col * row)
Target image
300 * 399 (pixel)
As we can see, there is a tradeoff: the smaller the patch size, the more closely the output will resemble the target image, but it will also lose more of the characteristics of the artist's style in the sample.
We can make the result better by combining image quilting with blending (e.g. feathering or a Laplacian pyramid). In this example of using a toast texture to produce a face, we can blend the output face with the original toast sample using a Laplacian pyramid to produce a smooth face-in-toast image.
Sample: toast
67 * 63
Target image
100 * 100
Texture transfer result
patch size: 5
To put the result back onto the toast, I resized the toast to 200 * 183 and the texture transfer result to a size that fits into the toast. Below are the results of putting the texture transfer result directly on the toast (left) and using a Laplacian pyramid with 2 layers to blend them together (right).
Putting the texture transfer result directly on the toast.
Blended using a Laplacian pyramid.
Below are the Laplacian pyramid elements. The result is computed as:
image1_layer1 = Laplacian_filter(image1, sigma=1)
image2_layer1 = Laplacian_filter(image2, sigma=1)
filter_layer1 = Gaussian_filter(image3, sigma=1)
image1_layer2 = image1 - image1_layer1
image2_layer2 = image2 - image2_layer1
filter_layer2 = Gaussian_filter(image3, sigma=2)
result = image1_layer1*filter_layer1 + image2_layer1*(1-filter_layer1)
       + image1_layer2*filter_layer2 + image2_layer2*(1-filter_layer2)
Image1: Image to blend in.
Image 2: Background image.
Image 3: Filter.
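The two-layer blend above can be made concrete with SciPy; this sketch assumes the "Laplacian" layer is the Gaussian residual (image minus its Gaussian blur), a common stand-in, and the function and parameter names are mine:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blend_two_layer(im1, im2, mask, sigma1=1.0, sigma2=2.0):
    """Two-layer Laplacian-pyramid blend following the pseudocode above.
    im1: image to blend in; im2: background; mask: 1 where im1 should show."""
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)
    mask = mask.astype(np.float64)
    im1_layer1 = im1 - gaussian_filter(im1, sigma1)   # fine detail of im1
    im2_layer1 = im2 - gaussian_filter(im2, sigma1)   # fine detail of im2
    im1_layer2 = im1 - im1_layer1                     # low-frequency remainder
    im2_layer2 = im2 - im2_layer1
    f1 = gaussian_filter(mask, sigma1)                # sharper mask for fine layer
    f2 = gaussian_filter(mask, sigma2)                # softer mask for coarse layer
    return (im1_layer1 * f1 + im2_layer1 * (1 - f1)
            + im1_layer2 * f2 + im2_layer2 * (1 - f2))
```

Blending the coarse layer with a wider Gaussian mask hides the paste boundary, while the fine layer keeps the toast's texture crisp near the seam.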