Part 1: Frequency Domain
Part 1.1: Warm-up
As introduced in class, to sharpen an image we compute: original + α(original − blurred).
For the blurred image we use a Gaussian filter, and the
high-frequency image is the difference between the original and the blurred image, scaled by α.
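The sharpening step above can be sketched as follows (a minimal sketch using NumPy and SciPy; the function name and the default sigma/alpha are illustrative, not the values used for the report images):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(img, alpha=1.0, sigma=2.0):
    """Unsharp masking: original + alpha * (original - blurred).

    img is assumed to be a float image in [0, 1]. The sigma and
    alpha defaults are illustrative choices, not the report's values.
    """
    blurred = gaussian_filter(img, sigma=sigma)   # low-pass version
    detail = img - blurred                        # high-frequency component
    return np.clip(img + alpha * detail, 0.0, 1.0)
```

Clipping keeps the result a valid image, since boosting detail can push intensities outside [0, 1].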
Below are the images:
original image
blurred image
high frequency image (difference between original and blurred image)
sharpened image
Part 1.2: Hybrid Images
To create the hybrid image we apply a high-pass filter to one image and a low-pass filter to the
second image, then compute the mean of the two. As a result, the image with the high-pass filter applied is what we see
up close, while from farther away we see the low-pass-filtered image. Below are various image results:
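The hybrid construction can be sketched as below (a minimal sketch; the function name is hypothetical, and averaging the two filtered images follows the description above — sigma1 and sigma2 correspond to the [Sigma1 Sigma2] pairs reported under each result):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid(im_hi, im_lo, sigma1, sigma2):
    """Hybrid image: high frequencies of im_hi + low frequencies of im_lo.

    im_hi, im_lo: float images in [0, 1] of the same shape.
    sigma1 sets the high-pass cutoff, sigma2 the low-pass cutoff.
    """
    high = im_hi - gaussian_filter(im_hi, sigma=sigma1)  # high-pass
    low = gaussian_filter(im_lo, sigma=sigma2)           # low-pass
    return np.clip((high + low) / 2.0, 0.0, 1.0)         # mean of the two
```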
Hybrid Image 1: Derek and Nutmeg
image 1
image 2
image 1 fft
image 2 fft
image 1 highpass filter applied
image 1 highpass filter fft
image 2 lowpass filter applied
image 2 lowpass filter fft
image 1 and image 2 hybrid
[Sigma1 Sigma2] = [4 3.5]
image 1 and image 2 hybrid fft
Hybrid Image 2: Monkey and Tiger using color enhancement (Bells & Whistles)
image 1
image 2
image 1 fft
image 2 fft
image 1 highpass filter applied
image 1 highpass filter fft
image 2 lowpass filter applied
image 2 lowpass filter fft
image 1 and image 2 hybrid
[Sigma1 Sigma2] = [5 9]
image 1 and image 2 hybrid fft
image 1 and image 2 hybrid colored
[Sigma1 Sigma2] = [5 9]
Hybrid Image 3: Michelle Obama and Hillary Clinton using color enhancement (Bells & Whistles)
image 1
image 2
image 1 fft
image 2 fft
image 1 highpass filter applied
image 1 highpass filter fft
image 2 lowpass filter applied
image 2 lowpass filter fft
image 1 and image 2 hybrid
[Sigma1 Sigma2] = [2 7]
image 1 and image 2 hybrid fft
image 1 and image 2 hybrid colored
[Sigma1 Sigma2] = [2 7]
In Image 3 it is hard to see the high-pass-filtered Hillary Clinton, as the
low-pass-filtered Michelle Obama dominates. This is perhaps due to the mismatched lighting of the
images: the image of Hillary Clinton is significantly brighter than the image
of Michelle Obama. The use of color makes it slightly easier to visualize the low-pass and high-pass components,
as we can now more clearly identify the edge boundaries of the two images. That said,
the images should also be similar in color and lighting, as even in the colored hybrid of Image 3 the
high-pass component is difficult to make out.
Part 1.3: Gaussian and Laplacian Stacks
This part visualizes the Gaussian and Laplacian stacks that were created. These stacks are similar to pyramids but without the downsampling. The Gaussian stacks were created through repeated image
blurring, one blur per stack layer. The Laplacian stacks were then created by taking the difference
between adjacent Gaussian layers. Lastly, we can recreate the original image by summing the
Laplacian stack layers together with the last Gaussian stack image. As we progress through successive layers of the Gaussian
and Laplacian stacks, the image layers progress from high-frequency features
at the early levels to low-frequency features at the later levels. This can be visualized in the images below.
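The stack construction can be sketched as follows (a minimal sketch; function names and the per-level sigma are illustrative assumptions). Because each Laplacian level is the difference of adjacent Gaussian levels, the stack telescopes: summing every Laplacian level plus the final Gaussian level recovers the original exactly.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_stack(img, levels=6, sigma=2.0):
    """Gaussian stack: repeated blurring, no downsampling."""
    stack = [img]
    for _ in range(levels - 1):
        stack.append(gaussian_filter(stack[-1], sigma=sigma))
    return stack

def laplacian_stack(gstack):
    """Laplacian stack: differences of adjacent Gaussian levels,
    with the last Gaussian level appended so the whole stack
    sums back to the original image."""
    lstack = [g0 - g1 for g0, g1 in zip(gstack[:-1], gstack[1:])]
    lstack.append(gstack[-1])
    return lstack
```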
Image 1: Salvador Dali's Painting
original image
Gaussian Stacks
Stack Level 1
Stack Level 2
Stack Level 3
Stack Level 4
Stack Level 5
Stack Level 6
Laplacian Stacks
Stack Level 1
Stack Level 2
Stack Level 3
Stack Level 4
Stack Level 5
Stack Level 6
Image 2: Derek and Nutmeg Hybrid Image
original image
Gaussian Stacks
Stack Level 1
Stack Level 2
Stack Level 3
Stack Level 4
Stack Level 5
Stack Level 6
Laplacian Stacks
Stack Level 1
Stack Level 2
Stack Level 3
Stack Level 4
Stack Level 5
Stack Level 6
Part 1.4: Multiresolution Blending
The goal of this part of the assignment is to blend two images seamlessly using multiresolution blending, as described in the 1983 paper by Burt and Adelson. An image spline is a smooth seam joining two images together by gently distorting them. Multiresolution blending computes a gentle seam between the two images separately at each band of image frequencies, resulting in a much smoother seam. This method uses a mask, as proposed in the algorithm on page 230, and creates a Gaussian stack for the mask image and Laplacian stacks for the two input images. The Gaussian blurring of the mask in the stack smooths out the transition between the two images. For a vertical/horizontal seam, the mask is a step function of the same size as the original images. Finally, summing the blended Laplacian layers together with the last blended Gaussian layer recreates the blended image smoothly.
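The procedure above can be sketched compactly (a minimal sketch under the assumptions that the mask is 1 where the first image should appear and 0 elsewhere, and that a fixed per-level sigma is used; the function names are hypothetical):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multires_blend(im1, im2, mask, levels=6, sigma=2.0):
    """Multiresolution blending in the style of Burt & Adelson (1983).

    Each Laplacian band of the two images is mixed using the
    matching level of the mask's Gaussian stack, then all blended
    bands are summed to reconstruct the result.
    """
    def gstack(x):
        s = [x]
        for _ in range(levels - 1):
            s.append(gaussian_filter(s[-1], sigma=sigma))
        return s

    def lstack(g):
        # differences of adjacent levels, plus the last Gaussian level
        return [a - b for a, b in zip(g[:-1], g[1:])] + [g[-1]]

    l1, l2 = lstack(gstack(im1)), lstack(gstack(im2))
    gm = gstack(mask.astype(float))  # progressively softer mask
    out = sum(m * a + (1.0 - m) * b for m, a, b in zip(gm, l1, l2))
    return np.clip(out, 0.0, 1.0)
```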
Image 1: Orange + Apple
original image 1
original image 2
mask
multiresolution blended image
Image 2: Faces
original image 1
original image 2
mask
multiresolution blended image
Laplacian Stack for Image 2: Faces
Stack 1
Stack 2
Stack 3
Stack 4
Stack 5
Stack 6
Stack 1
Stack 2
Stack 3
Stack 4
Stack 5
Stack 6
Stack 1
Stack 2
Stack 3
Stack 4
Stack 5
Stack 6
Part 2: Gradient Domain Fusion
Using Poisson blending, we apply a gradient-domain processing technique. This method
achieves the primary goal of seamlessly blending an object or texture from a source image into a target image.
To solve this problem, we can formulate our objective as a least-squares problem. Given the pixel intensities of the source image "s" and of the target image "t", we want to solve for new intensity values "v" within the source region "S":

v = argmin_v Σ_{i∈S, j∈N_i∩S} ((v_i − v_j) − (s_i − s_j))² + Σ_{i∈S, j∈N_i, j∉S} ((v_i − t_j) − (s_i − s_j))²
Here, each "i" is a pixel in the source region "S", and each "j" is a 4-neighbor of "i". Each summation guides the gradient values to match those of the source region. In the first summation, the gradient is over two variable pixels; in the second, one pixel is variable and one is in the fixed target region. The general idea of such
blending techniques is to create an image by solving for specified pixel intensities and gradients.
Solving the least-squares optimization problem gives us the pixel values that seamlessly place the user-chosen parts of the source image in the area of the target image that the user chooses.
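For one channel, the formulation above can be sketched as a sparse least-squares system (a minimal sketch; the function name is hypothetical, and `scipy.sparse.linalg.lsqr` is one of several solvers that could be used):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def poisson_blend_channel(s, t, mask):
    """Least-squares Poisson blending for a single channel.

    s, t: source and target images (same shape, float).
    mask: boolean array, True inside the region S.
    One equation per (i, j) neighbor pair: v_i - v_j = s_i - s_j,
    with v_j replaced by the known t_j when j lies outside S.
    """
    h, w = s.shape
    idx = -np.ones((h, w), dtype=int)
    idx[mask] = np.arange(mask.sum())          # unknown index per masked pixel
    rows, cols, vals, b = [], [], [], []
    eq = 0
    for y, x in zip(*np.nonzero(mask)):
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w):
                continue
            grad = s[y, x] - s[ny, nx]         # source gradient guide
            rows.append(eq); cols.append(idx[y, x]); vals.append(1.0)
            if mask[ny, nx]:                   # both pixels variable
                rows.append(eq); cols.append(idx[ny, nx]); vals.append(-1.0)
                b.append(grad)
            else:                              # neighbor fixed to target value
                b.append(grad + t[ny, nx])
            eq += 1
    A = sp.csr_matrix((vals, (rows, cols)), shape=(eq, int(mask.sum())))
    v = lsqr(A, np.array(b))[0]
    out = t.copy()
    out[mask] = v
    return out
```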
Part 2.1: Toy Problem
original image
reconstructed image
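The toy reconstruction can be sketched as follows (a minimal sketch, assuming a grayscale float image; the function name is hypothetical): every x- and y-gradient of the image becomes one equation, plus one equation pinning the top-left pixel to its known intensity.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def reconstruct_from_gradients(img):
    """Toy problem: recover an image from its x/y gradients
    plus the intensity of the top-left pixel, via least squares."""
    h, w = img.shape
    def pid(y, x):
        return y * w + x                      # flatten (y, x) to a variable id
    rows, cols, vals, b = [], [], [], []
    eq = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:                     # match the x-gradient
                rows += [eq, eq]; cols += [pid(y, x + 1), pid(y, x)]
                vals += [1.0, -1.0]; b.append(img[y, x + 1] - img[y, x]); eq += 1
            if y + 1 < h:                     # match the y-gradient
                rows += [eq, eq]; cols += [pid(y + 1, x), pid(y, x)]
                vals += [1.0, -1.0]; b.append(img[y + 1, x] - img[y, x]); eq += 1
    # pin the top-left pixel so the solution is unique
    rows.append(eq); cols.append(0); vals.append(1.0); b.append(img[0, 0]); eq += 1
    A = sp.csr_matrix((vals, (rows, cols)), shape=(eq, h * w))
    v = lsqr(A, np.array(b))[0]
    return v.reshape(h, w)
```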
Part 2.2: Poisson Blending
Poisson blending minimizes gradient differences between pixels in the neighborhood of
the region within the target area where we want to place the part of the source image. Additionally,
it minimizes gradient differences between pixels inside and outside the target image mask, as well as within the mask itself. This preserves the original features/characteristics of the target and source images once we blend them, in addition to making the blend a smooth transition.
As described before, this can be formulated as the minimization problem:

v = argmin_v Σ_{i∈S, j∈N_i∩S} ((v_i − v_j) − (s_i − s_j))² + Σ_{i∈S, j∈N_i, j∉S} ((v_i − t_j) − (s_i − s_j))²

where the first summation minimizes gradient differences between pixels inside the mask, while the second summation minimizes gradient differences for pixels on the boundary of the mask.
Example Images:
image 1
image 2
blended image
image 1
image 2
blended image
image 1
image 2
blended image
Part 2.2 (Bells & Whistles): Mixed-Gradient Blending
Mixed-gradient blending is essentially the same as Poisson blending, but whichever of the source or target gradients has the larger magnitude is used as the guide, rather than always the source gradient. It is described by the equation below, where d_ij replaces (s_i − s_j) in the Poisson objective:

d_ij = s_i − s_j  if |s_i − s_j| ≥ |t_i − t_j|,  otherwise  d_ij = t_i − t_j
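The guide selection can be sketched in one line (a minimal sketch; the function name is hypothetical, and the comparison is elementwise so it works on whole gradient images at once):

```python
import numpy as np

def mixed_guide(s_i, s_j, t_i, t_j):
    """Mixed-gradient guide d_ij: pick whichever of the source or
    target gradient has the larger magnitude, elementwise."""
    ds = s_i - s_j   # source gradient
    dt = t_i - t_j   # target gradient
    return np.where(np.abs(ds) >= np.abs(dt), ds, dt)
```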
Fireworks Example Image Revisited:
Mixed Gradient Blending (Color)
Mixed Gradient Blending (Grayscale)
Poisson Blended Image (Color)
Poisson Blended Image (Grayscale)
Comparatively, for this particular picture, Poisson blending is smoother than mixed-gradient blending, since
in mixed-gradient blending the black masked outline of the fireworks is prominently visible. At the
same time, however, the fireworks are more faded in Poisson blending than in mixed-gradient blending. Changing from
color to grayscale helps the mixed-gradient blending integrate the fireworks into the landscape more smoothly,
but even so, the grayscale version of the Poisson blending is far smoother. Now, to compare these images to the
Laplacian stacks:
Stack 1
Stack 2
Stack 3
Stack 4
Stack 5
Stack 6
Comparing Poisson, Mixed-Gradient, and Laplacian blending, we can see that in this image, Poisson blending seems
to produce the best results.
What I Learned: Image blending techniques perform differently for different inputs. As a result, there is no
completely "generalizable" answer as to when one technique works better than another. Rather, one
needs to try different approaches to determine which image qualitatively looks better. I also had to repeatedly
experiment with the sigma values for the hybrid images to determine which alignment looked most discernible.
Indeed, there is a large artistic component to computational photography!