Project 2: Fun with Filters and Frequencies!

CS 194-26

By Won Ryu

Part 1

1.1

Through convolutions we can compute the gradient magnitude of an image. We first obtain the gradient by computing the partial derivatives with respect to x and y, which are the two components of the gradient of a 2D image. The partial derivative with respect to x is obtained by convolving the image with the filter [[1, -1]], since it can be approximated by the finite difference image(x+1, y) - image(x, y). Similarly, the partial derivative with respect to y is obtained by convolving the image with the filter [[1], [-1]], since it can be approximated by image(x, y+1) - image(x, y). With the partial derivatives computed, we have the gradient of the image, which is a two-dimensional vector at each pixel. The gradient magnitude is then the magnitude of that vector: sqrt(partial_wrt_x^2 + partial_wrt_y^2). Finally, we can apply a threshold and binarize the result, marking coordinates whose gradient magnitude is greater than or equal to the threshold as 1 and all others as 0.
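Below is a minimal sketch of this computation, assuming a grayscale image stored as a NumPy array with values in [0, 1]; the default threshold matches the 0.185 value used for the edge image below.

    import numpy as np
    from scipy.signal import convolve2d

    D_X = np.array([[1, -1]])    # finite-difference filter for d/dx
    D_Y = np.array([[1], [-1]])  # finite-difference filter for d/dy

    def gradient_magnitude(im, threshold=0.185):
        # Partial derivatives via convolution with the finite-difference filters.
        partial_x = convolve2d(im, D_X, mode='same', boundary='symm')
        partial_y = convolve2d(im, D_Y, mode='same', boundary='symm')

        # Gradient magnitude is the Euclidean norm of the 2-D gradient vector.
        grad_mag = np.sqrt(partial_x ** 2 + partial_y ** 2)

        # Binarize: pixels at or above the threshold are marked as edges.
        edges = (grad_mag >= threshold).astype(np.uint8)
        return partial_x, partial_y, grad_mag, edges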

part1.1partial_wrt_x

partial derivative with respect to x

part1.1partial_wrt_y

partial derivative with respect to y

part1.1gradient_magnitude

gradient magnitude of image

Part1.1edge_im.jpg

binarized edges: pixels with gradient magnitude of 0.185 or greater were classified as edges

1.2

To reduce noise, we first smooth the picture by convolving the original image with a Gaussian filter and then follow the procedure from Part 1.1.

Part1.2edge_im_two_filters.jpg

Now the noise is significantly reduced: we no longer see the grainy grass that was producing salt-and-pepper noise, and the edges of the person and the camera are much more continuous and smooth. Another difference is that the threshold for classifying a pixel as an edge needed to be lowered.

For efficiency, we take advantage of the associativity of convolution and create derivative-of-Gaussian filters, the partial derivatives of the Gaussian with respect to x and y, to use directly as filters. This way, each derivative requires a single convolution with the image instead of two.
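A sketch of how these derivative-of-Gaussian filters might be built; the kernel size and sigma here are illustrative assumptions, not necessarily the values used for the results below.

    import numpy as np
    from scipy.signal import convolve2d
    from scipy.ndimage import gaussian_filter

    # Build a 2-D Gaussian kernel by filtering a unit impulse.
    impulse = np.zeros((9, 9))
    impulse[4, 4] = 1.0
    gaussian = gaussian_filter(impulse, sigma=1.5)

    # By associativity, (im * G) * D == im * (G * D), so precompute G * D once.
    dog_x = convolve2d(gaussian, np.array([[1, -1]]))
    dog_y = convolve2d(gaussian, np.array([[1], [-1]]))

    # A single convolution per direction now yields the smoothed derivatives:
    # partial_x = convolve2d(im, dog_x, mode='same', boundary='symm')
    # partial_y = convolve2d(im, dog_y, mode='same', boundary='symm')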

part1.2gaussian_partial_wrt_x.jpg

part1.2gaussian_partial_wrt_y.jpg

part1.2edge_im.jpg

The edge image ends up essentially the same as before, as expected from the associativity of convolution.

1.3

Now we straighten images by maximizing the number of horizontal and vertical edges. We rotate the image over a range of angles and keep the angle that maximizes the number of edges that are close to horizontal or vertical.
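A sketch of this search, assuming a grayscale image in [0, 1]; the angle range, center crop, magnitude cutoff, and orientation tolerance are all illustrative assumptions.

    import numpy as np
    from scipy.ndimage import rotate, gaussian_filter

    def straighten(im, angles=range(-20, 21), tol_deg=2.0, mag_cutoff=0.05):
        best_angle, best_count = 0, -1
        for angle in angles:
            rot = rotate(im, angle, reshape=False)

            # Crop the center to ignore the borders introduced by rotation.
            h, w = rot.shape
            crop = gaussian_filter(rot[h // 4: 3 * h // 4, w // 4: 3 * w // 4], 2.0)

            # Smoothed finite-difference gradients and their orientations.
            partial_y, partial_x = np.gradient(crop)
            theta = np.degrees(np.arctan2(partial_y, partial_x))
            mag = np.hypot(partial_x, partial_y)

            # Count strong edges whose orientation is near horizontal or vertical.
            near_axis = (np.mod(theta, 90.0) < tol_deg) | (np.mod(theta, 90.0) > 90.0 - tol_deg)
            count = np.count_nonzero(near_axis & (mag > mag_cutoff))
            if count > best_count:
                best_angle, best_count = angle, count
        return rotate(im, best_angle, reshape=False), best_angle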

Facade - Unstraightened

part1.2gaussian_partial_wrt_x.jpg

part1.2gaussian_partial_wrt_y.jpg

Straightened

part1.2gaussian_partial_wrt_x.jpg

part1.2gaussian_partial_wrt_y.jpg

Optimal degree rotation: -3

Sailing - Unstraightened

part1.2gaussian_partial_wrt_x.jpg

part1.2gaussian_partial_wrt_y.jpg

Straightened

part1.2gaussian_partial_wrt_x.jpg

part1.2gaussian_partial_wrt_y.jpg

Optimal degree rotation: 13

Barn - Unstraightened

part1.2gaussian_partial_wrt_x.jpg

part1.2gaussian_partial_wrt_y.jpg

Straightened

part1.2gaussian_partial_wrt_x.jpg

part1.2gaussian_partial_wrt_y.jpg

Optimal degree rotation: -15

Failure case - Flatiron

As seen in the straightened image, cars appear to be driving at an angle even though New York City is flat, which means the straightening failed: the algorithm tried to make the building as vertical as possible, since the building contributed the most edges.

Unstraightened

part1.2gaussian_partial_wrt_x.jpg

part1.2gaussian_partial_wrt_y.jpg

Straightened

part1.2gaussian_partial_wrt_x.jpg

part1.2gaussian_partial_wrt_y.jpg

Part 2

2.1

These images were sharpened by adding their high frequencies back in, scaled by an alpha value. The high frequencies of an image were found by subtracting the low frequencies (a Gaussian-blurred copy) of the image from the original.
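A minimal sketch of this unsharp-masking step; the sigma here is an illustrative assumption.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def sharpen(im, alpha=0.5, sigma=2.0):
        low = gaussian_filter(im, sigma)   # low frequencies (blurred copy)
        high = im - low                    # high frequencies
        return np.clip(im + alpha * high, 0, 1)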

Alpha: 0.5

unsharpened:

part1.2gaussian_partial_wrt_y.jpg

sharpened:

part1.2gaussian_partial_wrt_y.jpg

unsharpened:

part1.2gaussian_partial_wrt_y.jpg

sharpened:

part1.2gaussian_partial_wrt_y.jpg

To evaluate this sharpening method, we blurred an image and then re-sharpened it with the same procedure. As visible, the method merely makes the image appear sharper; it cannot recover the high frequencies lost during blurring, so the result is not as sharp as the original image.

Original image:

part1.2gaussian_partial_wrt_y.jpg

Blurred out:

part1.2gaussian_partial_wrt_y.jpg

Resharpened:

part1.2gaussian_partial_wrt_y.jpg

2.2

We can also create hybrid images by aligning two images, taking the high frequencies of one image and the low frequencies of the other, and adding the two together. From afar we see the low-frequency image, and up close we see the high-frequency image.
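A sketch of the construction, assuming the two grayscale images are already aligned and the same size; the two cutoff sigmas are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def hybrid(im_low, im_high, sigma_low=6.0, sigma_high=3.0):
        low = gaussian_filter(im_low, sigma_low)                # keep only low frequencies
        high = im_high - gaussian_filter(im_high, sigma_high)   # keep only high frequencies
        return np.clip(low + high, 0, 1)

    # The log-magnitude Fourier plots below can be produced with
    # np.log(np.abs(np.fft.fftshift(np.fft.fft2(image)))).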

Watermelon + Cantaloupe

Image for low frequency: part1.2gaussian_partial_wrt_y.jpg

Image for high frequency: part1.2gaussian_partial_wrt_y.jpg

Hybrid: part1.2gaussian_partial_wrt_y.jpg

Frequency analysis of cantaloupe + watermelon:

Log magnitude of the Fourier transformation of original cantaloupe part1.2gaussian_partial_wrt_y.jpg

Log magnitude of the Fourier transformation of original watermelon part1.2gaussian_partial_wrt_y.jpg

Log magnitude of the Fourier transformation of high frequency of cantaloupe part1.2gaussian_partial_wrt_y.jpg

Log magnitude of the Fourier transformation of low frequency of watermelon part1.2gaussian_partial_wrt_y.jpg

Log magnitude of the Fourier transformation of hybrid image part1.2gaussian_partial_wrt_y.jpg

2014 World Cup ball + 2018 World Cup ball

Image for low frequency: part1.2gaussian_partial_wrt_y.jpg

Image for high frequency: part1.2gaussian_partial_wrt_y.jpg

Hybrid: part1.2gaussian_partial_wrt_y.jpg

Failure case: Ronaldo + Messi

As this failure case shows, hybrid images only work well when the two images can be aligned well. In these two pictures, Messi and Ronaldo are in different poses, which makes the resulting hybrid far less effective.

Image for low frequency: part1.2gaussian_partial_wrt_y.jpg

Image for high frequency: part1.2gaussian_partial_wrt_y.jpg

Hybrid: part1.2gaussian_partial_wrt_y.jpg

2.3

In this part, Gaussian and Laplacian stacks were implemented. The Gaussian stack starts with the original image as the first layer, and each subsequent layer is obtained by convolving the previous layer with a Gaussian. Each layer of the Laplacian stack is the difference between the corresponding layer of the Gaussian stack and the layer after it; the last layer of the Laplacian stack is simply the last layer of the Gaussian stack.
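A sketch of both stacks; the depth and per-level sigma are illustrative assumptions.

    from scipy.ndimage import gaussian_filter

    def gaussian_stack(im, depth=5, sigma=2.0):
        stack = [im]
        for _ in range(depth - 1):
            # Each layer is the previous layer convolved with a Gaussian.
            stack.append(gaussian_filter(stack[-1], sigma))
        return stack

    def laplacian_stack(im, depth=5, sigma=2.0):
        g = gaussian_stack(im, depth, sigma)
        # Each layer is the difference between consecutive Gaussian layers;
        # the last layer is just the last Gaussian layer.
        return [g[i] - g[i + 1] for i in range(depth - 1)] + [g[-1]]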

Lincoln and Gala

part1.2gaussian_partial_wrt_y.jpg

Gaussian stack: part1.2gaussian_partial_wrt_y.jpg

Laplacian stack: part1.2gaussian_partial_wrt_y.jpg

Cantaloupe and Watermelon Hybrid:

part1.2gaussian_partial_wrt_y.jpg

Gaussian stack: part1.2gaussian_partial_wrt_y.jpg

Laplacian stack: part1.2gaussian_partial_wrt_y.jpg

2.4

Multiresolution Blending

Two images were blended seamlessly using multiresolution blending, following an approach similar to Burt and Adelson's from 1983 (a code sketch follows the steps below):

  1. The Laplacian stacks of image1 and image2 were calculated. We refer to them as L1 and L2.
  2. A mask marking the boundary where the blending occurs was created, and a Gaussian stack of the mask was calculated. We refer to it as GR.
  3. At each level l, the stacks are combined into a new stack LS, where LS(l) = GR(l)·L1(l) + (1 - GR(l))·L2(l).
  4. Sum up all the layers of LS to obtain the blended multiresolution image.
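A sketch of these steps, reusing the stack functions sketched in Part 2.3; the depth and sigma are illustrative assumptions.

    import numpy as np

    def blend(im1, im2, mask, depth=5, sigma=2.0):
        L1 = laplacian_stack(im1, depth, sigma)
        L2 = laplacian_stack(im2, depth, sigma)
        GR = gaussian_stack(mask, depth, sigma)   # progressively softer mask

        # LS(l) = GR(l) * L1(l) + (1 - GR(l)) * L2(l), summed over all levels.
        LS = [GR[l] * L1[l] + (1 - GR[l]) * L2[l] for l in range(depth)]
        return np.clip(np.sum(LS, axis=0), 0, 1)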

Sea and Piano

Image 1: part1.2gaussian_partial_wrt_y.jpg

Image 2: part1.2gaussian_partial_wrt_y.jpg

Mask: part1.2gaussian_partial_wrt_y.jpg

Blended: part1.2gaussian_partial_wrt_y.jpg

Laplacian stack of sea: part1.2gaussian_partial_wrt_y.jpg

Laplacian stack of piano: part1.2gaussian_partial_wrt_y.jpg

Gaussian Stack of mask: part1.2gaussian_partial_wrt_y.jpg

Masked laplacian of sea: part1.2gaussian_partial_wrt_y.jpg

Masked laplacian of piano: part1.2gaussian_partial_wrt_y.jpg

Merged: part1.2gaussian_partial_wrt_y.jpg

Flags

Image 1: part1.2gaussian_partial_wrt_y.jpg

Image 2: part1.2gaussian_partial_wrt_y.jpg

Mask: part1.2gaussian_partial_wrt_y.jpg

Blended: part1.2gaussian_partial_wrt_y.jpg

Bowling with Earth

Image 1: part1.2gaussian_partial_wrt_y.jpg

Image 2: part1.2gaussian_partial_wrt_y.jpg

Mask: part1.2gaussian_partial_wrt_y.jpg

Blended: part1.2gaussian_partial_wrt_y.jpg

Bells & Whistles

I also implemented multiresolution blending in color by running the algorithm described in 2.4 on the three color channels separately and then stacking the three blended results back together in RGB order.
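A sketch of the per-channel approach, assuming RGB images of shape (H, W, 3) and the blend() function sketched in Part 2.4.

    import numpy as np

    def blend_color(im1, im2, mask):
        channels = [blend(im1[..., c], im2[..., c], mask) for c in range(3)]
        return np.dstack(channels)   # restack the blended channels in RGB order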

part1.2gaussian_partial_wrt_y.jpg

Learning

I learned about the capabilities of filters and convolutions: they can do everything from differentiation and edge detection to smoothing and sharpening, and with those tools we can create exciting hybrid images and blended images.