Project 2 - Fun with Filters and Frequencies

Gradient Magnitude

For a given image, we want a way to visualize its edges. We can do this by taking the gradient of the image at each pixel and creating an image showing the magnitude of the gradient at each point. In theory, the gradient should be large at a pixel where there is a sharp change in intensity around that pixel (an edge), and small where the pixels change gradually. To compute the gradient, we take the partial derivatives in the x and y directions by convolving the image with partial derivative filters. This can be done naively with the finite difference filters [1, -1] and [1, -1]^T, but it can be improved by instead convolving with derivative of Gaussian filters, which low-pass filter the image while simultaneously computing its partial derivatives in x and y. The results of this are shown here:

cameraman

cameraman_edges
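For reference, here is a minimal sketch of the finite-difference version of this computation, assuming a grayscale image stored as a float NumPy array (the function name and the threshold in the comment are illustrative, not the exact values used here):

```python
import numpy as np
from scipy.signal import convolve2d

def gradient_magnitude(im):
    """Edge strength of a grayscale float image via finite differences."""
    D_x = np.array([[1.0, -1.0]])   # finite-difference filter in x
    D_y = D_x.T                     # finite-difference filter in y
    dx = convolve2d(im, D_x, mode="same", boundary="symm")
    dy = convolve2d(im, D_y, mode="same", boundary="symm")
    return np.sqrt(dx**2 + dy**2)

# A binary edge image can then be produced by thresholding, e.g.
# edges = gradient_magnitude(im) > 0.1
```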

To answer the required questions for this assignment: blurring the image before taking the partial derivatives reduced noise in the edge detection, and blurring the image and then applying the naive derivative filters yielded the same result as convolving once with the derivative of Gaussian filters.
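This equivalence can be checked directly; below is a sketch, where the kernel size and sigma are arbitrary illustrative choices rather than the values used for the figures above:

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_2d(ksize=9, sigma=1.5):
    """Separable 2D Gaussian kernel built from a 1D Gaussian."""
    ax = np.arange(ksize) - (ksize - 1) / 2.0
    g = np.exp(-(ax**2) / (2 * sigma**2))
    g /= g.sum()
    return np.outer(g, g)

D_x = np.array([[1.0, -1.0]])
G = gaussian_2d()
DoG_x = convolve2d(G, D_x)   # derivative-of-Gaussian filter in x

# For an image `im`, these two pipelines agree up to boundary effects:
#   blur_then_diff = convolve2d(convolve2d(im, G, mode="same"), D_x, mode="same")
#   dog_filtered   = convolve2d(im, DoG_x, mode="same")
```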

Unsharp Masking

This part of the assignment tasked us with sharpening images by accentuating their high frequencies. To do this, we low-pass filter the original image with a Gaussian to get its low frequencies, subtract this blurred version from the original image to isolate the high frequencies, and then add a scaled copy of these high frequencies back onto the original image. A sketch of the procedure is shown below, followed by some results:
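Here is a minimal sketch, assuming a float image with values in [0, 1]; sigma and alpha are illustrative choices, with alpha controlling how strongly the high frequencies are boosted:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(im, sigma=2.0, alpha=1.0):
    """Unsharp mask: im + alpha * (im - blurred)."""
    low = gaussian_filter(im, sigma)   # low frequencies
    high = im - low                    # high frequencies
    return np.clip(im + alpha * high, 0.0, 1.0)

# For a color image, blur only the spatial axes, e.g.
# low = gaussian_filter(im, sigma=(sigma, sigma, 0))
```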

taj

sharpened_taj

link

sharpened_link

Hybrid Images

For this part, we seek to combine the low-frequency content of one image with the high-frequency content of another. I first merged the low frequencies of a man (Derek) with the high frequencies of a cat. A sketch of the hybridization step is shown below, followed by the results and their corresponding Fourier plots (note that the final output image is in color, but the FFT plots were computed on a grayscale version of the hybridization procedure):
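A minimal sketch of this step, assuming the two images have already been aligned and stored as float arrays of the same shape; the two cutoff sigmas are illustrative rather than the exact values used for these results:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid(im_low, im_high, sigma_low=6.0, sigma_high=4.0):
    """Low frequencies of im_low plus high frequencies of im_high."""
    low = gaussian_filter(im_low, sigma_low)                # keep the lows
    high = im_high - gaussian_filter(im_high, sigma_high)   # keep the highs
    return np.clip(low + high, 0.0, 1.0)

# Log-magnitude Fourier plots like those below can be made along the lines of
# np.log(np.abs(np.fft.fftshift(np.fft.fft2(gray_image))))
```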

Originals:

cat

man

Fourier Derek:

original_man

Fourier Cat:

original_cat

Fourier Derek Lowpass:

original_man

Fourier Cat Highpass:

original_man

Fourier Output Image:

original_man

Output Image:

man

I also attempted to hybridize my friend Ayda with a picture of my dog (the two most recent pictures in my camera roll). This hybridization did not work as well, since the spacing between the eyes in the two images did not match up. Here are the results:

Ayda:

ayda

Link:

link

Merged:

link-ayda

However, I also tried hybridizing some of my other friends with the image of Nutmeg the cat, which yielded much better results. Here is one such hybridization:

eugene-cat

Multiresolution Blending

Finally, we were tasked with implementing multiresolution blending of images. The algorithm takes a mask that specifies what to take from each of the images being blended. We blur this mask into multiple levels of a Gaussian stack, then use those levels to weight the corresponding levels of the Laplacian pyramids of the two input images, summing the weighted levels to construct a final image that blends the two inputs along the mask's seam. A simple mask (taking the left half of one image and the right half of the other) allowed us to recreate the legendary "oraple". A sketch of the blending loop is shown below, followed by the process:
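Here is a minimal sketch of the blending loop, written with Gaussian/Laplacian stacks (no downsampling) for grayscale float images; the stack depth and sigma are illustrative choices, and the mask weights the first image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blend(im1, im2, mask, levels=5, sigma=2.0):
    """Multiresolution blend of im1 and im2 using a mask with values in [0, 1]."""
    blended = np.zeros_like(im1, dtype=float)
    g1, g2, gm = im1.astype(float), im2.astype(float), mask.astype(float)
    for _ in range(levels):
        g1_next = gaussian_filter(g1, sigma)
        g2_next = gaussian_filter(g2, sigma)
        gm = gaussian_filter(gm, sigma)  # Gaussian stack of the mask
        # Laplacian level = difference of consecutive Gaussian levels
        blended += gm * (g1 - g1_next) + (1 - gm) * (g2 - g2_next)
        g1, g2 = g1_next, g2_next
    # add back the final low-pass residual, weighted by the blurred mask
    return blended + gm * g1 + (1 - gm) * g2

# For the color version, the same function can be applied to each channel.
```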

apple apple apple apple

orange orange orange orange

oraple oraple oraple oraple

I also updated the algorithm to do this in color, which yielded this result:

oraple

To wrap things up, I played around with a couple more masks. One mask used a horizontal seam instead of a vertical one, which I used to combine two soda cans. Here is the result:

ginger_ale

To test out a more complicated mask, I tried to merge my friend Vasanth's childhood face into a relatively recent picture of myself. To do this, I made this mask:

mask

Here are the original images along with the final results:

matthew

vasanth

matthew-vasanth