Fun with Filters and Frequencies!

CS194-26: Image Manipulation and Computational Photography

Author: Sunny Shen

Background

Part 1: Fun with Filters

1.1 Finite Difference Operator

Edge detection is an important aspect of image processing. Edges usually occur where color or brightness changes drastically. To get the partial derivatives of an image, I convolved it with the finite difference operators D_x = [1, -1] and D_y = [[1], [-1]]; the resulting partial derivatives show how quickly the image changes horizontally and vertically. To get a sense of how fast it changes overall, I computed the gradient magnitude by taking the square root of the sum of the squared partial derivatives.
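A minimal sketch of this step with NumPy/SciPy (the function name and the `symm` boundary handling are my own choices, not part of the assignment):

```python
import numpy as np
from scipy.signal import convolve2d

def gradient_magnitude(im):
    """Finite-difference partial derivatives, then the gradient magnitude."""
    D_x = np.array([[1, -1]])    # horizontal finite difference
    D_y = np.array([[1], [-1]])  # vertical finite difference
    im_dx = convolve2d(im, D_x, mode="same", boundary="symm")
    im_dy = convolve2d(im, D_y, mode="same", boundary="symm")
    return np.sqrt(im_dx**2 + im_dy**2)
```

The `symm` boundary extension avoids spurious edge responses along the image border.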

camera_man_dx camera_man_dy camera_man_mag

1.2: Derivative of Gaussian (DoG) Filter

The result in 1.1 is pretty noisy, and the edges aren't as clear as we would like them to be. Therefore, we can blur the image with a Gaussian filter first and then compute the gradient magnitude to find edges. The edges are a lot more distinct now -- less noisy, with thicker boundaries because of the blur.

camera_man_dx camera_man_dy camera_man_mag

Convolution has a really nice property: it's associative! Therefore, we can take the derivative of the Gaussian filter first and then convolve it with the image, so that we only convolve once instead of twice -- and we get the same result.
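The two routes can be checked numerically. This is a sketch under my own parameter choices (9x9 Gaussian, sigma 1.5); the two results agree away from the image borders, where the `same`-mode cropping behaves identically:

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_2d(size, sigma):
    """Separable 2D Gaussian kernel built from an outer product."""
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    g /= g.sum()
    return np.outer(g, g)

G = gaussian_2d(9, 1.5)
D_x = np.array([[1, -1]])

np.random.seed(0)
im = np.random.rand(40, 40)

# Route 1: blur first, then differentiate (two convolutions)
route1 = convolve2d(convolve2d(im, G, mode="same"), D_x, mode="same")

# Route 2: build the derivative-of-Gaussian filter, then convolve once
DoG_x = convolve2d(G, D_x)  # a 9x10 DoG kernel
route2 = convolve2d(im, DoG_x, mode="same")
```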

camera_man_mag camera_man_mag

Note: There are some tiny differences in the details because of rounding errors when we calculate the derivatives in the two different ways.

Part 2: Fun with Frequencies!

2.1: Image "Sharpening"

In the last section, we saw how a Gaussian filter can blur images. Gaussian filters are low-pass filters that only retain low frequencies. If we subtract the low frequencies from the original image, we are left with the high frequencies -- the "sharp" part of the image. Therefore, we can add the high frequencies back to the original image to "sharpen" it. There are two ways to implement this:

  1. Naively add the high frequencies back to the original images
  2. Unsharp Mask Filter: Because of the nice mathematical properties of convolution, we can do the same thing in a single convolution with an unsharp mask filter, which is (1 + alpha) times the unit impulse minus alpha times the Gaussian filter
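Both approaches can be sketched as follows (kernel size, sigma, and alpha are my own illustrative defaults). Convolving with a centered unit impulse returns the image unchanged, which is why the single-kernel form matches the naive one:

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_2d(size, sigma):
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    g /= g.sum()
    return np.outer(g, g)

def unsharp_mask_filter(size=9, sigma=1.5, alpha=1.0):
    """(1 + alpha) * unit impulse - alpha * Gaussian, as a single kernel."""
    impulse = np.zeros((size, size))
    impulse[size // 2, size // 2] = 1.0
    return (1 + alpha) * impulse - alpha * gaussian_2d(size, sigma)

def sharpen(im, size=9, sigma=1.5, alpha=1.0):
    kernel = unsharp_mask_filter(size, sigma, alpha)
    out = convolve2d(im, kernel, mode="same", boundary="symm")
    return np.clip(out, 0.0, 1.0)
```

Note that the kernel sums to 1, since (1 + alpha) - alpha * 1 = 1, so flat regions are left untouched.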

taj taj_sharp

Some of the fav photos I took over the summer!

Half Dome

half_dome half_dome_sharp

Mt Tam

mt_tam mt_tam_sharp

Lassen Volcanic Park

lassen lassen_sharp

Blur an image and Sharpen it Again

mt_tam mt_tam_blurred mt_tam_blurred_sharpened

Observations: After blurring an image and re-sharpening it, the re-sharpened image, while sharper than the blurred one, is more pixelated than the original and has lost some details.

2.2: Hybrid Images

In a SIGGRAPH 2006 paper, Oliva, Torralba, and Schyns presented hybrid images that look different when viewed at different distances. We tend to see high frequencies at a close distance, while low frequencies dominate our perception from further away. The idea is to align two images and blend the high frequencies of one with the low frequencies of the other, creating this cool visual effect.
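The blending step can be sketched as below (the function name and cutoff sigmas are my own illustrative choices; the images are assumed to be pre-aligned, same-size float arrays in [0, 1]):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_image(im_high, im_low, sigma_high=5.0, sigma_low=7.0):
    """Keep the high frequencies of im_high and the low frequencies of im_low."""
    high = im_high - gaussian_filter(im_high, sigma_high)  # high-pass
    low = gaussian_filter(im_low, sigma_low)               # low-pass
    return np.clip(low + high, 0.0, 1.0)
```

The two cutoff frequencies are tuned separately: sigma_high controls how much fine detail survives up close, sigma_low how blurry the far-away image is.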

Nutmeg or Derek?

nutmeg DerekPicture derek_nugmet_hybrid

Here are a few creations of my own:

Harry Potter or Voldemort?

harry voldemort harry_voldemort

Harry_Voldemort_Fourier

Panda or Koala?

harry voldemort panda_koala_bw

Panda_Koala_Fourier

Oski or Stanford tree? - failure case

In this case, Oski is the high-frequency image and the Stanford tree is the low-frequency one. However, because the tree has very bright teeth with very distinct edges, it remains obvious and dominates the perception of Oski's mouth even at a close distance.

oski stanford oski_stanfurd

Bells & Whistles - colored hybrid images

For Panda vs. Koala, I tried using color for both the low- and high-frequency components, and for only one of the two, and compared the effects. I don't notice any significant differences, probably because pandas are black and white and koalas are mostly grey.

koala_panda_colored_comparisom

2.3: Gaussian and Laplacian Stacks

For Gaussian stacks, we apply increasingly blurry Gaussian filters at each level of the stack, giving a stack of progressively lower-frequency images. For Laplacian stacks, we take the difference between consecutive Gaussian levels; the last level is the same as the last level of the Gaussian stack, so that when we sum up the Laplacian stack, we get the original image back!
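The two stacks can be sketched as below (levels, base sigma, and the sigma-doubling schedule are my own assumptions; unlike a pyramid, nothing is downsampled):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_stack(im, levels=5, sigma=2.0):
    """Level 0 is the original; each later level blurs it with a doubling sigma."""
    return [im] + [gaussian_filter(im, sigma * 2**i) for i in range(levels - 1)]

def laplacian_stack(g_stack):
    """Differences of consecutive Gaussian levels; last level copied as-is."""
    return [a - b for a, b in zip(g_stack, g_stack[1:])] + [g_stack[-1]]
```

Because the differences telescope, summing the Laplacian stack reconstructs the original image exactly.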

Gaussian & Laplacian stacks for apple:

apple_stacks

Gaussian & Laplacian stacks for orange:

orange_stacks

2.4: Multiresolution Blending

To blend the apple and the orange together, we can create a mask that's half black and half white, and create a Gaussian stack of the mask to smooth out the transition between the two images.
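The whole pipeline can be sketched in one function (levels and sigmas are my own assumptions, matching the stack construction above):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blend(im1, im2, mask, levels=5, sigma=2.0):
    """Multiresolution blending: the mask's Gaussian stack weights the
    Laplacian stacks of the two images at every level."""
    def g_stack(x):
        return [x] + [gaussian_filter(x, sigma * 2**i) for i in range(levels - 1)]

    def l_stack(gs):
        return [a - b for a, b in zip(gs, gs[1:])] + [gs[-1]]

    L1, L2 = l_stack(g_stack(im1)), l_stack(g_stack(im2))
    GM = g_stack(mask)  # progressively softer transition at coarser levels
    out = sum(m * a + (1 - m) * b for m, a, b in zip(GM, L1, L2))
    return np.clip(out, 0.0, 1.0)

# Half-white / half-black mask for a vertical seam
h, w = 32, 64
mask = np.zeros((h, w))
mask[:, : w // 2] = 1.0
```

The key design choice is blurring the mask, not the images: low frequencies get a wide, gradual transition while fine details switch over sharply, which hides the seam.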

Below is the masked Laplacian stacks for apple and orange, and adding them together will give us the oraple!

oraple_all_stacks

Summing the oraple Laplacian stack (the last row) gives us the full blended oraple!

oraple_final

Bells & Whistles - Colored Oraple

To get colored blending, we just repeat the steps above for each of the R, G, B channels. I noticed that the Laplacian layers would appear pretty much completely black if I didn't adjust the values in the R, G, B high-frequency layers when stacking them together, so I normalized the stacked high-frequency RGB layers to make the stacks visible.

oraple_all_stacks_color

Again, we sum up the oraple Laplacian stacks and normalize the results to get the final oraple!

oraple_color_final

More Examples

Pear & Avocados

pear avocado avo_pear_color_final

Football & Almond

football almond football_almond_color_final

This one actually didn't work that well, potentially because the white background of the football and that of the almond aren't exactly the same "white", so computationally there are differences in brightness across the seam.

Half Dome @8pm & Moon @5am

Here I created a circular mask to blend the moon at around sunrise time onto Half Dome at around sunset time.
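A circular mask can be built from a distance check (function name and parameters are my own; blurring its Gaussian stack, exactly as with the half-and-half mask, softens the circle's edge):

```python
import numpy as np

def circular_mask(h, w, center, radius):
    """1.0 inside a circle at (row, col) center, 0.0 outside."""
    yy, xx = np.mgrid[:h, :w]
    dist2 = (yy - center[0])**2 + (xx - center[1])**2
    return (dist2 <= radius**2).astype(float)
```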

moon half_dome2 halfdome_moon

The most important thing I learned from this project!

It was very interesting to see the mathematical/computational fundamentals of image processing! As a photographer, I use Photoshop/Lightroom a lot, but I had no idea how the software actually edits photos. Now I kinda get a sense of basic blurring/sharpening and of blending images together. Writing code to edit photos instead of using off-the-shelf software was a very rewarding process for me :)