Fun with Filters and Frequencies

By Hyun Jae Moon

Introduction

The goal of this project is to apply filters and manipulate the frequency content of images to produce sharpened, hybrid, and blended images.

Part 1: Fun with Filters

Part 1.1: Finite Difference Operator

To extract the edge image, I first convolved the image with the finite difference operators dx and dy, using the convolve2d function from scipy.signal. Here are the resulting partial derivative images.
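Here is a minimal sketch of that step (im stands for the grayscale image already loaded as a float array; the variable names are just for illustration):

import numpy as np
from scipy.signal import convolve2d

dx = np.array([[1, -1]])      # horizontal finite difference operator
dy = np.array([[1], [-1]])    # vertical finite difference operator

gx = convolve2d(im, dx, mode='same')  # partial derivative in x
gy = convolve2d(im, dy, mode='same')  # partial derivative in y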

Gx

gx.png: image convolution with dx

Gy

gy.png: image convolution with dy

Then I computed the gradient magnitude as np.sqrt(np.square(gx) + np.square(gy)). Finally, I turned it into an edge image by zeroing out pixel values below a chosen threshold.
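A short sketch of the magnitude and thresholding step, continuing from the snippet above (the 0.1 threshold matches the caption below):

mag = np.sqrt(np.square(gx) + np.square(gy))   # gradient magnitude
edge = (mag > 0.1).astype(float)               # binarize: 1 = edge, 0 = background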

Mag

mag.png: Gradient Magnitude

Edge

edge.png: Edge Image with threshold=0.1

Part 1.2: Derivative of Gaussian (DoG) Filter

The Gaussian filter, a.k.a. low-pass filter, suppresses noise and produces much smoother edges. Starting from gx and gy in the previous part, we convolve each with G, a 2D Gaussian kernel, and then compute the gradient magnitude and edge image as before.
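A sketch of this smoothing step; the kernel size and sigma are illustrative placeholders rather than the exact values used:

import cv2

ksize, sigma = 11, 2                        # illustrative values
g1d = cv2.getGaussianKernel(ksize, sigma)   # 1D Gaussian, shape (ksize, 1)
G = g1d @ g1d.T                             # 2D Gaussian kernel via outer product

gx_smooth = convolve2d(gx, G, mode='same')  # smooth the derivative images
gy_smooth = convolve2d(gy, G, mode='same')
mag_smooth = np.sqrt(np.square(gx_smooth) + np.square(gy_smooth))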

Gauss_edge

gauss_edge.png

As you can see, compared to the previous edge.png, the bottom portion of the image has much less noise, and the edges around the person and the camera are much more vivid and clear. It is visually apparent that the noise has diminished.

In the next section, we instead convolve the 2D Gaussian kernel with dx and dy first, then convolve the original image with the resulting derivative-of-Gaussian filters to produce the edge image. Simply put, we perform the same procedure in a different order: because convolution is associative, convolving the image once with the partial derivatives of the kernel gives the same result as blurring and then differentiating, but requires only a single convolution with the image per direction.
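A sketch of the single-convolution (DoG) approach, reusing G, dx, dy, and im from the earlier snippets:

DoG_x = convolve2d(G, dx)   # derivative of Gaussian in x
DoG_y = convolve2d(G, dy)   # derivative of Gaussian in y

gx_dog = convolve2d(im, DoG_x, mode='same')   # one convolution per direction
gy_dog = convolve2d(im, DoG_y, mode='same')
mag_dog = np.sqrt(np.square(gx_dog) + np.square(gy_dog))  # matches mag_smooth up to boundary effects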

Gauss_edge_single

gauss_edge_single.png

As you can see, there is almost no difference from the previous section, verifying that the two orderings give the same result.

Part 2: Fun with Frequencies

Part 2.1: Image "Sharpening"

In this part, we sharpen the image by first obtaining a blurred version with the Gaussian filter, then applying the unsharp mask filter formula from lecture:

sharp_image = image + alpha * (image - convolve2d(image, G, mode='same'))  # alpha = 1
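Since the inputs are color images, the blur is applied channel by channel. Here is a sketch of the full sharpening step, assuming G is a 2D Gaussian kernel built as in Part 1.2:

def unsharp_mask(image, G, alpha=1.0):
    # image: (H, W, 3) float array in [0, 1]; G: 2D Gaussian kernel
    blurred = np.dstack([convolve2d(image[..., c], G, mode='same')
                         for c in range(image.shape[2])])
    sharp = image + alpha * (image - blurred)
    return np.clip(sharp, 0, 1)   # keep values in a displayable range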

taj.jpg

Taj

Original

Taj_alpha1_blur

Blurred

Taj_alpha1_sharp

Sharpened

mario.jpg

Mario

Original

Mario_alpha1_blur

Blurred

Mario_alpha1_sharp

Sharpened

elon.jpg

Elon

Original

Elon_alpha1_blur

Blurred

Elon_alpha1_sharp

Sharpened

We can visually observe that the sharpening did indeed occur. Here are some images with different alpha values for elon.jpg.

Elon_alpha1_sharp

alpha=1

Elon_alpha2_sharp

alpha=2

Elon_alpha3_sharp

alpha=3

Part 2.2: Hybrid Images

In this part, we create hybrid images by combining the low-frequency portion of one image with the high-frequency portion of another. The result is a hybrid image with different interpretations at different viewing distances. First, here are the frequency representations of two sample pictures.
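The frequency representations below are the usual log-magnitude Fourier spectra; a minimal sketch for a grayscale image (the small epsilon just avoids log(0)):

freq = np.log(np.abs(np.fft.fftshift(np.fft.fft2(gray_image))) + 1e-8)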

DerekPicture

DerekPicture.jpg

Derek_freq

Frequency representation of DerekPicture.jpg

Nutmeg

nutmeg.jpg

Nutmeg_freq

Frequency representation of nutmeg.jpg

Then, we apply a low-pass filter to DerekPicture.jpg and a high-pass filter to nutmeg.jpg. The low-pass filter is the same as in the previous section: convolution with a 2D Gaussian kernel to produce a blurred image. The high-pass filter is the image minus its blurred version. Finally, we sum the two to create the hybrid image.
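A sketch of the combination step for grayscale inputs; G_low and G_high are 2D Gaussian kernels whose sigmas play the role of the two cutoff frequencies:

def hybrid_image(im_low, im_high, G_low, G_high):
    low = convolve2d(im_low, G_low, mode='same')                # low-pass: keep coarse structure
    high = im_high - convolve2d(im_high, G_high, mode='same')   # high-pass: keep fine detail
    return np.clip(low + high, 0, 1)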

Derek_lowpass

Low Pass Filter

Nutmeg_highpass

High Pass Filter

Hybrid_derek_nutmeg

Hybrid Image Result

As you can see, depending on the viewing distance, you may perceive different images in this hybrid result. Here are some additional hybrid images that I've attempted.

Wolf

wolf.jpg

Dog

dog.jpg

Hybrid_wolf_dog

Hybrid of wolf and dog

Rick

rick.jpg

Morty

morty.jpg

Hybrid_rick_morty

Hybrid of Rick and Morty

Why did the wolf × dog example look bad?

Hybrid_wolf_dog

Even though the eyes were aligned, the overall facial anatomy of dogs and wolves is vastly different. Hybrids tend to work when the two images are either structurally very similar or completely different; here, the images share roughly the same structure but differ in the details. In such cases the hybrid does not look as good as the other examples. However, at closer or farther distances, we can still clearly distinguish the two images.

Part 2.3: Gaussian and Laplacian Stacks

In this part, we implement Gaussian and Laplacian stacks to prepare for multiresolution blending. Here are the 12 images I recreated by taking the Laplacian stacks of each input image and of the blended image at levels 0, 2, and 4 (a sketch of the stack construction follows the images below).

apple_black_collapse
Apple_lap1
Apple_lap3
Apple_lap4
Orange_black_collapse
Orange_lap1
Orange_lap3
Orange_lap5
Multires_collapse
Oraple_blend1
Oraple_blend3
Oraple_blend5
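Here is the promised sketch of the stack construction: blurring repeatedly without downsampling gives the Gaussian stack, and differences of consecutive levels (plus the final blurred residual) give the Laplacian stack:

def gaussian_stack(im, G, levels):
    stack = [im]
    for _ in range(levels):
        stack.append(convolve2d(stack[-1], G, mode='same'))  # blur again, no downsampling
    return stack

def laplacian_stack(im, G, levels):
    g = gaussian_stack(im, G, levels)
    lap = [g[i] - g[i + 1] for i in range(levels)]  # detail lost between consecutive levels
    lap.append(g[-1])                               # keep the residual so the stack sums back to im
    return lap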

Part 2.4: Multiresolution Blending (a.k.a. the oraple!)

It's Sblended!

multiresolution blending

Multires_collapse
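A sketch of the blending step, using the stack helpers sketched in Part 2.3; mask is a float image that is 1 where the first image should show through:

def multires_blend(imA, imB, mask, G, levels=5):
    lapA = laplacian_stack(imA, G, levels)
    lapB = laplacian_stack(imB, G, levels)
    gm = gaussian_stack(mask, G, levels)   # progressively softer mask per level
    blended = [m * a + (1 - m) * b for a, b, m in zip(lapA, lapB, gm)]
    return np.clip(sum(blended), 0, 1)     # collapse: sum all blended levels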

However, this sample image pair was quite easy to align for blending. I could even perform plain alpha blending directly and get an even better result.

alpha blending

Apple_orange_hybrid

If the world were perfect and every image pair were aligned, alpha blending might be the best option. In real life, however, Gaussian and Laplacian stacks are far more flexible for image blending. Here are some of my examples:

Example 1 (5 levels)

River

river.jpg

Lava

lava.jpg

Blend result: OMG eruption under a bright blue sky???

River_lava_collapse

Example 2 (50 levels)

Mariojpg

mario.jpg

Luigi

luigi.jpg

Blend result: Mari...are you Luigi?

Mario_luigi_collapse