The gradient magnitude = sqrt((dimg/dx)**2 + (dimg/dy)**2)
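The formula above can be sketched as follows, using finite-difference filters for the two derivatives (the image here is a random stand-in for the project photo):

```python
import numpy as np
from scipy import signal

# Stand-in grayscale image; in the project this would be the loaded photo.
img = np.random.rand(64, 64).astype(np.float32)

# Finite-difference filters for d/dx and d/dy.
Dx = np.array([[1.0, -1.0]])
Dy = np.array([[1.0], [-1.0]])

dimg_dx = signal.convolve2d(img, Dx, mode="same")
dimg_dy = signal.convolve2d(img, Dy, mode="same")

# Gradient magnitude per pixel.
grad_mag = np.sqrt(dimg_dx**2 + dimg_dy**2)
```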
Part 1.2: Derivative of Gaussian (DoG) Filter
Q: What differences do you see?
Only Dx/Dy
Blur and Dx/Dy
Dx
Dy
A:
With the Gaussian blur applied first, there are fewer visual artifacts along the edges, and the edges appear more connected.
Without blurring, the ground region looks like pepper noise; after blurring, the ground looks smoother and more visually pleasant.
Q: Verify that you get the same result as before.
Blur and then Dx/Dy
img * (gaussian * Dx/Dy)
Dx
Dy
A:
They are almost the same. One interesting thing: I originally used cv2.filter2D(gaussian, cv2.CV_32F, Dx) to convolve the kernels, but the result was always very different from using signal.convolve2d.
I think this is because convolution associativity only holds with full-mode convolution, while cv2.filter2D always produces "same"-mode output, so the intermediate result is cropped and associativity no longer holds. An easy way to see why full-mode convolution is associative is the analogy with polynomial multiplication: (f*g)*h == f*(g*h).
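A small check of this claim, using a random array and kernel as stand-ins for the image and Gaussian: with mode="full" the two orders of convolution agree, while cropping to "same" at each step breaks associativity.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
img = rng.random((16, 16))   # stand-in for the image
g = rng.random((5, 5))       # stand-in for a Gaussian kernel
Dx = np.array([[1.0, -1.0]])

# Full-mode convolution is associative: (img * g) * Dx == img * (g * Dx).
a_full = signal.convolve2d(signal.convolve2d(img, g, mode="full"), Dx, mode="full")
b_full = signal.convolve2d(img, signal.convolve2d(g, Dx, mode="full"), mode="full")

# "same"-mode convolution crops each intermediate result to the first
# argument's shape, so the two orders generally disagree.
a_same = signal.convolve2d(signal.convolve2d(img, g, mode="same"), Dx, mode="same")
b_same = signal.convolve2d(img, signal.convolve2d(g, Dx, mode="same"), mode="same")
```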
Part 2: Fun with Frequencies!
Part 2.1: Image "Sharpening"
I use the single filter (1 + alpha)e - alpha G, where e is the unit impulse and G is a Gaussian; the e - G part acts as a Laplacian-of-Gaussian-style high-pass filter, so one convolution both keeps the image and boosts its high frequencies.
Original
alpha = 1
alpha = 2
alpha = 3
taj.jpg
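The single sharpening filter can be sketched like this (the image is a random stand-in for taj.jpg; the kernel size and sigma are assumptions):

```python
import numpy as np
from scipy import signal

def gaussian_kernel(size=9, sigma=2.0):
    # 2-D Gaussian as the outer product of a normalized 1-D Gaussian.
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    g /= g.sum()
    return np.outer(g, g)

alpha = 2.0
img = np.random.rand(64, 64).astype(np.float32)  # stand-in for taj.jpg

G = gaussian_kernel()

# Unit impulse e, same size as G.
e = np.zeros_like(G)
e[G.shape[0] // 2, G.shape[1] // 2] = 1.0

# The single sharpening filter (1 + alpha)e - alpha G.
sharpen_filter = (1 + alpha) * e - alpha * G

sharpened = signal.convolve2d(img, sharpen_filter, mode="same")
```

Since e and G each sum to 1, the filter sums to (1 + alpha) - alpha = 1, so flat regions keep their brightness.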
Q: Also for evaluation, pick a sharp image, blur it and then try to sharpen it again. Compare the original and the sharpened image and report your observations.
In the frequency domain, the blurred image clearly loses its high-frequency components.
Notice that the Adidas image filtered by the Laplacian of Gaussian has a larger white region in the frequency domain, because its low-frequency components are removed by the high-pass filter while its high-frequency components are strengthened.
It is hard to align the owl and the parrot well because they stand in slightly different postures.
Part 2.3: Gaussian and Laplacian Stacks & Part 2.4: Multiresolution Blending (a.k.a. the oraple!)
Orange and Apple
Orange
Gaussian stacks
Laplacian stacks
depth = 0
depth = 1
depth = 2
depth = 3
depth = 4
Apple
Gaussian stacks
Laplacian stacks
depth = 0
depth = 1
depth = 2
depth = 3
depth = 4
NOTE: The last level of the Laplacian stack is the same as the last level of the Gaussian stack, but I apply normalization for display, which is why they look different from each other.
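A minimal sketch of the two stacks (function names, depth, and sigma are my assumptions; a Gaussian stack blurs repeatedly without downsampling, and each Laplacian level is the difference of adjacent Gaussian levels):

```python
import numpy as np
from scipy import ndimage

def gaussian_stack(img, depth=5, sigma=2.0):
    # Blur repeatedly, keeping full resolution (no downsampling).
    stack = [img]
    for _ in range(depth - 1):
        stack.append(ndimage.gaussian_filter(stack[-1], sigma))
    return stack

def laplacian_stack(g_stack):
    # Difference of adjacent Gaussian levels; the last level
    # equals the last Gaussian level, as noted above.
    diffs = [g_stack[i] - g_stack[i + 1] for i in range(len(g_stack) - 1)]
    return diffs + [g_stack[-1]]

# Stand-in image; in the project this would be the apple or orange photo.
img = np.random.rand(32, 32)
g_stack = gaussian_stack(img)
l_stack = laplacian_stack(g_stack)
```

Because the differences telescope, summing all Laplacian levels reconstructs the original image exactly.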
I implemented the algorithm as presented in the paper and the lecture slides:
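The blending step can be sketched as follows, assuming the stack helpers above and a soft mask: each output level mixes the two Laplacian stacks weighted by a Gaussian stack of the mask, and the levels are summed at the end. All names here are illustrative, not the exact code.

```python
import numpy as np
from scipy import ndimage

def gaussian_stack(img, depth=5, sigma=2.0):
    # Repeated blurring at full resolution.
    stack = [img]
    for _ in range(depth - 1):
        stack.append(ndimage.gaussian_filter(stack[-1], sigma))
    return stack

def laplacian_stack(g_stack):
    # Differences of adjacent Gaussian levels, plus the final Gaussian level.
    diffs = [g_stack[i] - g_stack[i + 1] for i in range(len(g_stack) - 1)]
    return diffs + [g_stack[-1]]

def blend(img_a, img_b, mask, depth=5, sigma=2.0):
    # Multiresolution blending: mix the Laplacian stacks of the two images,
    # weighted per level by a Gaussian stack of the mask, then sum.
    la = laplacian_stack(gaussian_stack(img_a, depth, sigma))
    lb = laplacian_stack(gaussian_stack(img_b, depth, sigma))
    gm = gaussian_stack(mask, depth, sigma)
    return sum(m * a + (1 - m) * b for a, b, m in zip(la, lb, gm))

# Toy example: a half-and-half mask, as in the oraple.
img_a = np.random.rand(32, 32)
img_b = np.random.rand(32, 32)
mask = np.zeros((32, 32))
mask[:, :16] = 1.0
result = blend(img_a, img_b, mask)
```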
From the Car + Me example, I find that merging objects with very different textures (metal and skin) is hard; you can still easily tell the result is not natural.
From Parrot + Alpaca, I find that stitching these two objects looks more realistic. I think that is because they have similar textures (they both have feathers).
Conclusion
From this homework, I learned that:
Human perception of whether something looks natural is closely tied to the frequency domain!
How to manipulate images in the frequency domain! Before this homework and the related lectures, I didn't know images could be manipulated this way!