CS194-26 Project 2 Fun with Filters and Frequencies!

Daniel Lin

Part 1: Fun with Filters

The first part of this project shows how we can use gradients and convolutions to detect edges and straighten images.

Part 1.1 Finite Difference Operators

The first step in detecting edges is to compute the gradient of the image, which tells us the direction and rate of intensity change at each pixel. Below are the original image and its partial derivatives.
Cameraman
Cameraman X_Derivative
Cameraman Y_Derivative
After computing the gradients, we compute the gradient magnitude and binarize it with a threshold: pixels above the threshold are mapped to 1, and those below are mapped to 0. Lower thresholds keep more edges but also more noise; higher thresholds suppress the noise but drop some of the fainter edges.
Cameraman Gradient Magnitude
Cameraman Gradient Magnitude 0.08 Threshold
Cameraman Gradient Magnitude 0.25 Threshold
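This pipeline can be sketched in a few lines of NumPy/SciPy. Here is a minimal version on a toy image; the toy image and the 0.5 threshold are illustrative, not the actual cameraman values above:

```python
import numpy as np
from scipy.signal import convolve2d

# Finite difference operators
Dx = np.array([[1, -1]])
Dy = np.array([[1], [-1]])

def edge_image(im, threshold):
    """Binarize the gradient magnitude of a grayscale image in [0, 1]."""
    gx = convolve2d(im, Dx, mode="same", boundary="symm")
    gy = convolve2d(im, Dy, mode="same", boundary="symm")
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    return (magnitude > threshold).astype(float)

# Toy image: dark left half, bright right half -> one vertical edge
im = np.zeros((8, 8))
im[:, 4:] = 1.0
edges = edge_image(im, 0.5)
```

The `boundary="symm"` option mirrors the image at its borders so the difference operators do not fire on the image boundary itself.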

Part 1.2 Derivative of Gaussian (DoG) Filter

We see that even though the methods above give decent edge detection, the results are rather noisy. We can suppress this noise by first blurring the image with a Gaussian filter and then running the same algorithm as above.
Cameraman Blurred
Cameraman Blurred X_Derivative
Cameraman Blurred Y_Derivative
Similar to above, we can vary the threshold and the filter size to make the edges more visible.
Cameraman Example Blurred Gradient Magnitude
Cameraman with big kernel, small threshold
Cameraman with small kernel, big threshold
To save computation time, we can precompute the convolution of the Gaussian filter with the finite difference operators, giving derivative of Gaussian (DoG) filters, and then convolve each DoG filter with the image in a single pass. This way, we do not have to run two separate convolutions over the original image.
Gaussian Kernel X Derivative
Gaussian Kernel Y Derivative
Cameraman small kernel computed differently
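This shortcut follows from the associativity of convolution: (im * G) * Dx = im * (G * Dx). A minimal sketch verifying the equivalence, where the kernel size and sigma are illustrative choices rather than the exact values used above:

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size, sigma):
    """Separable 2D Gaussian built from an outer product of 1D Gaussians."""
    ax = np.arange(size) - (size - 1) / 2
    g = np.exp(-ax ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    return np.outer(g, g)

Dx = np.array([[1, -1]])
G = gaussian_kernel(9, 1.5)

rng = np.random.default_rng(0)
im = rng.random((32, 32))

# Two-step: blur, then differentiate
blurred = convolve2d(im, G, mode="same", boundary="symm")
two_step = convolve2d(blurred, Dx, mode="same", boundary="symm")

# One-step: precompute the DoG filter, then a single convolution
DoG_x = convolve2d(G, Dx)          # 9x10 derivative-of-Gaussian filter
one_step = convolve2d(im, DoG_x, mode="same", boundary="symm")
```

Away from the image borders (where the boundary handling differs slightly between the two routes), the results agree to floating point precision.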

Part 1.3 Image Straightening

Now that we can detect edges properly, we can try to straighten images. For many scenes in this world, a straightened image maximizes the number of vertical and horizontal edges. In each of the next sections, we will display a before and after of the image, each with the original image, the edges, and a histogram of the gradient angle at each visible edge pixel. Vertical edges have gradient angles close to 90 degrees, and horizontal edges have angles close to 0 or 180 degrees.
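The search can be sketched as follows: rotate the image over a range of candidate angles, measure the gradient angle at strong-edge pixels, and keep the rotation that maximizes the fraction of near-horizontal/vertical angles. The angle range, step size, and thresholds below are illustrative, not the exact values used for the images in this section:

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import convolve2d

def straightness_score(im, angle, tol=2.0):
    """Fraction of strong-edge pixels whose gradient angle is near
    vertical (90 deg) or horizontal (0/180 deg) after rotating by angle."""
    rot = rotate(im, angle, reshape=False)
    # Crop the center to ignore the artifacts rotation leaves at the borders
    h, w = rot.shape
    rot = rot[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    gx = convolve2d(rot, np.array([[1, -1]]), mode="same")
    gy = convolve2d(rot, np.array([[1], [-1]]), mode="same")
    mag = np.hypot(gx, gy)
    strong = mag > 0.1
    if not strong.any():
        return 0.0
    theta = np.degrees(np.arctan2(gy[strong], gx[strong])) % 180
    near_axis = (theta < tol) | (theta > 180 - tol) | (np.abs(theta - 90) < tol)
    return near_axis.mean()

def straighten(im, angles=np.arange(-10, 10.5, 0.5)):
    scores = [straightness_score(im, a) for a in angles]
    best = angles[int(np.argmax(scores))]
    return best, rotate(im, best, reshape=False)

# Synthetic check: horizontal stripes tilted by -4 degrees
im = np.zeros((64, 64))
im[::8, :] = 1.0
tilted = rotate(im, -4, reshape=False)
best_angle, result = straighten(tilted)
```

Scoring only axis-aligned gradient angles is exactly the assumption that fails on the X-pattern image later in this section, where the dominant edges are diagonal.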

Rotated Building

Original
Original Edges
Rotated
Rotated Edge

Vacuuming a Wall

Original
Original Edges
Rotated
Rotated Edge

Joker at the Hospital

Original
Original Edges
Rotated
Rotated Edge

Po

Original
Original Edges
Rotated
Rotated Edge

X-Patterns

In the following image, our assumption that the straightest image maximizes the number of horizontal and vertical edges breaks down, because the dominant lines form X patterns along the diagonals. Thus, image straightening does not work as well here.
Original
Original Edges
Rotated
Rotated Edge

Part 2: Fun with Frequencies

In the previous part, we showed that we could use gradients and edge detectors to find the edges of the image. In this part, we will play around more with frequencies so that we are able to sharpen and combine images.

Part 2.1 Image Sharpening

In this part, we try to sharpen an image. We do that by first computing the low frequencies with a Gaussian blur and subtracting them from the image to isolate the high frequencies. We then amplify the high frequencies and add them back to the image. We will show the transformation of each example image below.
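This procedure is the classic unsharp mask: sharpened = im + alpha * (im - blurred). A minimal sketch, where sigma and alpha are illustrative parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(im, sigma=2.0, alpha=1.0):
    """Unsharp masking: boost whatever the Gaussian blur removes."""
    blurred = gaussian_filter(im, sigma)
    high_freq = im - blurred
    return np.clip(im + alpha * high_freq, 0, 1)

# A step edge picks up overshoot on both sides, which reads as "sharper"
im = np.full((16, 16), 0.25)
im[:, 8:] = 0.75
result = sharpen(im, sigma=1.0, alpha=1.5)
```

Flat regions are untouched (their high-frequency content is zero), while the two sides of the edge overshoot and undershoot, increasing the perceived contrast.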

Taj Mahal

Original
Blurred
High Frequencies
Sharpened

Joker Burns Money

Original
Blurred
High Frequencies
Sharpened
Now on the same photo, we take the sharpened one, blur it, and resharpen it even more to see the results.
Original
Sharpened
Sharpened and Blurred
Sharpened, Blurred, Resharpened
We see that blurring the sharpened image does not give back an exact copy of the original: sharpening permanently boosted the high frequencies, so even after the blur those frequencies still stand out a little more. Resharpening the blurred copy helps, but has less effect than the first pass, because it can only amplify the frequencies that survived the blur.

Part 2.2 Hybrid Images

In this part of the project, we create hybrid images by combining the high frequencies of one image with the low frequencies of another.
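The core of the technique fits in a few lines: low-pass one image, high-pass the other, and sum. A minimal sketch on random arrays, with illustrative cutoff sigmas (the real cutoffs are tuned per image pair):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid(im_high, im_low, sigma_high=3.0, sigma_low=6.0):
    """Low frequencies of one image plus high frequencies of another.
    Up close the high-pass image dominates; from afar only the
    low-pass image survives."""
    low = gaussian_filter(im_low, sigma_low)
    high = im_high - gaussian_filter(im_high, sigma_high)
    return np.clip(low + high, 0, 1)

rng = np.random.default_rng(1)
a = rng.random((32, 32))
b = rng.random((32, 32))
out = hybrid(a, b)
```

The two sigmas need not match: sigma_high controls how much detail the near image keeps, and sigma_low controls how blurry the far image must be before it fades out.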

Derek and Nutmeg

We start off with Derek, a former professor, and Nutmeg, his cat, to see how the hybrid turns out.
Original Images
Derek
Nutmeg

We then convert the images to the frequency domain.
Frequency Images
Derek Frequency
Nutmeg Frequency
Derek High Frequency
Nutmeg Low Frequency
Combined Frequency
Combining the filters, we arrive at our final image.
Final Image:
Derek and Nutmeg Hybrid

Steven and Pastor


Original Images
Steve
Pastor

Final Image:
Steve and Pastor Hybrid

Yoda and Baby Yoda


Original Images
Yoda
Baby Yoda

Final Image:
Yoda and Baby Hybrid

MrBeast and the Most Liked Egg

Original Images
MrBeast
Egg

Final Image:
MrBeast and Egg hybrid
Our final image demonstrates somewhat of a failure case. Even though the egg almost disappears at farther distances and becomes more visible at closer ones, MrBeast is almost always visible, especially his hair. This exposes a limitation of the technique: when the two subjects have very different shapes, the low-pass filter cannot fully hide the mismatched silhouette.

Part 2.3 Gaussian and Laplacian Stacks

In this part of the project, we build Gaussian and Laplacian stacks. A Gaussian stack repeatedly blurs the image without downsampling, and a Laplacian stack stores the difference between consecutive Gaussian levels, so each layer isolates one band of frequencies. We will be looking through the different layers of each stack.
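Since every level stays at full resolution, the construction is just repeated blurring and differencing. A minimal sketch, with an illustrative per-level sigma; appending the final Gaussian level as the last Laplacian layer makes the stack sum back to the original image exactly:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_stack(im, depth=5, sigma=2.0):
    """Repeatedly blur without downsampling; every level is full size."""
    stack = [im]
    for _ in range(depth):
        stack.append(gaussian_filter(stack[-1], sigma))
    return stack

def laplacian_stack(im, depth=5, sigma=2.0):
    """Each level is the band of frequencies between two Gaussian levels;
    the last level is the low-pass residual, so the stack sums to im."""
    g = gaussian_stack(im, depth, sigma)
    return [g[i] - g[i + 1] for i in range(depth)] + [g[-1]]

im = np.random.default_rng(2).random((32, 32))
lap = laplacian_stack(im)
reconstructed = sum(lap)
```

The reconstruction property is what makes these stacks useful for blending in the next part: we can edit each band independently and still sum back to a valid image.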

Lincoln

Original
Original Image
Gaussian Stack
Depth 0
Depth 1
Depth 2
Depth 3
Depth 4
Depth 5
Laplacian Stack
Depth 0
Depth 1
Depth 2
Depth 3
Depth 4
Depth 5

Derek and Nutmeg

Let's see if we can apply the same Laplacian stack with our fun photo from above, Derek and Nutmeg.
Original
Original Image
Gaussian Stack
Depth 0
Depth 1
Depth 2
Depth 3
Depth 4
Depth 5
Laplacian Stack
Depth 0
Depth 1
Depth 2
Depth 3
Depth 4
Depth 5
We see that for the most part, we can see Nutmeg clearly. We can also see Derek pretty well in the first Laplacian image, signifying that the earliest levels of the stack capture the highest frequencies.

Part 2.4 Multi-Resolution Images

In this part of the project, we blend two images together by creating a mask, decomposing each image into frequency bands with a Laplacian stack, and then combining the corresponding bands through a progressively blurred version of the mask.
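A minimal sketch of the per-band blend, reusing the stack construction from the previous part (depth and sigma are illustrative). Blurring the mask one stack level at a time gives fine detail a sharp seam and coarse detail a soft one:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_stack(im, depth, sigma):
    stack = [im]
    for _ in range(depth):
        stack.append(gaussian_filter(stack[-1], sigma))
    return stack

def laplacian_stack(im, depth, sigma):
    g = gaussian_stack(im, depth, sigma)
    return [g[i] - g[i + 1] for i in range(depth)] + [g[-1]]

def blend(im1, im2, mask, depth=5, sigma=2.0):
    """Blend each Laplacian band through the matching Gaussian level
    of the mask, then sum the blended bands back into one image."""
    l1 = laplacian_stack(im1, depth, sigma)
    l2 = laplacian_stack(im2, depth, sigma)
    gm = gaussian_stack(mask, depth, sigma)
    levels = [g * a + (1 - g) * b for g, a, b in zip(gm, l1, l2)]
    return np.clip(sum(levels), 0, 1)

# Toy blend: two flat images joined down the middle by a hard mask
im1 = np.full((32, 32), 0.2)
im2 = np.full((32, 32), 0.8)
mask = np.zeros((32, 32))
mask[:, :16] = 1.0
out = blend(im1, im2, mask)
```

Even though the input mask is a hard step, the output transitions smoothly across the seam, which is the whole point of blending per frequency band.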

Orapple

Original Images and Mask
Apple
Orange
Mask
Orapple Laplacian Stack
Depth 0
Depth 1
Depth 2
Depth 3
Depth 4
Depth 5
Orapple in Color
Final Blend

Burning New York

Original Images and Mask
New York City
Sunset
Mask
I created the mask by computing the Laplacian stack of the NYC image, finding the boundary points where the skyline touches the sky, and mapping everything below that boundary into the mask, leaving every point above it for the sky and the background.
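The column-by-column fill can be sketched as follows. This is a simplified version that starts from a binary edge map rather than the full Laplacian stack, and the toy "skyline" is purely illustrative:

```python
import numpy as np

def skyline_mask(edges):
    """Given a binary edge map, mark everything at or below the first
    edge pixel in each column as foreground, and the rest as sky."""
    h, w = edges.shape
    mask = np.zeros((h, w))
    for col in range(w):
        rows = np.nonzero(edges[:, col])[0]
        if rows.size:
            mask[rows[0]:, col] = 1.0
    return mask

# Toy skyline: two "buildings" of different heights
edges = np.zeros((6, 4))
edges[2, 0] = edges[2, 1] = 1.0   # taller building in columns 0-1
edges[4, 2] = edges[4, 3] = 1.0   # shorter building in columns 2-3
mask = skyline_mask(edges)
```

Any jaggedness along the recovered boundary is softened later anyway, since the blend only ever sees Gaussian-blurred versions of the mask.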
NYC Laplacian Stack
Depth 0
Depth 1
Depth 2
Depth 3
Depth 4
Depth 5
NYC in color
Final Blend

Plane over Ocean

Original Images and Mask
Airplane in the Sky
Ocean
Mask
I created this mask with the same approach as the previous blend: using the Laplacian stack to find the boundary between the two regions, and mapping everything on one side of that boundary into the mask.
Plane and Ocean Laplacian Stack
Depth 0
Depth 1
Depth 2
Depth 3
Depth 4
Depth 5
Airplane in Ocean
Final Blend