In this project, I test out some of the different ways in which we can modify and combine images through the use of filters.
Here, we display the partial derivatives of the cameraman image provided in class with respect to x (left) and y (right):
The gradient magnitude image is shown below on the left. The edge image is shown on the right: pixels whose gradient magnitude is at least 0.28 are set to maximum brightness, and all other pixels are suppressed.
At each pixel, the gradient magnitude is calculated using the standard Pythagorean theorem equation, c = sqrt(a^2 + b^2), where a and b are the magnitudes of the partial derivatives in the x and y directions. The partial derivatives are calculated by convolving the image with the "finite difference operators": [1, -1] and [1, -1]^T for x and y, respectively.
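The pipeline described above can be sketched as follows. This is a minimal illustration, not the exact code used for the report; the function name `gradient_edges` is hypothetical, and the 0.28 threshold is the one quoted above.

```python
import numpy as np
from scipy.signal import convolve2d

def gradient_edges(im, threshold=0.28):
    """Partial derivatives, gradient magnitude, and thresholded edge image.
    `im` is a grayscale float image with values in [0, 1]."""
    Dx = np.array([[1.0, -1.0]])   # finite difference operator in x
    Dy = Dx.T                      # [1, -1]^T for y
    dx = convolve2d(im, Dx, mode="same")
    dy = convolve2d(im, Dy, mode="same")
    grad_mag = np.sqrt(dx**2 + dy**2)            # c = sqrt(a^2 + b^2) per pixel
    edges = (grad_mag >= threshold).astype(float)  # max brightness vs. black
    return dx, dy, grad_mag, edges
```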
Here are the partial derivatives of the blurred cameraman provided in class with respect to x (left) and y (right):
Here are the gradient magnitude image and edge image:
Below, we create "Derivative of Gaussian (DoG)" filters in the x and y directions, so that we only need to run one convolution per direction on the image to achieve the same results. The resulting DoG filters are shown below, with the x-direction filter on the left and the y-direction filter on the right.
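Because convolution is associative, convolving the finite difference operator into the Gaussian first gives a single filter per direction: im * (G * D) = (im * G) * D. A minimal sketch (kernel size and sigma are illustrative, not the report's values):

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_2d(ksize, sigma):
    """Separable 2D Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(ksize) - (ksize - 1) / 2.0
    g = np.exp(-ax**2 / (2 * sigma**2))
    g /= g.sum()
    return np.outer(g, g)

def dog_filters(ksize=9, sigma=1.5):
    """Derivative-of-Gaussian filters: fold the finite difference
    operators into the Gaussian so each direction is one convolution."""
    G = gaussian_2d(ksize, sigma)
    Dx = np.array([[1.0, -1.0]])
    dog_x = convolve2d(G, Dx)    # full convolution of the two kernels
    dog_y = convolve2d(G, Dx.T)
    return dog_x, dog_y
```

Each DoG filter sums to zero (since the difference operator does), which is why flat regions of the image produce zero response.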
The resulting gradient magnitude image and edge image are shown below.
"What differences do you see?": Using the Gaussian filter to blur the image before calculating the derivatives results in much smoother partial derivatives, gradient magnitude images, and edge images.
"Verify that you get the same result as before.": To verify that we get the same result with the combined filter as when applying the filters separately, I took the L-2 norm of the difference between the two gradient edge images (compared as vectors) and found the result to be 0. There was a slight difference between the gradient magnitude images, however, presumably due to floating point errors.
Here, we use the unsharp mask filter to sharpen an image of the Taj Mahal (original on left, sharpened on right).
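Unsharp masking adds back a scaled copy of the image's high frequencies (the part a Gaussian blur removes). A minimal sketch, with illustrative defaults for the blur width `sigma` and sharpening strength `alpha` (in practice these are tuned per image):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(im, sigma=2.0, alpha=1.0):
    """Sharpen `im` (float image in [0, 1]) by boosting high frequencies:
    sharpened = im + alpha * (im - blur(im))."""
    blurred = gaussian_filter(im, sigma)   # low-pass
    high_freq = im - blurred               # what the blur removed
    return np.clip(im + alpha * high_freq, 0.0, 1.0)
```

Equivalently, the whole operation can be folded into a single "unsharp mask filter", (1 + alpha) * unit_impulse - alpha * gaussian, applied in one convolution.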
I've also used this filter to sharpen other photos. Here's the sharpening of a photo of my dog, Milo (I call him Miller) (original on left, sharpened on right).
Here's a self-portrait I took in front of a shop, which happened to have a mirror on the inside (original on left, sharpened on right).
Finally, here's a photo of a sign in my hometown that I first blurred, then resharpened (original on top, blurred in middle, resharpened on bottom).
In this part of the project, we create hybrid images by extracting the high frequencies of one image and the low frequencies of another, then combining the two into one image. This is done using a high-pass filter and a low-pass filter, respectively.
The use of color in hybrid images is a bit tricky; when only one image is used as a source for color, the resulting image may end up very unsaturated (though this can be countered by increasing the saturation of the colored photo). If one image is chosen to be colored, it is better to choose the high-frequency image, since colors from the low-frequency image can still be seen from up close, making the image look a bit weird. Sometimes, combining the colors works (e.g. the photo below with Kanye and Kanye), but only if the two images already have similar colors.
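The hybrid construction described above can be sketched like this. The two cutoff sigmas are illustrative and are tuned per image pair; inputs are assumed to be aligned grayscale float images of the same shape (for color, filter each channel separately, or pass a per-axis sigma like `(s, s, 0)`):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid(im_low, im_high, sigma_low=6.0, sigma_high=3.0):
    """Combine the low frequencies of im_low with the high
    frequencies of im_high into one image."""
    low = gaussian_filter(im_low, sigma_low)               # low-pass
    high = im_high - gaussian_filter(im_high, sigma_high)  # high-pass
    return np.clip(low + high, 0.0, 1.0)
```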
Here's the combination of a professor, Derek Hoiem, and his cat.
Original Photos:
Hybrid Photos:
Original Photos:
Hybrid Photos:
Here's the Fourier analysis for the black and white hybrid photo:
Original Photos (I was going to resize the first picture of Kanye, but I find it hilarious that he just fills the screen):
Hybrid Photos:
Finally, here's an example of an unsuccessful hybrid image between Kanye West and Kermit the Frog. Since the two images don't share many common features, it's natural to expect that they wouldn't make a very good hybrid image.
In this section, I create Gaussian and Laplacian stacks. They are similar to Gaussian and Laplacian pyramids, except that we don't downsample across the layers.
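A minimal sketch of the two stacks, assuming a fixed blur sigma at every level (the level count and sigma here are illustrative). The last Laplacian level is the most-blurred Gaussian level, so the Laplacian stack sums back to the original image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_stack(im, levels=5, sigma=2.0):
    """Each level blurs the previous one; no downsampling,
    so every level keeps the original resolution."""
    stack = [im]
    for _ in range(levels - 1):
        stack.append(gaussian_filter(stack[-1], sigma))
    return stack

def laplacian_stack(im, levels=5, sigma=2.0):
    """Differences of consecutive Gaussian levels, plus the final
    (most blurred) level so the stack sums back to the image."""
    g = gaussian_stack(im, levels, sigma)
    return [g[i] - g[i + 1] for i in range(levels - 1)] + [g[-1]]
```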
Images are blended together in section 2.4.
Here are the original orange and apple images, provided in class:
Here are the Gaussian and Laplacian Stacks for the orange and apple:
Black and White:
In Color:
Here's another example with my own images:
Originals:
And the Gaussian & Laplacian Stacks:
Black and White:
In Color:
Finally, we take the Laplacian stacks and use them to blend the images, with the help of masks (which we also create Gaussian & Laplacian stacks for).
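The blending step can be sketched as follows: weight each Laplacian level of the two images by the corresponding Gaussian level of the mask, then sum the blended stack. This is a self-contained illustration (level count and sigma are illustrative), re-declaring the stack helpers rather than reusing the report's exact code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_stack(im, levels=5, sigma=2.0):
    stack = [im]
    for _ in range(levels - 1):
        stack.append(gaussian_filter(stack[-1], sigma))
    return stack

def laplacian_stack(im, levels=5, sigma=2.0):
    g = gaussian_stack(im, levels, sigma)
    return [g[i] - g[i + 1] for i in range(levels - 1)] + [g[-1]]

def blend(im1, im2, mask, levels=5, sigma=2.0):
    """Multiresolution blend: the Gaussian stack of the mask softens
    the seam differently at each frequency band. `mask` is a float
    image in [0, 1], where 1 selects im1."""
    l1 = laplacian_stack(im1, levels, sigma)
    l2 = laplacian_stack(im2, levels, sigma)
    gm = gaussian_stack(mask, levels, sigma)
    blended = [g * a + (1 - g) * b for g, a, b in zip(gm, l1, l2)]
    return np.clip(sum(blended), 0.0, 1.0)
```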
Here's the blended Oraple!
Originals:
Results:
Here's the Laplacian stack for the blending of the Oraple:
Black and White:
In Color:
Here's the blend of my self portrait and the street sign (original photos same as in section 2.3):
This blend uses an interesting mask, whose values are not binary and whose boundary is elliptical:
With the masking values being non-binary, I'm able to achieve somewhat of a "reflection" effect, since both images remain partially visible throughout the entire photo.
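A mask of this kind can be built like so. All parameter values here are illustrative, not the ones used for the blend above; the key point is that the two weights are strictly between 0 and 1, so neither image ever fully disappears:

```python
import numpy as np

def soft_ellipse_mask(h, w, cy, cx, ry, rx, inside=0.75, outside=0.25):
    """Non-binary elliptical mask: weight `inside` within the ellipse
    centered at (cy, cx) with radii (ry, rx), `outside` elsewhere."""
    ys, xs = np.mgrid[0:h, 0:w]
    in_ellipse = ((ys - cy) / ry) ** 2 + ((xs - cx) / rx) ** 2 <= 1.0
    return np.where(in_ellipse, inside, outside)
```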
Here are the Laplacian stacks used in the blend:
Black and White:
In Color:
Here are some more images that I blended together, the stacks used to create them, and the final results.
Originals:
Blended:
Individual Gaussian and Laplacian Stacks, in Black and White:
Individual Gaussian and Laplacian Stacks, in Color:
The Blended Laplacian stacks, in black and white:
The Blended Laplacian stacks, in color:
Here, I switch around the ordering of the blend with the same two images to get another interesting result.
The Blended Laplacian stacks, in black and white:
The Blended Laplacian stacks, in color:
The coolest thing I learned from this assignment was how images can be thought of as a linear combination of frequencies, and the way that we can take advantage of these different frequencies to blend images together in such a way that feels natural. As a photographer, it was super satisfying to blend photos that I've taken myself.
Quotes, Quoted Headers: https://inst.eecs.berkeley.edu/~cs194-26/fa21/hw/proj2/
Nicholas Dirks: https://history.berkeley.edu/nicholas-dirks
Carol Christ: https://www.smith.edu/president-carol-christ