The x and y gradients of the cameraman image were created by convolving the image with Dx and Dy respectively, since the Dx and Dy filters take the difference between neighboring pixel values in each direction. The magnitude of the gradient is then computed from the dx and dy values (i.e. sqrt(dx ** 2 + dy ** 2)). Finally, the gradient magnitude image is binarized: pixels above a certain threshold are converted to black and all other pixels to white. A large gradient magnitude means a large overall change in pixel values, which indicates an edge.
Below are the results of applying the gradient in the x and y directions, taking the magnitude of the gradient, and finally detecting the edges by binarizing the gradient magnitude image with a specific threshold found through trial and error.
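The pipeline described above can be sketched as follows. This is a minimal version, assuming grayscale pixel values in [0, 1] and scipy's convolve2d; the threshold value is whatever trial and error settles on:

```python
import numpy as np
from scipy.signal import convolve2d

def edge_map(img, threshold):
    """Binarized gradient magnitude of a grayscale image (values assumed in [0, 1])."""
    Dx = np.array([[1.0, -1.0]])      # finite difference in x
    Dy = np.array([[1.0], [-1.0]])    # finite difference in y
    dx = convolve2d(img, Dx, mode="same")
    dy = convolve2d(img, Dy, mode="same")
    magnitude = np.sqrt(dx ** 2 + dy ** 2)
    # Edge pixels are 1 here; the writeup renders them as black on white.
    return (magnitude > threshold).astype(float)
```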
Blurring the image made the edges bolder and more prominent. Blurring smooths out noise, i.e. the small, sudden changes in pixel values that aren't associated with an edge. The edges also become thicker because the large change in pixel value is spread out across multiple pixels after blurring.
Below is the result of blurring the image, convolving it with the Dx and Dy filters, taking the magnitude of the gradient, and then binarizing the gradient magnitude image.
Below is the result of achieving the same thing with a single convolution by using Derivative of Gaussian (DoG) filters. The gaussian filter is first convolved with Dx and Dy, and the resulting DoG filters are then convolved with the original image directly, so no separate blurring step is needed.
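A sketch of building the DoG filter, assuming an arbitrary kernel size and sigma (not necessarily the writeup's exact values). By associativity of convolution, one pass with DoG_x matches blurring with G and then differentiating with Dx:

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_2d(ksize, sigma):
    """Separable 2D gaussian kernel, normalized to sum to 1."""
    ax = np.arange(ksize) - (ksize - 1) / 2.0
    g = np.exp(-ax ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    return np.outer(g, g)

G = gaussian_2d(9, 1.5)            # example parameters, an assumption
Dx = np.array([[1.0, -1.0]])
DoG_x = convolve2d(G, Dx)          # derivative-of-gaussian filter
# Convolving the image once with DoG_x is equivalent to blurring
# with G and then differentiating with Dx.
```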
The high frequencies of an image can be obtained by subtracting a blurred version of the image from the original, since this is equivalent to subtracting the low frequencies from all the frequencies. The image can then be "sharpened" by adding these high frequencies back to the original. The same process can be done with a single convolution using the unsharp mask filter, as described by the equation above. Below are the results with alpha set to 1.
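A sketch of both versions, assuming an odd-sized gaussian kernel so the unit impulse has a well-defined center. The two-step function and the single unsharp mask filter produce the same result by linearity of convolution:

```python
import numpy as np
from scipy.signal import convolve2d

def sharpen(img, gauss, alpha):
    """Unsharp masking: add alpha times the high frequencies back to the image."""
    blurred = convolve2d(img, gauss, mode="same", boundary="symm")
    high_freq = img - blurred
    return img + alpha * high_freq

def unsharp_mask_filter(gauss, alpha):
    """The equivalent single filter: (1 + alpha) * impulse - alpha * gauss."""
    impulse = np.zeros_like(gauss)
    impulse[gauss.shape[0] // 2, gauss.shape[1] // 2] = 1.0
    return (1 + alpha) * impulse - alpha * gauss
```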
With an already sharp image, the edges of the sharpened result become even more prominent. But since alpha is only 1 and this is an art piece, there isn't much difference and it doesn't look unnatural.
Hybrid images were created by aligning two images, high-pass filtering one and low-pass filtering the other, and averaging the two filtered images together. When viewing the hybrid image up close, the human eye picks up the higher frequencies more, so it sees the high-pass filtered image; from a distance, one sees the low-pass filtered image. Additionally, to create a better effect, I used a different sigma for the gaussian filters when creating the low- and high-pass images.
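The filtering and averaging step can be sketched as below (alignment omitted), using scipy's gaussian_filter and separate sigmas for the two gaussians as in the writeup:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid(im_high, im_low, sigma_high, sigma_low):
    """Average a high-pass filtered image with a low-pass filtered one."""
    low = gaussian_filter(im_low, sigma_low)               # keep low frequencies
    high = im_high - gaussian_filter(im_high, sigma_high)  # keep high frequencies
    return (low + high) / 2.0
```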
When playing with colors, it turned out that having both images in color gave very similar results to having just the low-passed image in color, and both were much better than having just the high-passed image in color. The effect was even better when the two images were a similar shade.
Below are the visualizations of the fourier transforms for this hybrid image.
Below are some more successful examples of hybrid images.
The following is an example that failed: only the high-pass filtered rabbit can be seen, while the low-pass filtered wolf cannot.
A gaussian stack was created by repeatedly convolving an image with a gaussian filter, so that the image is blurrier at each level. A laplacian stack was created by taking the difference between successive gaussian-filtered images, with the last laplacian level equal to the last gaussian level. Below are the results for the left side of the apple image and the right side of the orange image.
The blended image is created by blending each laplacian level separately and then adding all the blended levels together to get back all the frequencies in the original image.
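The stacks and the blend can be sketched as follows. One assumption here: the mask is also built into a gaussian stack so that each frequency band is blended with a progressively softer seam, which is the standard multiresolution blending approach; the writeup doesn't spell out this detail. Because the laplacian stack telescopes, summing its levels recovers the original image exactly:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_stack(img, levels, sigma=2.0):
    """Repeatedly blur (no downsampling): each level is blurrier than the last."""
    stack = [img]
    for _ in range(levels - 1):
        stack.append(gaussian_filter(stack[-1], sigma))
    return stack

def laplacian_stack(gstack):
    """Differences of successive gaussian levels; last level is the last gaussian."""
    return [a - b for a, b in zip(gstack[:-1], gstack[1:])] + [gstack[-1]]

def blend(im1, im2, mask, levels=5):
    """Blend each laplacian level with a blurred mask, then sum the levels."""
    l1 = laplacian_stack(gaussian_stack(im1, levels))
    l2 = laplacian_stack(gaussian_stack(im2, levels))
    gm = gaussian_stack(mask, levels)
    return sum(m * a + (1 - m) * b for m, a, b in zip(gm, l1, l2))
```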
Below are some more examples of blended images.
The following blend uses an irregular filter.
The most interesting thing I learned from this assignment was creating the hybrid photos. It was cool to learn how the human eye detects high frequencies when looking at a nearby image but detects low frequencies when looking at a far-away image. I really enjoyed thinking about creative images to make into hybrids and playing with the different filters so that the low- and high-frequency images could be distinguished. I thought it was really interesting that simply adding the low-pass and high-pass images together would show both images so well.