The gradient of an image is computed by convolving the image with finite difference operators in the X and Y directions. The gradient magnitude is then (D_x^2 + D_y^2)^(1/2), where D_x is the partial derivative of the image with respect to X and D_y is the partial derivative with respect to Y. These partials, the gradient magnitude, and an edge image computed by thresholding the gradient magnitude (i.e., a pixel is set to 1 if its gradient magnitude is strong enough, else 0) are shown for the cameraman image below.
[Figures: partial derivatives D_x and D_y, gradient magnitude, and edge image of cameraman]
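A minimal sketch of this procedure, assuming SciPy; the kernel shapes follow the finite difference definition, while the threshold value is an illustrative assumption:

```python
import numpy as np
from scipy.signal import convolve2d

def gradient_edges(img, thresh=0.1):
    """Finite-difference gradients, gradient magnitude, and a thresholded edge map."""
    dx_filter = np.array([[1.0, -1.0]])    # finite difference in X
    dy_filter = np.array([[1.0], [-1.0]])  # finite difference in Y
    dx = convolve2d(img, dx_filter, mode="same", boundary="symm")
    dy = convolve2d(img, dy_filter, mode="same", boundary="symm")
    mag = np.sqrt(dx**2 + dy**2)           # (D_x^2 + D_y^2)^(1/2)
    edges = (mag > thresh).astype(float)   # 1 where the gradient is strong enough
    return dx, dy, mag, edges
```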
To remove noise from the images above, cameraman is first convolved with a smoothing operator, a Gaussian filter (see the image below); then the procedure above is repeated.
[Figures: Gaussian filter and the smoothed cameraman's partial derivatives, gradient magnitude, and edge image]
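A sketch of the Gaussian filter, built as the outer product of 1-D Gaussians; the kernel size and sigma here are illustrative assumptions:

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_2d(ksize=9, sigma=1.5):
    """2-D Gaussian kernel as the outer product of 1-D Gaussians (size/sigma assumed)."""
    x = np.arange(ksize) - ksize // 2
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()                       # normalize so the kernel sums to 1
    return np.outer(g, g)

# smoothing a noisy image reduces its pixel-to-pixel variation
rng = np.random.default_rng(0)
noisy = rng.random((64, 64))
smooth = convolve2d(noisy, gaussian_2d(), mode="same", boundary="symm")
```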
Some detail in the image is lost because of the low-pass filter (most noticeably in the camera itself). However, the jagged lines are gone, replaced by smoother ones, and the number of artifacts is noticeably reduced, especially in the edge image.
Because convolution is associative, the Gaussian filter can first be convolved with the derivative operator, and the resulting derivative-of-Gaussian filter applied to the image in a single pass. Here, one expects, and sees below, the same results as above.
[Figures: derivative-of-Gaussian filters and the resulting derivative, gradient magnitude, and edge images]
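The associativity claim can be checked numerically; this sketch (kernel size and sigma are assumptions) confirms that blurring and then differentiating equals a single convolution with the derivative-of-Gaussian filter:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
img = rng.random((32, 32))

dx = np.array([[1.0, -1.0]])              # finite difference in X
x = np.arange(9) - 4
g = np.exp(-x**2 / (2 * 1.5**2))
G = np.outer(g, g) / np.outer(g, g).sum() # 9x9 Gaussian (assumed size/sigma)

dog = convolve2d(G, dx)                   # derivative-of-Gaussian filter
blurred_then_diff = convolve2d(convolve2d(img, G), dx)  # two passes
single_pass = convolve2d(img, dog)                      # one pass
```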
Here, an image is sharpened by adding back multiples of its high frequencies. The high frequencies are obtained by subtracting the low frequencies (a Gaussian-filtered version of the image) from the image itself.
[Figures: original images, their high frequencies, and the sharpened results]
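This is the unsharp-masking formula; a sketch, with the amount alpha and blur sigma as assumed parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(img, alpha=1.0, sigma=2.0):
    """Unsharp masking: img + alpha * (img - lowpass(img)); alpha, sigma assumed."""
    low = gaussian_filter(img, sigma)  # low frequencies only
    high = img - low                   # what the blur removed
    return img + alpha * high
```

Larger alpha boosts the high frequencies more aggressively; overshoot at strong edges is expected and is what makes the result look crisper.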
Note that sharpening cannot add high frequencies that are not already present in the image. Thus, there will be no sharpening effect if a perfect low-pass filter is applied before sharpening. However, since a Gaussian blur is not a perfect low-pass filter, some high frequencies survive, so a slight sharpening effect remains. In my case, I used a relatively weak Gaussian blur, so some high frequencies remained (visible in the high-frequencies image).
[Figures: blurred image, its remaining high frequencies, and the re-sharpened result]
To create a hybrid image, the low-frequency portion of one image and the high-frequency portion of another image are overlaid.
[Figures: hybrid image inputs and results]
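A sketch of the hybrid construction, assuming images with values in [0, 1]; the two cutoff sigmas are illustrative and would be tuned per image pair:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid(im_low, im_high, sigma_low=6.0, sigma_high=3.0):
    """Keep the low frequencies of im_low and the high frequencies of im_high."""
    low = gaussian_filter(im_low, sigma_low)                # low-pass one image
    high = im_high - gaussian_filter(im_high, sigma_high)   # high-pass the other
    return np.clip(low + high, 0.0, 1.0)                    # overlay and clamp
```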
Consider the frequency analysis of the Walter Dom hybrid below, which shows the effect of the low-pass and high-pass filters in Fourier space. After the low-pass filter, points away from the center and off the vertical and horizontal axes are generally dark. After the high-pass filter, those points appear brighter while the center is somewhat dark. The hybrid image contains frequencies from both images. This also shows that the Gaussian filter is not a perfect low-pass filter.
[Figures: log-magnitude Fourier transforms of the inputs, the filtered images, and the hybrid]
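Visualizations like these are typically the log magnitude of the centered 2-D FFT; a sketch of that transform:

```python
import numpy as np

def log_fft(img):
    # log magnitude of the centered 2-D FFT; the small epsilon avoids log(0)
    return np.log(np.abs(np.fft.fftshift(np.fft.fft2(img))) + 1e-8)
```

After `fftshift`, low frequencies sit at the center of the plot and high frequencies toward the corners, which is why a low-passed image darkens away from the middle.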
The results are not always good. When the two images are very dissimilar and do not align well, the hybrid image will not come out well. Below, I attempted to make a hybrid image out of a car and a horse; the car is visible no matter the viewing distance.
[Figures: car and horse hybrid (failure case)]
The results can be enhanced using color. I found the best results with the low-frequency image in color and the high-frequency image in grayscale (coloring the high-frequency part makes little difference).
[Figures: hybrid images using color]
To create the Gaussian stack, a Gaussian filter is successively applied to the input image, with the filter's sigma value doubling at each level. To create the Laplacian stack, a Gaussian stack is first computed; level i of the Laplacian stack is then the Gaussian stack at level i minus the Gaussian stack at level i+1. Below is a depiction of the Laplacian stack of the oraple (orange and apple) without the multiresolution blending of the next section. Note that the last row is the sum of the Laplacian stack.
[Figures: Laplacian stack of the oraple; the last row is the sum of the stack]
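A sketch of both stacks, with the level count and base sigma as assumptions; keeping the final Gaussian level as the last Laplacian entry is the common convention that makes the stack sum back to the input image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_stack(img, levels=5, sigma=1.0):
    """Successively blur the image; the sigma used doubles at each level."""
    stack = [img]
    for i in range(levels):
        stack.append(gaussian_filter(stack[-1], sigma * 2**i))
    return stack

def laplacian_stack(img, levels=5, sigma=1.0):
    g = gaussian_stack(img, levels, sigma)
    # L_i = G_i - G_{i+1}; appending the final blurred level lets the stack sum to img
    return [g[i] - g[i + 1] for i in range(levels)] + [g[-1]]
```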
Multiresolution blending is done by computing Laplacian stacks of the two input images and a Gaussian stack of the mask (which determines which regions of the images to blend). Level i of the output is then M_i * L1_i + (1 - M_i) * L2_i, where M_i is level i of the mask's Gaussian stack and L1_i, L2_i are level i of the two images' Laplacian stacks. The final image is formed by summing the levels of this output stack. This process is demonstrated below.
[Figures: multiresolution blending of the apple and orange]
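The blending procedure above can be sketched as follows, assuming images and mask with values in [0, 1] and illustrative level count and sigma:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multires_blend(im1, im2, mask, levels=5, sigma=1.0):
    """Blend im1 and im2 level by level, weighted by the mask's Gaussian stack."""
    def gstack(x):
        s = [x]
        for i in range(levels):
            s.append(gaussian_filter(s[-1], sigma * 2**i))
        return s

    def lstack(x):
        g = gstack(x)
        return [g[i] - g[i + 1] for i in range(levels)] + [g[-1]]

    m, l1, l2 = gstack(mask), lstack(im1), lstack(im2)
    # level i of the output: M_i * L1_i + (1 - M_i) * L2_i
    out = [m[i] * l1[i] + (1 - m[i]) * l2[i] for i in range(levels + 1)]
    return np.clip(sum(out), 0.0, 1.0)  # sum the output stack into the final image
```

Blurring the mask at each level is what hides the seam: coarse levels mix over a wide band while fine levels mix over a narrow one.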
Below are some more examples of image blending (in color). The quality of the results seems to depend on how similar the images are: if the images are similar in color and shape, the blended image turns out well.
[Figures: additional image blending examples in color]
The most interesting and important thing I learned in this project was that images can be thought of as collections of frequencies, and that those frequencies can be manipulated in striking ways (hybrid images, image blends, etc.).