For my Gaussian filter, I chose a kernel size of 5 and a sigma value of 1.
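As a minimal sketch of how such a kernel can be built (assuming NumPy, with the 2D kernel formed as the outer product of two normalized 1D Gaussians; the function name is my own):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2D Gaussian kernel as the outer product of two 1D Gaussians."""
    ax = np.arange(size) - (size - 1) / 2.0   # e.g. [-2, -1, 0, 1, 2] for size 5
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    g /= g.sum()                              # normalize the 1D Gaussian
    return np.outer(g, g)                     # separable, so the 2D kernel sums to 1

kernel = gaussian_kernel(5, 1.0)
```

Because the kernel sums to 1, convolving with it blurs the image without changing its overall brightness.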
The edges of the partial derivative images on blurred_cameraman look much stronger and smoother, with fewer visual artifacts, than the edges of the partial derivative images on the unblurred cameraman.
Here, the edges in the edge image are much bolder, and there is less noise at the bottom of the image.
Now, I first convolve my Gaussian with D_x and D_y, respectively, to get the derivative of Gaussian (DoG) filters.
The gradient magnitude image and edge image produced with the DoG filters match those produced by first convolving cameraman with the Gaussian and then applying D_x and D_y to the blurred result.
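This equivalence follows from the associativity of convolution: (f * G) * D_x = f * (G * D_x). A quick numerical check, assuming the standard finite-difference operators D_x = [1, -1] and its transpose for D_y:

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    return np.outer(g, g)

G = gaussian_kernel(5, 1.0)
D_x = np.array([[1.0, -1.0]])        # finite difference in x
D_y = np.array([[1.0], [-1.0]])      # finite difference in y

dog_x = convolve2d(G, D_x)           # derivative-of-Gaussian filters
dog_y = convolve2d(G, D_y)

# Blur-then-differentiate equals a single convolution with the DoG filter
# (full convolution is associative).
rng = np.random.default_rng(0)
img = rng.random((32, 32))
blur_then_dx = convolve2d(convolve2d(img, G), D_x)
dog_dx = convolve2d(img, dog_x)
```

So the DoG approach saves one full-image convolution while producing the same gradients.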
A simple way to "sharpen" a blurry image is to add more high frequencies to the original image. To get the high frequencies of an image, you can convolve the original image with a Gaussian filter, a low-pass filter that retains only the low frequencies, to obtain a blurry version of the original image. Then, you can subtract the blurred version from the original image to isolate the high frequencies. From there, you can scale the high frequencies by any constant alpha and add them to the original image to get a "sharpened" image.
You can do all of this sharpening in a single convolution with the unsharp mask filter: sharpened image = f * ((1 + alpha) * e - alpha * g), where e is the unit impulse (identity) filter, f is the image, and g is the Gaussian. I used this unsharp mask filter to get the following "sharpened" images using various values of alpha.
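A sketch of the single-convolution form (helper names are my own; assuming SciPy for the convolution), verified against the explicit blur-subtract-add path:

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    return np.outer(g, g)

def unsharp_filter(size, sigma, alpha):
    """(1 + alpha) * e - alpha * g, with e the unit impulse filter."""
    G = gaussian_kernel(size, sigma)
    e = np.zeros_like(G)
    e[size // 2, size // 2] = 1.0
    return (1 + alpha) * e - alpha * G

rng = np.random.default_rng(1)
img = rng.random((32, 32))
alpha = 1.5
sharp = convolve2d(img, unsharp_filter(5, 1.0, alpha), mode='same', boundary='symm')

# Equivalent two-step path: add scaled high frequencies back to the image.
blur = convolve2d(img, gaussian_kernel(5, 1.0), mode='same', boundary='symm')
sharp_two_step = img + alpha * (img - blur)
```

Both paths give the same result because convolution is linear in the kernel, so the impulse, scaling, and Gaussian terms can be folded into one filter.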
To create a hybrid image of two images, you extract the low frequencies of one image by convolving it with a Gaussian filter, and the high frequencies of the other image by subtracting its Gaussian-blurred version from the original. Then, to get the hybrid image, you simply add the low frequencies of the first image and the high frequencies of the second image together.
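A sketch of that recipe (function and parameter names are my own; typically the low-pass image uses a larger sigma than the high-pass image):

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    return np.outer(g, g)

def hybrid_image(im_low, im_high, sigma_low, sigma_high, size=25):
    """Low frequencies of im_low plus high frequencies of im_high."""
    low = convolve2d(im_low, gaussian_kernel(size, sigma_low),
                     mode='same', boundary='symm')
    blurred_high = convolve2d(im_high, gaussian_kernel(size, sigma_high),
                              mode='same', boundary='symm')
    return low + (im_high - blurred_high)

rng = np.random.default_rng(2)
a, b = rng.random((48, 48)), rng.random((48, 48))
hyb = hybrid_image(a, b, sigma_low=6.0, sigma_high=2.0)
```

Viewed up close the high-frequency image dominates; from far away only the low frequencies remain visible.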
To create a Gaussian stack, I stored the original image in the stack first. Then I convolved the previous image with a Gaussian to get the image for the next level. For each level, I doubled sigma.
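A sketch of the stack construction as described (sigma doubles at every level; unlike a pyramid, there is no downsampling, so every level keeps the original resolution):

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    return np.outer(g, g)

def gaussian_stack(img, levels=5, size=45, sigma=2.0):
    stack = [img]                                      # level 0: original image
    for i in range(levels - 1):
        k = gaussian_kernel(size, sigma * 2 ** i)      # sigma doubles each level
        stack.append(convolve2d(stack[-1], k, mode='same', boundary='symm'))
    return stack

rng = np.random.default_rng(3)
img = rng.random((32, 32))
gstack = gaussian_stack(img, levels=5, size=15, sigma=2.0)
```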
To create a Laplacian stack for an image, I first computed the Gaussian stack for the image. For each level i of the Laplacian stack, the Laplacian image is the level i+1 image of the Gaussian stack subtracted from the level i image of the Gaussian stack. For the last level of the Laplacian stack, the Laplacian image is equal to the last level image of the Gaussian stack.
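A sketch of that construction, assuming a gaussian_stack helper like the one described above (names are my own). A useful property: because the differences telescope and the last level is the last Gaussian level, summing the Laplacian stack exactly reconstructs the original image.

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    return np.outer(g, g)

def gaussian_stack(img, levels, size, sigma):
    stack = [img]
    for i in range(levels - 1):
        k = gaussian_kernel(size, sigma * 2 ** i)
        stack.append(convolve2d(stack[-1], k, mode='same', boundary='symm'))
    return stack

def laplacian_stack(gstack):
    """Level i = gstack[i] - gstack[i+1]; the last level is gstack[-1]."""
    return [gstack[i] - gstack[i + 1] for i in range(len(gstack) - 1)] + [gstack[-1]]

rng = np.random.default_rng(4)
img = rng.random((32, 32))
lstack = laplacian_stack(gaussian_stack(img, 5, 15, 2.0))
```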
Left column is Gaussian stack for apple. Right column is Laplacian stack for apple, and I normalized the images. Used kernel size = 45, sigma = 2, and 5 levels.
Left column is Gaussian stack for orange. Right column is Laplacian stack for orange, and I normalized the images. Used kernel size = 45, sigma = 2, and 5 levels.
To blend two images, left_image and right_image, using a mask, I first created left_image_laplacian_stack and right_image_laplacian_stack. I then created the Gaussian stack for the mask, mask_gaussian_stack. For every level i, I calculated left_blended_image = mask_gaussian_stack[i] * left_image_laplacian_stack[i] and right_blended_image = (1 - mask_gaussian_stack[i]) * right_image_laplacian_stack[i]. The final blended_image is the sum of left_blended_image + right_blended_image accumulated over all levels.
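Putting the pieces together, a sketch of the full blend (helper names are my own; the mask holds values in [0, 1], with 1 where left_image should show through):

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    return np.outer(g, g)

def gaussian_stack(img, levels, size, sigma):
    stack = [img]
    for i in range(levels - 1):
        k = gaussian_kernel(size, sigma * 2 ** i)
        stack.append(convolve2d(stack[-1], k, mode='same', boundary='symm'))
    return stack

def laplacian_stack(gstack):
    return [gstack[i] - gstack[i + 1] for i in range(len(gstack) - 1)] + [gstack[-1]]

def blend(left_image, right_image, mask, levels=5, size=15, sigma=2.0):
    left_lap = laplacian_stack(gaussian_stack(left_image, levels, size, sigma))
    right_lap = laplacian_stack(gaussian_stack(right_image, levels, size, sigma))
    mask_g = gaussian_stack(mask, levels, size, sigma)
    blended_image = np.zeros_like(left_image)
    for i in range(levels):
        # Blurred mask weights each Laplacian band, then bands are summed back up.
        blended_image += mask_g[i] * left_lap[i] + (1 - mask_g[i]) * right_lap[i]
    return blended_image

rng = np.random.default_rng(5)
left, right = rng.random((32, 32)), rng.random((32, 32))
all_left = blend(left, right, np.ones((32, 32)))   # all-ones mask -> left image
```

Blurring the mask more at coarser levels is what hides the seam: low frequencies transition gradually while high frequencies switch over sharply.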
Oraple using kernel size = 45, sigma = 2, and 5 levels.