Niraek Jain-Sharma
In this part of the project, our goal was to detect edges in images, specifically the cameraman image. To do this, we use finite difference operators and convolve the image with them. We convolve the image with D_X = [1, -1] and D_Y = [[1], [-1]], the finite difference operators. From these two images we can then compute the gradient magnitude, which tells us where pixel intensities change; higher values correspond to faster changes. The gradient magnitude is sqrt((df/dx)^2 + (df/dy)^2), where df/dx and df/dy are the images obtained from the convolutions described above. Finally, to create the edges, we merely needed to set a threshold: all pixels below it are set to black, and all pixels above it are set to white.
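The steps above can be sketched in Python. This is a minimal illustration, not the project's actual code: `scipy.signal.convolve2d` stands in for whatever convolution routine was used, and the toy image and threshold value are made up.

```python
import numpy as np
from scipy.signal import convolve2d

def binarized_edges(im, threshold):
    """Edge map via finite differences: convolve with D_x and D_y,
    take the gradient magnitude, then threshold it."""
    D_x = np.array([[1, -1]])    # horizontal finite difference
    D_y = np.array([[1], [-1]])  # vertical finite difference
    df_dx = convolve2d(im, D_x, mode="same")
    df_dy = convolve2d(im, D_y, mode="same")
    gradient = np.sqrt(df_dx**2 + df_dy**2)
    return (gradient > threshold).astype(np.uint8)  # white above, black below

# toy example: a bright square on a dark background
im = np.zeros((8, 8))
im[2:6, 2:6] = 1.0
edges = binarized_edges(im, 0.5)  # edges fire only along the square's border
```

The constant interior of the square has zero gradient, so only the border survives the threshold.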
The finite difference operators, as previously mentioned, are D_X = [1, -1] and D_Y = [[1], [-1]]. First, let's show the cameraman image, and then the D_X, D_Y, gradient, and binarized edges.
Cameraman D_X D_Y Gradient Binarized

As you can notice in the previous part, the edges do come out, but they're very noisy; many pixels seem "choppy." We can fix this by using a Gaussian filter to blur the image first, and then convolving as we did in the previous part!
D_X D_Y Gradient Binarized

We can tell that both D_X and D_Y look a lot blurrier, but the edges outlining the man are thicker. Moreover, the gradient looks more like a loose smoky shadow, whereas previously it consisted of distinct lines. Finally, the actual edges are a lot thicker and more prominent, and the noise from many of the pixels is removed. We can do the same thing with a single convolution by creating derivative of Gaussian (DoG) filters, namely convolving the Gaussian with D_x and D_y and then applying the result to the image. We can visualize the DoG filters and see the following results.
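The claim that blurring and then differentiating equals a single convolution with a DoG filter follows from the associativity of convolution. A small sketch (the Gaussian size and sigma here are arbitrary choices, not the project's actual parameters):

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=9, sigma=1.5):
    """2D Gaussian as the outer product of a normalized 1D Gaussian."""
    ax = np.arange(size) - size // 2
    g1 = np.exp(-(ax**2) / (2 * sigma**2))
    g1 /= g1.sum()
    return np.outer(g1, g1)

D_x = np.array([[1, -1]])
g = gaussian_kernel()
rng = np.random.default_rng(0)
im = rng.random((32, 32))  # stand-in for the cameraman image

# two convolutions: blur the image, then take the finite difference
two_step = convolve2d(convolve2d(im, g), D_x)
# one convolution: build the DoG filter first, then apply it once
dog_x = convolve2d(g, D_x)
one_step = convolve2d(im, dog_x)
# (im * g) * D_x == im * (g * D_x), so the two results are identical
```

This is why the resulting images in the next figure come out the same.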
DoG_X DoG_Y D_X D_Y Gradient Binarized

As we can see, the resulting images are the same.

This part is about "sharpening" images. When we think of sharpening, we usually mean that the edges in the picture stand out more; note that there is usually a sharpening tool in image editors like Photoshop! To do this, we use the formula f + alpha(f - f * g), which can also be written as f * ((1 + alpha)e - alpha * g), where e is a unit impulse. This first subtracts a Gaussian-blurred version of the image from the initial image, which gives us the high-frequency part of the image. Then we add a scalar multiple of that to the original image to get crisper edges. Note that this scalar factor, alpha, can immensely change the level of sharpness.
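A sketch of this unsharp-mask formula, using `scipy.ndimage.gaussian_filter` for the blur; the sigma value is an illustrative assumption, not the one used for the images below.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(im, alpha=1.0, sigma=2.0):
    """Unsharp masking: f + alpha * (f - f * g)."""
    blurred = gaussian_filter(im, sigma)  # f * g, the low frequencies
    details = im - blurred                # the high frequencies
    return im + alpha * details

# a step edge overshoots after sharpening, which is what makes it look crisper
im = np.zeros((16, 16))
im[:, 8:] = 1.0
sharp = sharpen(im, alpha=1.0)
```

With alpha = 0 the formula is a no-op; larger alpha exaggerates the overshoot at edges, which is the "immense" sensitivity mentioned above.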
Cathedral Cathedral Blurred Details Sharpened Image (alpha = 1)

We can see this process on this red RAV4 as well:
Rav4 Rav4 Blurred Details Sharpened Rav4 (alpha = 1)

The difference isn't as stark as in the cathedral, but it can be noticed by looking at the tree right in front of the RAV4 and comparing it with the original image.
Finally, let's see this same process on this red-footed booby I photographed in the Galapagos Islands, with alpha = 2.
Red Footed Booby Red Footed Booby Blurred Details Sharpened Red Footed Booby (alpha = 2)

The difference is clear in both the bird and the leaves.
Lastly, let us start with a blurred version of this beach picture I took in Mendocino, run the same sharpening process, and see if we can recover a sharp image from just the blurred version.
Beach Beach Sharpened from Blur

As we can see, this sharpened version is worse, because we lost much of the information we need in order to get the crisp look.
In this part, we attempt to make hybrid images from two images, combining them into one image that encompasses both. If we look closely at the image, it may look more like one of the two, but from far away it may look like the other.
The way we accomplish this is by combining the high-frequency part of one image with the low-frequency part of the other. The low-pass filter is merely the Gaussian convolved with the image as before, f * g, and the high-pass-filtered version is f - f * g.
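A sketch of the two filters, again assuming Gaussian blurs for the low-pass step; the two cutoff sigmas are placeholders, since in practice they would be tuned per image pair.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_image(im_far, im_near, sigma_low=6.0, sigma_high=3.0):
    """Low frequencies of one image plus high frequencies of another.
    im_far dominates from a distance, im_near up close."""
    low = gaussian_filter(im_far, sigma_low)               # f * g
    high = im_near - gaussian_filter(im_near, sigma_high)  # f - f * g
    return low + high

rng = np.random.default_rng(0)
a = rng.random((64, 64))  # stand-in for a grayscale photo
```

One sanity check on the formulas: hybridizing an image with itself at equal sigmas gives back the original, since f * g + (f - f * g) = f.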
Let us see how the process works with this picture of Derek and a cat named Nutmeg.
Derek Nutmeg Nutek (or Dermeg)

Let us look at the frequency-domain analysis for this process of using low-pass and high-pass filters, using the log magnitude of the Fourier transform.
Derek Nutmeg Derek (Filtered) Nutmeg (Filtered) Hybrid

In this frequency-domain analysis we can see specifically how different areas are filtered out. For instance, in Derek's image most of the pixels in the background are filtered out, as we can see with the white stripes. Moreover, Nutmeg's low-frequency areas are filtered as well! Now let's try to make hybrid images of Gandalf and Dumbledore; after all, they're both old wizards with beards!
Dumbledore Gandalf Dumbledolf/Gandore (Hybrid Grayscale) Dumbledolf/Gandore (Hybrid Color/Bells and Whistles)

It looks like Dumbledolf with a staff! Finally, let's try to make a hybrid image of a leopard and a cheetah.
Leopard Cheetah Leopah (Hybrid Grayscale, somewhat a failure) Leopah (Hybrid Color, somewhat a failure)

In this part, we attempt to "blend" images together. To do this, we make use of Gaussian and Laplacian stacks. As before, the Gaussian stack is created by convolving the image with a Gaussian repeatedly. Then we get the Laplacian stack by taking consecutive differences between the images in the Gaussian stack, with the last image being the same as the last Gaussian. I used 10 levels, i.e. 10 successive blurrings. To get the original image back, we merely sum up the Laplacians, since the sum of differences of the Gaussians telescopes.
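The two stacks and the telescoping reconstruction can be sketched as follows; repeatedly applying `gaussian_filter` with a fixed sigma is an assumption about the blurring schedule, not the project's exact parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_stack(im, levels=10, sigma=2.0):
    """Repeatedly blur the image; unlike a pyramid, no downsampling."""
    stack = [im]
    for _ in range(levels - 1):
        stack.append(gaussian_filter(stack[-1], sigma))
    return stack

def laplacian_stack(g_stack):
    """Consecutive differences, keeping the final Gaussian as the last level."""
    lap = [g_stack[i] - g_stack[i + 1] for i in range(len(g_stack) - 1)]
    lap.append(g_stack[-1])
    return lap

rng = np.random.default_rng(0)
im = rng.random((32, 32))  # stand-in for the apple/orange images
# the sum telescopes: (g0-g1) + (g1-g2) + ... + (g8-g9) + g9 = g0 = im
recovered = sum(laplacian_stack(gaussian_stack(im)))
```

Every intermediate Gaussian cancels in the sum, which is why the reconstruction is exact.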
See below for the original apple and orange images that we will eventually blend together. Of course, we could blend these images by simply taking the left part of the apple and the right part of the orange, but then there would be a clear line in the middle, and it would not look "blended." Thus, we use this Gaussian/Laplacian stack method, which we will finalize in the next part.
Apple Gaussian Stack
Apple Laplacian Stack (note the last image is the same as the last image of the Gaussian stack!)
Orange Gaussian Stack
Orange Laplacian Stack
Gaussian Mask (Note how the center line gets progressively blurrier)
Apple Images w/ Mask
Orange Images w/ Mask
Complete Images w/ Mask
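Putting the pieces together, the masked combination shown above can be sketched like this: blend each pair of Laplacian levels under the correspondingly blurred mask from the mask's Gaussian stack, then sum everything. The level count and sigma are placeholders, as before.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_stack(im, levels=10, sigma=2.0):
    stack = [im]
    for _ in range(levels - 1):
        stack.append(gaussian_filter(stack[-1], sigma))
    return stack

def laplacian_stack(g_stack):
    return [g_stack[i] - g_stack[i + 1]
            for i in range(len(g_stack) - 1)] + [g_stack[-1]]

def blend(im_a, im_b, mask, levels=10, sigma=2.0):
    """Multiresolution blending: combine Laplacian levels under a
    progressively blurrier Gaussian-stack mask, then sum the levels."""
    la = laplacian_stack(gaussian_stack(im_a, levels, sigma))
    lb = laplacian_stack(gaussian_stack(im_b, levels, sigma))
    gm = gaussian_stack(mask, levels, sigma)
    return sum(m * a + (1 - m) * b for a, b, m in zip(la, lb, gm))

rng = np.random.default_rng(0)
apple = rng.random((32, 32))   # stand-ins for the real photos
orange = rng.random((32, 32))
mask = np.zeros((32, 32))
mask[:, :16] = 1.0             # left half from one image, right from the other
result = blend(apple, orange, mask)
```

Because the mask gets blurrier at coarser levels, the seam is wide for low frequencies and narrow for high frequencies, which is what hides the hard line down the middle.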
Now let's try some new images, for instance take these two images of a path, taken in spring and in winter, and let's combine them.
Spring Winter Mask Spring (Grayscale) Winter (Grayscale) Final Combined Image (Grayscale)

Finally, let us use a different mask, one with an oval shape, to show that the blending can work with differently shaped masks. We will put a lake in a desert for any thirsty animals wandering by!
Lake Desert Mask Lake (Grayscale) Desert (Grayscale) Final Combined Desert w/ Lake (grayscale)