To sharpen an image, I ran the image through a Gaussian filter, then subtracted that result from the original image. This left only the high frequencies of the image. To create the sharpened image, I added these high frequencies back to the original image (the unsharp masking technique).
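This sharpening step can be sketched in a few lines. A minimal version, assuming grayscale images with values in [0, 1]; the blur width and sharpening strength here are illustrative placeholders, not the values actually used:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(image, sigma=2.0, alpha=1.0):
    """Unsharp masking: extract the high frequencies and add them back, scaled by alpha."""
    low = gaussian_filter(image, sigma)        # blurred (low-pass) version
    high = image - low                         # high frequencies only
    return np.clip(image + alpha * high, 0.0, 1.0)

# tiny grayscale test image with one sharp vertical edge
img = np.zeros((8, 8))
img[:, 4:] = 1.0
out = sharpen(img, sigma=1.0, alpha=0.5)
```

Note that a constant image has no high frequencies, so it passes through unchanged.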
For hybrid images, I added the low-pass-filtered version of one image to the high-pass-filtered version of another. This results in an image that looks like one thing from far away and another from close up. Here are several examples of the results:
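The hybrid construction is just the sum of the two filtered images. A sketch, assuming aligned grayscale images in [0, 1]; the two cutoff sigmas are hypothetical placeholders that would be tuned per image pair:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid(im_far, im_near, sigma_low=6.0, sigma_high=3.0):
    """Low frequencies of im_far (seen from far away) plus
    high frequencies of im_near (seen from close up)."""
    low = gaussian_filter(im_far, sigma_low)
    high = im_near - gaussian_filter(im_near, sigma_high)
    return np.clip(low + high, 0.0, 1.0)
```

A larger sigma_low blurs the "far" image more aggressively, which controls the distance at which the illusion flips.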
Here is the Fourier analysis of these pictures:
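These frequency visualizations come from the 2D FFT. A sketch of the log-magnitude spectrum computation (the small epsilon is only there to avoid taking log of zero):

```python
import numpy as np

def log_spectrum(image):
    """Log-magnitude of the centered 2D FFT, for visualizing frequency content."""
    F = np.fft.fftshift(np.fft.fft2(image))
    return np.log(np.abs(F) + 1e-8)
```

Low frequencies end up in the center of the plot after the fftshift, so a low-passed image shows a bright central blob while a high-passed one shows energy spread toward the edges.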
In this portion, I created Gaussian and Laplacian stacks and applied them to images with interesting frequency makeups. Since each layer of the Laplacian stack acts like a band-pass filter, it breaks the picture down into frequency bands. This has an interesting effect on hybrid images: it isolates just the high frequencies or just the low ones, displaying a different image at different levels of the stack.
Gaussian stack
Laplacian stack
Gaussian stack
Laplacian stack
Gaussian stack
Laplacian stack
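The two stacks shown above can be sketched as follows, assuming grayscale images and a fixed per-level blur (unlike a pyramid, a stack never downsamples). Because the Laplacian levels are differences of adjacent Gaussian levels, summing the whole Laplacian stack recovers the original image exactly:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_stack(image, levels=5, sigma=2.0):
    """Repeatedly blur without downsampling; each level keeps lower frequencies."""
    stack = [image]
    for _ in range(levels - 1):
        stack.append(gaussian_filter(stack[-1], sigma))
    return stack

def laplacian_stack(image, levels=5, sigma=2.0):
    """Differences of adjacent Gaussian levels act like band-pass filters;
    the final level holds the residual low frequencies."""
    g = gaussian_stack(image, levels, sigma)
    return [g[i] - g[i + 1] for i in range(levels - 1)] + [g[-1]]
```

The telescoping sum of the Laplacian levels is what makes reconstruction trivial later on.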
To blend two images together more seamlessly, I used multiresolution blending. I created a Laplacian stack for each picture. In the simplest case, the mask is an image that is half white and half black. I ran the mask through a Gaussian filter and applied it to each image by elementwise multiplication: one image gets the mask, and the other gets its inverse. Then, I combined the levels of the stack back together to reconstruct the blended image.
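A minimal sketch of this procedure, assuming grayscale images and a soft mask with values in [0, 1] (stack depth and blur width are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_stack(image, levels, sigma):
    stack = [image]
    for _ in range(levels - 1):
        stack.append(gaussian_filter(stack[-1], sigma))
    return stack

def laplacian_stack(image, levels, sigma):
    g = gaussian_stack(image, levels, sigma)
    return [g[i] - g[i + 1] for i in range(levels - 1)] + [g[-1]]

def blend(im_a, im_b, mask, levels=5, sigma=2.0):
    """Blend each Laplacian band using a progressively blurred mask, then
    sum the blended levels to reconstruct the final image."""
    la = laplacian_stack(im_a, levels, sigma)
    lb = laplacian_stack(im_b, levels, sigma)
    gm = gaussian_stack(mask, levels, sigma)
    return sum(m * a + (1 - m) * b for m, a, b in zip(gm, la, lb))
```

Blurring the mask more at the coarse levels is what lets low frequencies transition over a wide band while high frequencies stay crisp near the seam.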
Laplacian stacks for each image to be blended
Other Images
The purpose of this toy example was to reconstruct the original image by solving an optimization as a least squares problem. The bulk of the work went into constructing the system of linear equations from the x-gradient constraints, the y-gradient constraints, and a constraint pinning the top-left pixel. A least squares solver then gives the new pixel values, which are used to reconstruct the image.
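The system described above can be sketched as a sparse least squares problem; this is an illustrative version, not the exact code. Each x- and y-gradient of the reconstruction is constrained to match the corresponding gradient of the source image s, and one extra row pins the top-left pixel (without it, the solution is only determined up to a constant offset):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def toy_reconstruct(s):
    """Recover s from its x/y gradients plus a single corner constraint."""
    h, w = s.shape
    idx = np.arange(h * w).reshape(h, w)   # flat index of each pixel
    rows, cols, vals, b = [], [], [], []
    eq = 0
    # x-gradient constraints: v[y, x+1] - v[y, x] = s[y, x+1] - s[y, x]
    for y in range(h):
        for x in range(w - 1):
            rows += [eq, eq]; cols += [idx[y, x + 1], idx[y, x]]; vals += [1, -1]
            b.append(s[y, x + 1] - s[y, x]); eq += 1
    # y-gradient constraints: v[y+1, x] - v[y, x] = s[y+1, x] - s[y, x]
    for y in range(h - 1):
        for x in range(w):
            rows += [eq, eq]; cols += [idx[y + 1, x], idx[y, x]]; vals += [1, -1]
            b.append(s[y + 1, x] - s[y, x]); eq += 1
    # pin the top-left pixel to its original value
    rows.append(eq); cols.append(idx[0, 0]); vals.append(1)
    b.append(s[0, 0]); eq += 1
    A = sp.csr_matrix((vals, (rows, cols)), shape=(eq, h * w))
    v = lsqr(A, np.array(b))[0]
    return v.reshape(h, w)
```

Because the constraints exactly describe the source gradients, the least squares solution reproduces the input image.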
This part was very similar to part 2.1, but here we constrained the gradients between each pixel and its four neighbors to create the blended image. The goal was to copy the source gradients into the target region while staying consistent with the target values at the region's boundary, for the most even blending. To do this, I modeled an equation Ax = b and solved for the new pixel values x. If a pixel is within the mask, I use the gradient constraints to solve for its value; if it is outside the mask, I simply set it to the value of that pixel in the target image. This works much better than the multiresolution blending we did in part 1 because it adjusts the colors of the pasted region to match its surroundings, rather than just blending at the seam.
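A sketch of that per-pixel constraint setup, assuming single-channel images and a boolean mask (each color channel would be solved independently). For a masked pixel, each of its four neighbor equations matches the source gradient; when the neighbor lies outside the mask, its known target value moves to the right-hand side:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def poisson_blend(source, target, mask):
    """For each masked pixel, match the source gradient to all four neighbors;
    neighbors outside the mask contribute the known target value instead."""
    h, w = target.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, vals, b = [], [], [], []
    eq = 0
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w):
                    continue
                rows.append(eq); cols.append(idx[y, x]); vals.append(1)
                rhs = source[y, x] - source[ny, nx]   # source gradient
                if mask[ny, nx]:
                    rows.append(eq); cols.append(idx[ny, nx]); vals.append(-1)
                else:
                    rhs += target[ny, nx]             # boundary value is known
                b.append(rhs); eq += 1
    A = sp.csr_matrix((vals, (rows, cols)), shape=(eq, h * w))
    v = lsqr(A, np.array(b))[0].reshape(h, w)
    out = target.copy()
    out[mask] = v[mask]   # pixels outside the mask keep the target values
    return out
```

When source and target are identical, the constraints are satisfied by the target itself, so the blend leaves the image unchanged.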
Failure case: the backgrounds are too different in texture. The gradients from the grass are so different from those of the moon's surface that it is very difficult to blend them together seamlessly.
Comparing the two blending techniques, Poisson blending is generally better because it will "adjust the lighting" and allows a sloppy mask that includes some of the source image's background. However, in the case of the oraple, I think the Laplacian pyramid blending works better because the seam is very smooth and the lighting and style of the two pictures already match.