1.1 - Image Sharpening

For this part of the project, we sharpen an image by blurring it, subtracting the blurred version from the original to extract the fine detail, and then adding a scalar multiple of that detail back to the original. This is the unsharp masking technique discussed in class. I used a Gaussian blur with sigma = 15 and a detail weight of alpha = 0.45. The unsharpened image is shown on the left and the sharpened image on the right.

[Figures: cats, unsharpened (left) and sharpened (right)]
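Here is a minimal sketch of the unsharp masking step, assuming a float image in [0, 1] and scipy's gaussian_filter for the blur; the function name and signature are mine, not the exact code used here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=15, alpha=0.45):
    """Sharpen an image via unsharp masking (img: float in [0, 1], (H, W) or (H, W, 3))."""
    # Low-pass the image with a Gaussian blur (blur each channel independently for color).
    if img.ndim == 3:
        blurred = np.stack([gaussian_filter(img[..., c], sigma)
                            for c in range(img.shape[2])], axis=-1)
    else:
        blurred = gaussian_filter(img, sigma)
    # The "detail" is everything the blur removed.
    detail = img - blurred
    # Add a scaled copy of the detail back and clip to the valid range.
    return np.clip(img + alpha * detail, 0, 1)
```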

1.2 - Hybrid Images

For this part of the project, we blended two images to create interesting hybrids: one image is low-passed (blurred with a Gaussian), the other is high-passed (only its detail is kept, using the technique from 1.1), and the two results are then averaged together. I used sigma = 6 for the low-pass and sigma = 8 for the high-pass.
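A minimal sketch of this step, assuming aligned grayscale float images in [0, 1] and scipy's gaussian_filter; the function name and defaults are mine:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_image(im_low, im_high, sigma_low=6, sigma_high=8):
    """Average a low-passed copy of im_low with a high-passed copy of im_high."""
    low = gaussian_filter(im_low, sigma_low)                 # keep only coarse structure
    high = im_high - gaussian_filter(im_high, sigma_high)    # keep only fine detail
    return np.clip((low + high) / 2, 0, 1)                   # average the two, as described above
```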

Tiger Woods

Here are the FFTs of both source images before blending them

[Figures: Tiger FFT (left), Woods FFT (right)]
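These spectra are the usual log-magnitude displays of the 2D FFT; a sketch of how such a visualization can be computed, assuming a grayscale float image (the helper name is mine):

```python
import numpy as np

def log_magnitude_fft(gray):
    """Log-magnitude spectrum of a grayscale image, shifted so DC sits at the center."""
    return np.log(np.abs(np.fft.fftshift(np.fft.fft2(gray))) + 1e-8)
```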

Here are the FFTs of the high-pass (left) and the low-pass (right)

[Figures: Tiger high-pass FFT (left), Woods low-pass FFT (right)]

Here is the blended image, along with its FFT

[Figures: Tiger/Woods hybrid and its FFT]

Here is another example, blending Lionel Messi and Cristiano Ronaldo

[Figure: Ronaldo/Messi hybrid]

And here is a failure: trying to blend Mitch McConnell with a turtle. This one almost certainly failed because both the alignment and the relative sizes of the faces were completely off, resulting in a very poor blend.

[Figure: turtle/McConnell hybrid]

1.3 - Laplacian and Gaussian Pyramids

For this part of the project, we had to create Gaussian and Laplacian stacks. A Gaussian stack is built by repeatedly applying a Gaussian blur of increasing magnitude to the image at each layer (without downsampling). A Laplacian stack is built by taking the difference between consecutive levels of the Gaussian stack (each level minus the next, more blurred, level). The last layer of the Laplacian stack is set to the last layer of the Gaussian stack, so that summing the Laplacian stack reconstructs the original image. With 5 layers in the Gaussian stack, starting from the source image and successively applying sigmas of 1, 4, 9, and 16, here are the results.
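A minimal sketch of the stack construction, assuming a grayscale float image (color channels can be processed independently) and scipy's gaussian_filter; the function name is mine:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_and_laplacian_stacks(img, sigmas=(1, 4, 9, 16)):
    """Build a Gaussian stack (no downsampling) and the matching Laplacian stack."""
    # Each Gaussian level blurs the previous one with the next sigma in the schedule.
    g_stack = [img]
    for sigma in sigmas:
        g_stack.append(gaussian_filter(g_stack[-1], sigma))
    # Laplacian levels are differences of consecutive Gaussian levels; the last level
    # is the last Gaussian level, so the Laplacian stack sums back to the original image.
    l_stack = [g_stack[i] - g_stack[i + 1] for i in range(len(g_stack) - 1)]
    l_stack.append(g_stack[-1])
    return g_stack, l_stack
```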

Salvador Dali's Lincoln: Gaussian Stack

[Figures: 5 levels of the Gaussian stack]

Salvador Dali's Lincoln: Laplacian Stack

[Figures: 5 levels of the Laplacian stack]

Mona Lisa: Gaussian Stack

[Figures: 5 levels of the Gaussian stack]

Mona Lisa: Laplacian Stack

[Figures: 5 levels of the Laplacian stack]

Hybrid Image: Gaussian Stack

Hybrid Image: Laplacian Stack

1.4 - Multiresolution Blending

For this part of the project, we blended two images together by masking a portion of one image and blending in the corresponding part of the other image through that mask, using the Gaussian stack of the mask and the Laplacian stacks of both images. The equation used is the same as that in the paper: LS[l](i, j) = GR[l](i, j) * LA[l](i, j) + (1 - GR[l](i, j)) * LB[l](i, j), where (i, j) are pixel coordinates, l is the layer in the stack, GR is the Gaussian stack of the mask, and LA and LB are the Laplacian stacks of the two images. The full blended image is reconstructed by summing up the levels of the Laplacian stack produced by the equation above. I did this part in color (bells and whistles).
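Here is a minimal sketch of the blending step, assuming grayscale float images (the color version simply runs it per channel) and reusing the stack construction from 1.3; the sigma schedule and function name are mine:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multires_blend(im_a, im_b, mask, sigmas=(1, 4, 9, 16)):
    """Blend im_a and im_b through mask (float in [0, 1], 1 where im_a shows)."""
    def g_stack(x):
        stack = [x]
        for s in sigmas:
            stack.append(gaussian_filter(stack[-1], s))
        return stack

    def l_stack(g):
        return [g[i] - g[i + 1] for i in range(len(g) - 1)] + [g[-1]]

    GR = g_stack(mask)                      # Gaussian stack of the mask
    LA, LB = l_stack(g_stack(im_a)), l_stack(g_stack(im_b))
    # LS[l] = GR[l] * LA[l] + (1 - GR[l]) * LB[l]; summing the stack reconstructs the blend.
    LS = [gr * la + (1 - gr) * lb for gr, la, lb in zip(GR, LA, LB)]
    return np.clip(sum(LS), 0, 1)
```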

The Orapple

Winter vs Autumn

Zlatan watching you with Rooney's eyes (Irregular Mask)

The hand and the eye (Irregular Mask)

Part 2

For this part of the project, the goal was to explore gradient-domain processing, which lets us splice portions of different images together and blend them seamlessly, without the seams or image structure immediately revealing that they came from two different images. This is done by keeping the gradients of the region copied from the source image as close as possible to their original values, while making the pixels along that region's border match the surrounding pixels of the target image. This is called Poisson blending.

2.1 - We begin with a toy problem, in which we try to reconstruct a grayscale image of Woody and Buzz from its gradients
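As a sketch of how such a reconstruction can be set up, assuming (as is standard for this kind of toy problem) that we are given the image's x and y gradients plus one pixel's intensity, and solving with scipy's sparse lsqr; the function name is mine:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def toy_reconstruct(s):
    """Recover a grayscale image from its x/y gradients plus one pixel intensity."""
    H, W = s.shape
    idx = np.arange(H * W).reshape(H, W)      # map each pixel (y, x) to a variable index

    rows, cols, vals, b = [], [], [], []
    eq = 0
    # x-gradient equations: v[y, x+1] - v[y, x] = s[y, x+1] - s[y, x]
    for y in range(H):
        for x in range(W - 1):
            rows += [eq, eq]; cols += [idx[y, x + 1], idx[y, x]]; vals += [1, -1]
            b.append(s[y, x + 1] - s[y, x]); eq += 1
    # y-gradient equations: v[y+1, x] - v[y, x] = s[y+1, x] - s[y, x]
    for y in range(H - 1):
        for x in range(W):
            rows += [eq, eq]; cols += [idx[y + 1, x], idx[y, x]]; vals += [1, -1]
            b.append(s[y + 1, x] - s[y, x]); eq += 1
    # Pin the top-left pixel so the least-squares solution is unique.
    rows.append(eq); cols.append(idx[0, 0]); vals.append(1); b.append(s[0, 0]); eq += 1

    A = sp.csr_matrix((vals, (rows, cols)), shape=(eq, H * W))
    v = lsqr(A, np.asarray(b))[0]
    return v.reshape(H, W)
```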

2.2

My favorite result, shown below, splices the moon into a segment above the US Capitol. First, I generated the appropriate masks using the provided starter code. Then I constructed a pixel mapping in which each pixel maps to an integer index, keeping only the pixels covered by the mask. I then built a set of least-squares equations for the x and y gradients, identified the border pixels with brute-force for-loops, added extra constraints forcing the border pixels to exactly match the corresponding area of the target, plugged everything into a sparse matrix, and called a sparse least-squares solver on it. Finally, I spliced the result into the masked region of the target image. The images, in order, are: the target (the Capitol), the source (the moon), the non-blended concatenation of the images, and the blended result.
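As described above, my implementation uses explicit equality constraints on the border pixels; the sketch below uses the more common variant that folds the known target values directly into the gradient equations at the border. It is a single-channel sketch under those assumptions (the function name and the lsqr solver choice are mine), not the exact code used for these results:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def poisson_blend(source, target, mask):
    """Poisson-blend one channel of source into target where mask (boolean) is True."""
    H, W = target.shape
    ys, xs = np.nonzero(mask)
    var = {(y, x): i for i, (y, x) in enumerate(zip(ys, xs))}   # pixel -> unknown index

    rows, cols, vals, b = [], [], [], []
    eq = 0
    for (y, x), i in var.items():
        for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):       # 4-neighbours
            ny, nx = y + dy, x + dx
            if not (0 <= ny < H and 0 <= nx < W):
                continue
            grad = source[y, x] - source[ny, nx]                # source gradient to preserve
            rows.append(eq); cols.append(i); vals.append(1)
            if (ny, nx) in var:                                 # neighbour is also unknown
                rows.append(eq); cols.append(var[(ny, nx)]); vals.append(-1)
                b.append(grad)
            else:                                               # border: neighbour fixed to target
                b.append(grad + target[ny, nx])
            eq += 1

    A = sp.csr_matrix((vals, (rows, cols)), shape=(eq, len(var)))
    v = lsqr(A, np.asarray(b))[0]

    result = target.copy()                                      # splice solution into the target
    result[ys, xs] = np.clip(v, 0, 1)
    return result
```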

Here is a result of a penguin skiing. On the left is the target, followed by the source, the spliced (non-blended) image, and then the blended image. Note the clear gray seam around the penguin in the non-blended image that disappears after blending.

Here is a result of a tiger chasing a kiwibot -- this did not go as well. On the left is the target, followed by the source, the spliced (non-blended) image, and then the blended image. Unfortunately, because the background was so different in texture, layout, and composition, the blending process did not go nearly as well as hoped.

Here are some more Poisson blending results

Now, we try Poisson blending on the irregular-mask image of a hand and an eye blended together in part 1.4. On the left is the result of multiresolution blending, and on the right is the result of Poisson blending. Because Poisson blending tries to match the border of the pasted region to the background colors of the target, the color of the eye changes: the solver shifts the border colors to match the hand while preserving the gradients of the source image as best it can. Poisson blending therefore seems best when we want something to blend seamlessly into the background. Multiresolution blending, in contrast, takes both images and blends them together without matching background colors; it blurs the transition across the seam instead. So, if we want a blend of two images in which we wish to preserve the properties of the source image, multiresolution blending is better. This was actually the part of the project I enjoyed the most: understanding how the different blending methods work, and when one is better than the other, was really enjoyable to learn about.