Image Sharpening

Using an unsharp masking technique, I played around with sharpening images using different alpha values to scale the high-frequency detail extracted with a Laplacian of Gaussian filter. Below are some examples of the resulting pictures.
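For reference, here is a minimal sketch of the sharpening step, assuming a greyscale image stored as a float NumPy array in [0, 1] and scipy's gaussian_filter for the blur (the sigma below is a placeholder, not the exact value I used):

import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, alpha, sigma=2.0):
    # Blur with a Gaussian, keep the removed high-frequency detail,
    # and add it back scaled by alpha to sharpen edges.
    blurred = gaussian_filter(image, sigma)
    detail = image - blurred
    return np.clip(image + alpha * detail, 0.0, 1.0)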

Original Image

Alpha = 0.15

Alpha = 0.5

Original Image

Alpha = 0.50

Alpha = 1.4

Part 1 - Hybrid Images

Hybrid images are static images whose interpretation changes as a function of viewing distance. For this part of the project, the goal was to create hybrid images from a variety of source photos, ranging from greyscale to fully colored RGB ones. To achieve this, I exploited the fact that high frequencies tend to dominate perception when they are available; as the distance from the image increases, however, only the low-frequency part of the signal can be seen. So, by blending the high-frequency portion of one image with the low-frequency portion of another, I can get a hybrid image.

For this part, I wrote code that returns a hybrid image H by combining a high-pass filtered version of image I_1 with a low-pass filtered version of image I_2, where G_1 and G_2 are Gaussian low-pass filters and * denotes convolution:
H = I_1 * (1 - G_1) + I_2 * G_2.
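A minimal sketch of that formulation, assuming aligned greyscale float images and Gaussian blurs standing in for G_1 and G_2 (the cutoff sigmas here are placeholders, chosen per image pair in practice):

import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_image(im1, im2, sigma_high=5.0, sigma_low=7.0):
    # High-pass im1 by subtracting its Gaussian blur: I_1 * (1 - G_1).
    high = im1 - gaussian_filter(im1, sigma_high)
    # Low-pass im2 with a Gaussian: I_2 * G_2.
    low = gaussian_filter(im2, sigma_low)
    # The hybrid H is the sum of the two frequency bands.
    return np.clip(high + low, 0.0, 1.0)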

Kings of the Wilderness (Failure Example)

Best Friends

All Grown Up

Frequency Analysis

To analyze the results, I computed the log magnitude of the 2D Fourier transform for the Spongebob set of images: the Spongebob and Patrick input images, the intermediate high-pass and low-pass filtered images, and the final result. The resulting process is illustrated here:
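The spectra themselves can be computed along these lines (a sketch, assuming greyscale NumPy arrays):

import numpy as np

def log_fft_magnitude(image):
    # Center the zero-frequency component and take the log magnitude so
    # that both strong low frequencies and weak high frequencies show up.
    return np.log(np.abs(np.fft.fftshift(np.fft.fft2(image))) + 1e-8)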

Gaussian and Laplacian Stacks

To further analyze the results, I computed Gaussian and Laplacian stacks for my favorite images. This process highlights what is visible in each frequency band and really neatly breaks the hybrid image back into its original components. The resulting process is illustrated here:
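A sketch of how the stacks can be built (unlike pyramids, there is no downsampling between levels; the level count and sigma are placeholders):

from scipy.ndimage import gaussian_filter

def gaussian_stack(image, levels=5, sigma=2.0):
    # Repeatedly blur without downsampling; each level keeps lower frequencies.
    stack = [image]
    for _ in range(levels):
        stack.append(gaussian_filter(stack[-1], sigma))
    return stack

def laplacian_stack(image, levels=5, sigma=2.0):
    # Each level is the frequency band removed between consecutive Gaussian
    # levels; the final level is the remaining low-pass residual.
    g = gaussian_stack(image, levels, sigma)
    return [g[i] - g[i + 1] for i in range(levels)] + [g[-1]]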

I thought that the hybrid of the lion and the tiger images was the least successful (I suspect because the two animals look so similar that it was hard to tell whether the hybrid succeeded), so I decided to apply the Gaussian and Laplacian stacks to this set of images as well. With this set of transformations, it becomes much more evident that the result is composed of two distinct images of a lion and a tiger:

Finally, I decided to apply this transformation to a famous painting by Salvador Dali called Lincoln in Dalivision, which is a hybrid of Lincoln and Dali's wife, Gala. The painting combines two images, Gala Contemplating the Mediterranean Sea and a portrait of Abraham Lincoln:

Hybrid Images with Colored Photos

Before moving on to the next part, I decided to experiment with the effects of color on the hybrid images. I blended together these two photos of a skeleton and the Mona Lisa:
The results were really interesting, with the colored image showing more prominently in each case where only one image was in color. I thought the best result was the one where only the front image was colorful and the back image was in black and white.

Greyscale

Back Image Colored

Front Image Colored

Full Color

Multiresolution Image Blending

After numerous iterations on my algorithm, including some pretty entertaining failures, I was able to recreate successful multiresolution blending of the example images, first in black and white and finally in color:
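At its core, the blend combines Laplacian stacks of the two images, weighted at each level by a Gaussian stack of the mask; a minimal sketch, reusing the stack helpers from above:

import numpy as np

def multires_blend(im1, im2, mask, levels=5, sigma=2.0):
    # Blend each frequency band separately: high frequencies use a sharp
    # mask, low frequencies a heavily blurred one, then sum the bands.
    l1 = laplacian_stack(im1, levels, sigma)
    l2 = laplacian_stack(im2, levels, sigma)
    gm = gaussian_stack(mask.astype(float), levels, sigma)
    out = sum(gm[i] * l1[i] + (1 - gm[i]) * l2[i] for i in range(levels + 1))
    return np.clip(out, 0.0, 1.0)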

Now that I was getting nice blended images, I could focus on recreating images in color, as suggested in the bells and whistles section, to enhance the effect. In addition, to reduce the ghosting effect, I changed the mask from a step function across the entire image to a more concentrated step function focused on the blending edge:

Next, I decided to play around with more interesting masks, moving away from the horizontal mask used above:

Finally, I illustrated the blending process by saving incremental applications of the Laplacian stack on my favorite set of images and the mask. Here are the results:

Part 2 - Gradient Domain Fusion

For this part of the project, I designed an algorithm that performs seamless editing and blending of different image regions. Below, I illustrate the process using my favorite blended results. First, begin with two desired images: one is labeled the target image (the background onto which we will overlay content), and the other is the source image:

Next, perform pre-processing on both images to align their dimensions. In addition, create a mask of the source image by selecting a polygon outline around the desired shape and specify the desired location of the source image atop the target one:
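For the mask step, here is a sketch of rasterizing a selected polygon outline into a binary mask, assuming scikit-image (the vertex arrays are placeholders for the points actually clicked):

import numpy as np
from skimage.draw import polygon

def polygon_mask(shape, rows, cols):
    # Fill the polygon defined by the (rows, cols) vertices into a boolean
    # mask the same size as the aligned source image.
    mask = np.zeros(shape, dtype=bool)
    rr, cc = polygon(rows, cols, shape)
    mask[rr, cc] = True
    return mask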

Now to the fun part! I set up an optimization by formulating my objective as a least squares problem: find values for the target pixels that maximally preserve the gradients of the source region without changing any of the background pixels, independently for each color channel. I solve for the new pixel values v with this equation:
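In the usual 4-neighbor formulation, with source s, target t, source region S, and neighborhoods N_i, this objective can be written as:

\mathbf{v} = \arg\min_{\mathbf{v}} \sum_{i \in S,\; j \in N_i \cap S} \big((v_i - v_j) - (s_i - s_j)\big)^2 \;+\; \sum_{i \in S,\; j \in N_i \cap \neg S} \big((v_i - t_j) - (s_i - s_j)\big)^2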

With some final post-processing and reshaping of the result arrays for each color channel, plus re-stacking the channels into one matrix, we achieve the final blended result. The naive pasting approach is also shown as a baseline comparison:

Toy Problem - Part 2.1

For the first part, I began by working with a small example to illustrate gradient domain processing and correctly figure out the math involved. I achieved this by computing the x and y gradients from an image s and then using all the gradients, plus one pixel intensity, to reconstruct an image v. Here are the original and reconstructed images side by side:
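A sketch of that toy reconstruction, setting up a single sparse least-squares system with scipy (the loops are kept simple for clarity rather than speed):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def reconstruct(s):
    # Rebuild an image v from the x/y gradients of s plus one pinned
    # intensity, by solving the sparse least-squares system A v = b.
    h, w = s.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, vals, b = [], [], [], []
    eq = 0
    for y in range(h):                      # x-gradient constraints
        for x in range(w - 1):
            rows += [eq, eq]; cols += [idx[y, x + 1], idx[y, x]]
            vals += [1.0, -1.0]; b.append(s[y, x + 1] - s[y, x]); eq += 1
    for y in range(h - 1):                  # y-gradient constraints
        for x in range(w):
            rows += [eq, eq]; cols += [idx[y + 1, x], idx[y, x]]
            vals += [1.0, -1.0]; b.append(s[y + 1, x] - s[y, x]); eq += 1
    # Pin one pixel so the solution is not ambiguous up to a constant offset.
    rows.append(eq); cols.append(idx[0, 0]); vals.append(1.0); b.append(s[0, 0]); eq += 1
    A = sp.csr_matrix((vals, (rows, cols)), shape=(eq, h * w))
    v = lsqr(A, np.array(b))[0]
    return v.reshape(h, w)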

Poisson Blending - Part 2.2

Following the process described above, I experimented with many different images. Below are some of my favorite results!

Straight Out of Space

Target Image

Source Image

Naive Overlay

Final Blended Result

On Top of the World

Target Image

Source Image

Naive Overlay

Final Blended Result

Space Race

Target Image

Source Image

Naive Overlay

Final Blended Result

The Earthlings are Watching

This one is my personal favorite!

Target Image

Source Image

Naive Overlay

Final Blended Result

Tea Time

This is an example of an image that didn't work too well. The color difference between the tea water and the blue swimming pool is pretty dramatic, which results in a very significant and unrealistic change in the skin tone of the swimmers. Even though the gradients are preserved, the outcome of the final blend is unrealistic. I still thought the concept of this photo was pretty neat, so it was fun to play with.

Target Image

Source Image

Naive Overlay

Final Blended Result

Mixed Gradient Blending

Using a similar approach to Poisson Blending, I implemented mixed gradient blending using this equation:
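The only change from the Poisson objective above is that each guidance gradient is taken from whichever image, source or target, has the larger magnitude at that pixel pair (the standard mixed-gradients form):

d_{ij} = \begin{cases} s_i - s_j & \text{if } |s_i - s_j| \ge |t_i - t_j| \\ t_i - t_j & \text{otherwise} \end{cases}

The least squares problem is then solved exactly as before, with d_{ij} replacing (s_i - s_j).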

Here, gradients from both the source and target images are taken into consideration when blending, which sometimes produces better blends. This is particularly obvious when adding objects with holes, or partially transparent ones, on top of a textured or cluttered background.

The Office


Naive Overlay

Poisson Blending

Mixed Gradient Blending

Did You Say Wine?


Naive Overlay

Poisson Blending

Mixed Gradient Blending

Comparing Results of Poisson Blending with Multiresolution Blending

Using an irregular mask, I compared the results of Poisson blending to multiresolution blending. In this case, the multiresolution blend preserves the colors more accurately (as expected), while the Poisson blend makes the image look more like a reflection. I think the multiresolution approach actually works better here because, for this specific set of images, we want the mask region to be fully filled by the source image since it looks better with the sunglasses. Poisson blending would work better when it is important for the edges between the source and target images to transition smoothly.

Target Image

Source Image

Mask

Naive Overlay

Poisson Blending

Multiresolution Blending

Final Thoughts

This project was really awesome. It helped me grasp the concepts behind hybrid images and Poisson blending through experimentation. It was also really neat to demystify the scary looking math into concrete linear algebra that actually made sense. I think my favorite part was getting everything to work after finally working through all the intricate details and then being rewarded with some really awesome images and blends.