Student Information

Ian Albuquerque Raymundo da Silva ian.albuquerque [at] berkeley.edu

International student participating in the Brazilian Scientific Mobility Program (BSMP) at the University of California, Berkeley, for the 2015-2016 academic year. Enrolled as a Computer Science Extension student. Born and raised in Rio de Janeiro, Brazil. Undergraduate student at the Pontifical Catholic University of Rio de Janeiro, studying Computer Engineering and Mathematics. Lover of traditional and digital art, currently interested in Computer Graphics and Artificial Intelligence - but who knows what new field of study I might fall in love with! Trying to learn as much as I can during this one year here at Berkeley.

Part 0 (Warmup): Sharpening images with high pass filters

The main idea behind sharpening an image is to use a High Pass filter to extract the details of the original image and then add those details back to the source, enhancing them. For the High Pass filter I used the technique behind the Laplacian Filter: first, I blur the image with a Gaussian filter (a Low Pass filter); then, I subtract the blurred result from the original image to obtain the details. This process is equivalent to applying a Laplacian Filter.
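The sharpening step described above (often called unsharp masking) can be sketched as follows. This is a minimal illustration assuming float images in [0, 1]; the `sigma` and `alpha` parameters are illustrative defaults, not necessarily the exact values used for the results shown here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(image, sigma=2.0, alpha=1.0):
    """Sharpen by adding back the high frequencies of the image.

    The details are the residual of a Gaussian (low pass) blur, which is
    equivalent to applying a Laplacian-style high pass filter.
    """
    blurred = gaussian_filter(image, sigma)
    details = image - blurred                 # high pass: original minus low pass
    return np.clip(image + alpha * details, 0.0, 1.0)
```

For a color image, the same function is simply applied to each channel separately.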

For a color image, we repeat this process for each of the color channels. The blur size of the Gaussian used for the following result was 2 pixels. The result was pretty good: the cat and the fence look more vivid in the second picture.

Sharpened Image (After): (sharp_cat.jpg)

The images below are the High Frequency versions of the original image, one for each color channel. They are the result of subtracting a blurred version of the cat (blurred with the Gaussian Filter) from the original version.

Part 1: Hybrid Images

For this part of the project we will create a hybrid version of two images by merging the high frequencies of one with the low frequencies of the other. Take as an example the two iconic paintings below: "The Birth of Venus" and "Mona Lisa".

Mona Lisa: (mona.jpg)

Using grayscale versions of those images, the first step in creating their hybrid is aligning them. For that, we ask the user for two reference points in each image. Using a function provided with the assignment, the reference points are aligned.

Then, we create a High Frequency version of the first image (by subtracting a blurred version of it from the original image) and a Low Frequency version of the second image. The blur factor of the Gaussian used on the first image was 2 pixels, while the one used on the second image was 4 pixels. By manual trial and error, those values proved to be good ones for images of the given sizes (larger images required higher values for this factor).
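The filtering and combination steps above can be sketched like this, assuming two grayscale float images in [0, 1] that have already been aligned to the same shape (the sigma values match the ones mentioned above):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid(high_img, low_img, sigma_high=2.0, sigma_low=4.0):
    """Merge the high frequencies of one image with the low of another.

    high_img keeps its details (visible up close); low_img keeps its
    coarse structure (visible from far away).
    """
    high = high_img - gaussian_filter(high_img, sigma_high)   # high pass
    low = gaussian_filter(low_img, sigma_low)                 # low pass
    return np.clip(high + low, 0.0, 1.0)
```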

The result of the alignment and the manipulation with filters is displayed below.

Mona After Filter: (gray_low_frequency_mona.jpg)

For the final image, we just add the High Frequency image and the Low Frequency image together. I have cropped the final image for better visualization. The result of this entire process is displayed below:

Vemona Cropped: (gray_vemona_crop.jpg)

One interesting analysis we can make with those images is looking at them in the Frequency Domain. For that, we can compare the Fourier Transforms of the aligned images with the Fourier Transform of the final image. The center of the final image's spectrum should come from the Low Frequency image and its borders from the High Frequency image. You can see this in the images below:
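The spectrum visualizations of this kind are typically the centered log-magnitude of the 2-D FFT; a sketch of how such a view could be computed (the small epsilon is just to avoid log of zero):

```python
import numpy as np

def log_magnitude_spectrum(image):
    """Centered log-magnitude of the 2-D Fourier transform, for display.

    fftshift moves the DC (zero-frequency) component to the center, so low
    frequencies appear in the middle and high frequencies near the borders.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    return np.log(np.abs(spectrum) + 1e-8)
```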

Vemona Fourier: (gray_fourier_vemona.jpg)

The result is an image that looks like Venus from up close and like Mona Lisa when seen from far away. I have called her "Vemona".

Bells and Whistles: Colored Versions

The process for colored images is very similar. Instead of making a hybrid of a single grayscale image, we apply this technique to every color channel of the input images. The resulting image is formed by the hybrid versions of the channels of the original images.
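The per-channel extension can be sketched as below, assuming aligned H x W x 3 float images in [0, 1]. Looping over channels (rather than filtering the 3-D array directly) keeps the Gaussian blur from mixing information across channels:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_color(high_img, low_img, sigma_high=2.0, sigma_low=4.0):
    """Run the grayscale hybrid step independently on each color channel."""
    channels = []
    for c in range(high_img.shape[2]):
        high = high_img[..., c] - gaussian_filter(high_img[..., c], sigma_high)
        low = gaussian_filter(low_img[..., c], sigma_low)
        channels.append(np.clip(high + low, 0.0, 1.0))
    return np.stack(channels, axis=2)
```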

I used the colored versions of both images in this process. The colors do not blend, but they add an interesting effect to the perception of the two images composed together. For this to work, the images need to have similar colors and styles.

Vemona Cropped: (vemona_crop.jpg)

Color images do not have only one Fourier Transform; they have one for each channel. The Fourier Transforms below are the result of transforming every single color channel. This colored version is more interesting because you can see that the Fourier Transform of the output image has a green-ish center and blue borders. This comes from the green center of Mona's Fourier and the blue border of Venus's Fourier.

Results (Colored Versions)

The final colored version is displayed below. This one is my favorite composition because of the faces in the painting - they are mysterious, especially together. Even with bodies that do not align well, the result is pretty good.

Vemona Far:

The following image was my second pick. The alignment here is better and the result is good. However, I find Vemona more appealing. I believe the position of the faces and the skin tones of the characters were relevant to the final composition.

John Stark Far (john_stark.jpg)

The next result was OK-ish. The patterns on the bodies of the animals did not match very well, but it is still possible to get the effect. The penguin is still distinguishable from the shark.

Penark Far (penark.jpg)

The result below was a failure. The fact that Brad Pitt's face and Angelina Jolie's face are very different messed with the perception of the picture. You can see Angelina's face from far away, but the face viewed from up close is not clear.

Part 2: Gaussian and Laplacian Stacks

For this part of the assignment we will analyse the Gaussian Stack and the Laplacian Stack of some images. The Gaussian Stack is obtained by applying the Gaussian filter repeatedly to the original image, while the Laplacian Stack is obtained by subtracting consecutive levels of the Gaussian Stack. Seeing both stacks next to each other allows us to see the different images hidden in the different frequencies of each picture. For color images, we create the stacks for each channel.
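The stack construction described above can be sketched as below (unlike pyramids, stacks keep every level at full resolution). A nice property of appending the final blurred level to the Laplacian Stack is that summing all its levels reconstructs the original image exactly; the `levels` and `sigma` parameters here are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_stacks(image, levels=6, sigma=2.0):
    """Build a Gaussian Stack (repeated blurring, no downsampling) and the
    corresponding Laplacian Stack (differences of consecutive levels)."""
    gaussian_stack = [image]
    for _ in range(1, levels):
        gaussian_stack.append(gaussian_filter(gaussian_stack[-1], sigma))
    laplacian_stack = [a - b for a, b in zip(gaussian_stack, gaussian_stack[1:])]
    laplacian_stack.append(gaussian_stack[-1])  # residual low-pass level
    return gaussian_stack, laplacian_stack
```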

Elephant Charge (elephant_charge.jpg) (Credits: Julia Waltkins, http://www.platris.com/elephant.html)

This image is pretty interesting. In the high frequencies we have this abstract colored painting, while in the low frequencies we have an elephant. And everything was painted by hand - impressive, right? You can see that the further you go in the Gaussian Stack, the more difficult it gets to see the abstract painting. In the Laplacian Stack, you can clearly see the abstract painting in the first layers and the elephant in later layers.

Vemona (vemona.jpg)

This is the analysis of our hybrid image - Vemona. You can see that the further you go in the Gaussian Stack, the closer you get to the blurred version of Mona. In the Laplacian Stack, as expected, the first levels show Venus, which corresponds to the High Frequencies.

John Stark (john_stark.jpg)

John Stark is another example of this. The Low Frequencies contain Ned Stark, while in the High Frequencies it is possible to see Jon Snow.

Part 3: Multiresolution Blending

For blending two images, we create the Laplacian Stacks of both images and the Gaussian Stack of the mask that will be used for blending. Then, we combine the two by adding each level of the stacks, using the values of the mask's Gaussian Stack levels as weights for the sum. The result of this process is a well blended image. This happens because each frequency band is blended with a transition that matches the size of its features.

The stacks used had six levels, and the blur sigma of the Gaussian Filters was 2^(level). The oraple result was good. Both images blended together very well, as expected. They have similar shapes and similar patterns - key features for a good result.
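The blending procedure can be sketched as below, assuming aligned grayscale float images in [0, 1] and a mask in [0, 1] of the same shape; the level-dependent sigma of 2^level follows the writeup. Summing the weighted Laplacian bands telescopes back into a full image, with each band mixed by a progressively blurrier mask:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiresolution_blend(img_a, img_b, mask, levels=6):
    """Blend img_a and img_b using a mask, one frequency band at a time.

    Each Laplacian band is mixed with a matching blurred version of the
    mask, so coarse features get wide transitions and fine detail narrow ones.
    """
    def gaussian_stack(im):
        return [im] + [gaussian_filter(im, 2.0 ** lvl) for lvl in range(1, levels)]

    def laplacian_stack(im):
        g = gaussian_stack(im)
        return [a - b for a, b in zip(g, g[1:])] + [g[-1]]  # residual last

    la, lb = laplacian_stack(img_a), laplacian_stack(img_b)
    gm = gaussian_stack(mask)  # same number of levels as the Laplacian Stacks
    out = sum(m * a + (1.0 - m) * b for m, a, b in zip(gm, la, lb))
    return np.clip(out, 0.0, 1.0)
```

With a mask of all ones the output is just the first image, which is a handy sanity check on the weighting.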

Oraple:

The result below was very good as well. Instead of using a straight line for the mask, I used the silhouette of Rio de Janeiro. One important thing to notice is that the input images have different sky colors. Using the mask close to an edge is kind of a "cheat", because not much blending is required there. However, the result is still good. A straight line would be smoothly blended, but would not look good because of the color difference.

SanRio:

The image below was one of the best results I had. Despite using a complicated mask, the blending between the lion's fur and the monkey's fur was amazing. Their colors are similar and their positions are similar. The only issue is the lion's lighting, which does not match the scene very well. The key point here was the alignment of the mask and the pictures.

Milion:

Another example of a non-trivial mask. The result is pretty interesting - and scary. Even very different animals can blend.

Meagle:

This is the first example that is not very good. Here, unlike in the Rio de Janeiro + San Francisco image, I have blended the skies of the two images. As expected, the different blues of the skies keep the image from looking good. The blending is good, but the perception is not.

Parisa:

The result below has the same problem. The texture of the water is very different, and the blending is not capable of fixing that. At the same time, the colors are not similar either. This creates an image that is not convincing at all; it looks very artificial. Other techniques that match colors would be necessary for a better blending.

Barjungle:

Below we can see the Gaussian and Laplacian Stacks used for blending the lion and monkey images. You can see that different frequencies use differently blurred versions of the mask. This means that each feature of the image is blended according to its size.

Bells and Whistles

All of these processes were done in color.

What I Learned:

I learned that a lot can be explored by decomposing an image into its frequencies. With these techniques, it is possible to manipulate images according to the size of each of their features. At the same time, our human perception is very sensitive to the frequencies of the world we live in. Using that, it is possible to create interesting effects with Fourier Transforms, High Pass filters and Low Pass filters.

Technically, the thing I learned the most from this assignment was using filters to extract key components of images. I also learned a lot about composing different images into a single picture.

Last, But Not Least:

Special thanks to Alexei (Alyosha) Efros, Rachel Albert and Weilun Sun for their help during lectures, office hours and on Piazza.

Website built using Bootstrap.