CS 194-26: Image Manipulation and Computational Photography


Fun With Frequencies and Gradients

By: Alex Pan


Image Sharpening

As a warm-up for the rest of this project, we will start by performing a relatively simple process: sharpening images. To do this, we will use the unsharp mask filter technique:

sharpened = original + a * (original - gaussian(original))

The basic idea is to use a Gaussian filter to create a blurred image, which we subtract from the original image to get a 'mask'. This mask acts as a high-pass filter: it contains the high-frequency information in the picture, namely the edges and outlines. If we add this mask back to the original image, it emphasizes the edges and makes the image appear sharper.

For the images below, we used a sigma value of 5 for the Gaussian filter. The amount we want to sharpen the image is controlled by a variable alpha (a), which is the weight of the mask. As you'll be able to see below, a higher alpha corresponds to a greater sharpening effect. Past a certain point (around a = 2 or so for this image), the sharpening effect is too extreme and the output becomes distorted.


Original Image


Unsharp Mask



a = 0.5


a = 1


a = 1.5


a = 2


a = 5


a = 10



Hybrid Images

Hybrid images are static images that change in interpretation as a function of the viewing distance. The basic idea is that high frequency tends to dominate perception when it is available, but, at a distance, only the low frequency (smooth) part of the signal can be seen. By blending the high frequency portion of one image with the low-frequency portion of another, you get a hybrid image that leads to different interpretations at different distances.

To compute this, we apply a low-pass filter to one image and a high-pass filter to the other, then add them together. The sigma values for each image vary, so we manually pick the ones that produce the best overall result. The dog-bear hybrid is my favorite, so I included the Fourier analysis for that process. Also, we display the hybrid images at different sizes to simulate looking from closer or farther away.
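The computation itself is simple; a sketch under the same assumptions as before (grayscale float images in [0, 1], scipy's gaussian_filter, illustrative names):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_image(im_low, im_high, sigma_low, sigma_high):
    """Combine the low frequencies of one image with the high
    frequencies of another."""
    low = gaussian_filter(im_low, sigma_low)                # low-pass
    high = im_high - gaussian_filter(im_high, sigma_high)   # high-pass
    return np.clip(low + high, 0, 1)
```

For the dog-bear pair below, this would be called with sigma_low = 6 and sigma_high = 5.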

Dog-Bear

Dog (Original)

Dog (Low Pass, sigma = 6)

Bear (Original)

Bear (High Pass, sigma = 5)


Hybrid Image (Close Up)

Hybrid Image (Far Away)


Dog-Bear (Fourier Analysis)

Dog (Original)

Dog (Low Pass)

Bear (Original)

Bear (High Pass)


Hybrid Image


Frisbee-Pizza

Frisbee (Original)

Frisbee (Low Pass, sigma = 7)

Pizza (Original)

Pizza (High Pass, sigma = 2)


Hybrid Image (Close Up)

Hybrid Image (Far Away)


Jimmy-Beau

Jimmy Mickle (Original)

Jimmy Mickle (Low Pass, sigma = 3)

Beau Kittredge (Original)

Beau Kittredge (High Pass, sigma = 4)


Hybrid Image (Close Up)

Hybrid Image (Far Away)


Car-Banana (Failure Case)

For this specific hybrid image, the result did not turn out well at all. This is most likely because the shapes of the objects are so different that they don't overlap well. Both images are clearly visible in the high frequency domain, so there is not really an illusion of a car hidden in the low frequencies (whereas the others have a better 'morphing' effect at different viewing distances).

Car (Original)

Car (Low Pass, sigma = 6)

Banana (Original)

Banana (High Pass, sigma = 5)


Hybrid Image (Close Up)

Hybrid Image (Far Away)


Bells and Whistles: Adding Color to Hybrid Images

As an extra feature, we will try adding color to the hybrid images to determine how it affects the output. There are three options: color the low-frequency component, color the high-frequency component, or color both. We will show each of these methods on a few of the hybrid images above and assess the results. For both images, we find that adding color diminishes the 'transformative' effect of the hybrid.

Dog-Bear (Color)

For this hybrid image, the results don't turn out so well because the high-frequency image (bear) is mostly black and white and doesn't hold much color, so coloring the high frequency doesn't change much. Coloring the low frequency causes the dog to show up clearly in both frequency channels, which ruins the illusion. Coloring both has the same effect as coloring just the low-frequency image. The image with the best 'hybrid'-ness is the original (both images grayscale).

Both with no color

Both with color

Low frequency colored

High frequency colored

Frisbee-Pizza (Color)

With this hybrid image, the effects of colorization are clearer due to the vibrancy of both pictures. We still find that having a colored low-frequency is overpowering and ruins the illusion. Coloring the high frequency in this case does cause the pizza to pop, but it is visible enough as is and doesn't need to stand out more. Coloring both causes a bit of a confusing color mess. The original option with no color still works better than the rest.

Both with no color

Both with color

Low frequency colored

High frequency colored



Gaussian and Laplacian Stacks

In this part, we will implement a Gaussian and a Laplacian stack. In a stack, images are never downsampled, so the results are all the same dimension as the original image and can all be saved in one 3D matrix (if the original image is grayscale). To create the successive levels of the Gaussian stack, we repeatedly apply the Gaussian filter without subsampling. To create the successive levels of the Laplacian stack, we subtract each level of the Gaussian stack from the level before it, leaving behind the band of frequencies removed by that blur. For the two stacks we will showcase, we chose to have 5 levels in each stack. As you can see, the image becomes increasingly low-frequency at deeper levels of the Gaussian stack, while the high-frequency content becomes more visible as we go down the Laplacian stack.
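A sketch of both stacks, using the standard convention where each Laplacian level is the difference of consecutive Gaussian levels (the blurriest Gaussian level is kept as the last entry, so the Laplacian stack sums back to the original image); the sigma value and names here are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_stack(image, levels=5, sigma=2):
    """Each level is a progressively blurrier copy; no downsampling."""
    stack = [image]
    for _ in range(levels - 1):
        stack.append(gaussian_filter(stack[-1], sigma))
    return stack

def laplacian_stack(g_stack):
    """Band-pass levels: differences of consecutive Gaussian levels,
    plus the final (blurriest) Gaussian level as the residual."""
    diffs = [g_stack[i] - g_stack[i + 1] for i in range(len(g_stack) - 1)]
    return diffs + [g_stack[-1]]
```

Keeping the residual level around is what lets us collapse the stack later during blending.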

Salvador Dali's Lincoln

Gaussian Stack

Laplacian Stack



Dog-Bear

Gaussian Stack

Laplacian Stack



Multiresolution Blending

In this part of the project, we will blend two images seamlessly using multiresolution blending, as described in the 1983 paper by Burt and Adelson. An image spline is a smooth seam joining two images together by gently distorting them. Multiresolution blending computes a gentle seam between the two images separately at each band of image frequencies, resulting in a much smoother seam. Here is the general approach behind executing this concept, taken from page 230 of the paper referenced above.

1. Build Laplacian pyramids LA and LB for images A and B.
2. Build a Gaussian pyramid GR for the region mask R.
3. Form a combined pyramid LS from LA and LB, using the nodes of GR as weights: LS = GR * LA + (1 - GR) * LB.
4. Collapse the LS pyramid to obtain the blended image.

One minor adjustment is that we will be using stacks instead of pyramids, but the algorithm above still applies. The Gaussian filter is used to 'soften' the edges of the mask, so the seam between the two images will be gradual rather than stark. The high frequencies extracted by the Laplacian filters are combined with the blurred mask to smooth the differences out. If we sum the result at each level of the stack, we get an image with a much smoother transition from one image to the other. Here are some results!
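The steps above can be sketched in one function (grayscale float images, mask with values in [0, 1]; the sigma choices and names are illustrative assumptions, not the exact project parameters):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiresolution_blend(im1, im2, mask, levels=5, sigma=2, mask_sigma=4):
    """Burt-Adelson blending with stacks: combine the Laplacian levels
    of the two images, weighted by progressively blurrier masks."""
    def g_stack(im, s):
        stack = [im]
        for _ in range(levels - 1):
            stack.append(gaussian_filter(stack[-1], s))
        return stack

    def l_stack(gs):
        # consecutive differences, with the blurriest level as residual
        return [gs[i] - gs[i + 1] for i in range(levels - 1)] + [gs[-1]]

    l1 = l_stack(g_stack(im1, sigma))
    l2 = l_stack(g_stack(im2, sigma))
    gm = g_stack(mask.astype(float), mask_sigma)   # softened mask per level
    return sum(m * a + (1 - m) * b for m, a, b in zip(gm, l1, l2))
```

Summing the weighted levels collapses the stack, which is exactly the final step of the algorithm.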

Orapple


Orange


Apple

Mask


Blended Image


Olaf on the Beach

Beach

Olaf making a snow angel

Mask


Blended Image


Selena Gomez With Jonah Hill's Face

Selena Gomez

Jonah Hill

Mask


Blended Image


Bells and Whistles: Adding Color to Multiresolution Blending

Just like we did for hybrid images, we will add color to our multiresolution blended images to see how color can enhance the effect. We did this by processing each color channel separately and then recombining the channels at the end. For each colored image, we will analyze how the color affects the output.

Olaf on the Beach: Colored

The technique we used to blend the images does nothing to adjust colors, which is apparent in this example. Although the sand and snow are blended well, the result still looks abnormal because of the stark difference in color. The blue-ish snow sticks out against the brown sand, so the colored image does not look as good as the grayscale one.

Blended Image (No Color)

Blended Image (Color)


Selena Gomez With Jonah Hill's Face: Colored

This colored image turned out very nicely. The two people have similar skin tones, so it doesn't look too out of place when we mash their faces together. The multiresolution blending helped ease the transition between the two images, resulting in an output that looks almost like a real person.

Blended Image (No Color)

Blended Image (Color)



Gradient Domain Fusion

In this part of the project, we will use gradients to seamlessly blend an object or texture from a source image into a target image. Our eyes often care much more about the gradient of an image than the overall intensity, so we find values for the target pixels that maximally preserve the gradient of the source region without changing any of the background pixels. Note that we are making a deliberate decision to ignore the overall intensity: a green hat could turn red, but it will still look like a hat.

Toy Example

In this toy example, we will use the x and y gradients from an image s, plus one pixel intensity, to reconstruct the image. This is just to help us understand how to formulate and compute least squares problems using gradients. The first step is to write the objective as a set of least squares constraints in the standard form, minimizing (Av - b)^2. Here, A is a matrix encoding the x and y gradient equations, v is the vector of output pixel values, and b is a known vector of gradient values taken from the original image s. Once we solve this system for v, we can show the reconstructed image and see that it is virtually identical to the original:

Original image

Reconstructed image using gradients

To understand how closely the images match, we can compute the error (difference in pixels) between the two images. For this specific image, the error comes out to be: 0.030511221447. Because the error is near-zero, we know that the image was successfully reconstructed.
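A sketch of this toy reconstruction, built with scipy's sparse machinery (the structure follows the description above; the function name and solver tolerances are my own choices):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def reconstruct(s):
    """Recover an image from its x/y gradients plus one pixel intensity."""
    h, w = s.shape
    n_eq = h * (w - 1) + (h - 1) * w + 1   # gradients + one intensity pin
    A = lil_matrix((n_eq, h * w))
    b = np.zeros(n_eq)
    idx = lambda y, x: y * w + x           # flatten (row, col) -> variable index
    e = 0
    for y in range(h):                     # x-gradient constraints
        for x in range(w - 1):
            A[e, idx(y, x + 1)] = 1
            A[e, idx(y, x)] = -1
            b[e] = s[y, x + 1] - s[y, x]
            e += 1
    for y in range(h - 1):                 # y-gradient constraints
        for x in range(w):
            A[e, idx(y + 1, x)] = 1
            A[e, idx(y, x)] = -1
            b[e] = s[y + 1, x] - s[y, x]
            e += 1
    A[e, idx(0, 0)] = 1                    # pin one pixel's intensity
    b[e] = s[0, 0]
    v = lsqr(A.tocsr(), b, atol=1e-10, btol=1e-10, iter_lim=2000)[0]
    return v.reshape(h, w)
```

Because the system is consistent (b was generated from s itself), the least squares solution recovers the image up to numerical error, matching the near-zero error reported above.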

Poisson Blending

To actually blend images together, we will use a technique called 'Poisson blending' (detailed here). Below, we will show the equation we are using. Given the pixel intensities of the source "s" and of the target "t", we want to solve a least squares equation for new intensity values "v" within the source region "S". Each "i" is a pixel in the source region "S", and each "j" is a 4-neighbor of "i". The first summation guides the gradient values to match the source region, and the second summation deals with pixels on the boundary.
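Written out (reconstructed here from the description above, following the standard Poisson blending formulation; N_i denotes the 4-neighbors of pixel i):

```latex
v = \arg\min_{v}
    \sum_{i \in S,\; j \in N_i \cap S}
        \bigl((v_i - v_j) - (s_i - s_j)\bigr)^2
  + \sum_{i \in S,\; j \in N_i,\; j \notin S}
        \bigl((v_i - t_j) - (s_i - s_j)\bigr)^2
```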

Below are some examples of Poisson blending. We will compare the Poisson-blended images with the naive copy-and-paste method to see just how well the algorithm worked. It is a very powerful technique when used well, and results in some pretty cool pictures!
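A minimal sketch of how such a system can be set up and solved for one grayscale channel, assuming the mask region does not touch the image border (so every pixel has four valid neighbors); names and structure are illustrative:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def poisson_blend(source, target, mask):
    """One least-squares equation per masked pixel: 4-neighbor gradients
    match the source; neighbors outside the mask are clamped to the target.
    Assumes the mask does not touch the image border."""
    region = np.argwhere(mask)                       # (y, x) of masked pixels
    index = {tuple(p): k for k, p in enumerate(region)}
    n = len(region)
    A = lil_matrix((n, n))
    b = np.zeros(n)
    for k, (y, x) in enumerate(region):
        A[k, k] = 4
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            b[k] += source[y, x] - source[ny, nx]    # source gradient term
            if (ny, nx) in index:
                A[k, index[(ny, nx)]] = -1           # unknown neighbor
            else:
                b[k] += target[ny, nx]               # boundary (target) term
    v = spsolve(A.tocsr(), b)
    out = target.copy()
    out[mask] = v
    return out
```

Note that only gradients of the source enter the system; its absolute intensities are discarded, which is exactly why a green hat could come out red.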


Laying Out Into the Abyss

Source image

Target image


Naive copy-and-paste blend

Poisson blend


An Otter's Day Off

Source image

Target image


Naive copy-and-paste blend

Poisson blend


See ya, Kim!

Source image

Target image


Naive copy-and-paste blend

Poisson blend


An Unfortunate Game of Catch: Failure Case

Like most things, Poisson blending isn't a universal solution. In this particular case, the fish toy in the output image is all grayed out and not well blended. This is due to the difference in color between the source and the target image: the fish toy is bright pink on a white background, and it is overlaid on a green patch of grass. Since Poisson blending only looks at gradients, drastic color differences between the source and the target produce wonky colors in the result. For good results, we should make sure that the source and target generally match in terms of color.

Source image

Target image


Naive copy-and-paste blend

Poisson blend


A Comparison of Blending Techniques

In this project, we have explored two different ways to blend images: using gradients to do Poisson blending, and using Gaussian/Laplacian stacks to do multiresolution blending. So which one is better? Here, we will showcase the two techniques on the same image and evaluate them. The naive copy-and-paste method is displayed as a baseline.

Olaf on the Beach: Revisited

Source image

Target image

Naive copy-and-paste blend


Multiresolution blend

Poisson blend

Here, we took 'Olaf on the Beach' from the multiresolution blending section and applied Poisson blending to the same images. From the results, it is obvious that Poisson blending works much better in this case. The snow from Olaf's snow angel matches the sand and blends almost seamlessly instead of sticking out. This is because Poisson blending adjusts the snow to match the gradient of the sand, creating a cohesive seam. Multiresolution blending, by contrast, only smooths out the edges of Olaf's cutout and places no emphasis on actually matching the two images. This results in a smooth transition, but lacks realism.

So when would you use one technique or another? Multiresolution blending only produces good results when the backgrounds of the source and target image are very similar in color and texture. Otherwise, the transition between the two images will be abrupt to the viewer because the gradients and colors don't match at all. Poisson blending is useful for a much larger variety of images. It works especially well when we don't care about the color of the source image, because the source color often gets changed through the process. Obviously, Poisson blending also works well when the images are close in color, but it produces decent results even when the colors differ. As long as there isn't a drastic change (like from pink to green in our failure case), the blend will be quite nice.