Project 3: Fun with Frequencies and Gradients

Part 1: Image Sharpening

We sharpened an image by taking an image, subtracting from it the image convolved with a Gaussian filter, scaling the resulting high-frequency detail by alpha, and then adding it back onto the original image (the "unsharp mask" technique). Alpha was chosen through testing, and 0.5 generally worked well. We used a Gaussian kernel with sigma 4 and a kernel size double that.
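The procedure above can be sketched as follows (a minimal grayscale version; the function name and the use of `scipy.ndimage.gaussian_filter` are our own choices, not part of the original code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(image, sigma=4.0, alpha=0.5):
    """Unsharp masking: add back alpha times the high-frequency detail.

    image is a float grayscale array in [0, 1]; sigma and alpha match
    the values reported above.
    """
    blurred = gaussian_filter(image.astype(float), sigma=sigma)
    detail = image - blurred                 # high-pass component
    return np.clip(image + alpha * detail, 0.0, 1.0)
```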

Examples

Here are some results!

Oski

Blanket

Part 2: Hybrid Images

We create a hybrid image by selecting two images, one of which we put through a low-pass filter (by convolving it with a Gaussian kernel) and one of which we put through a high-pass filter (similar to part 1 of the project). We then align the images and overlay them to create a hybrid image.
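Assuming the two images are already aligned, the combination step can be sketched like this (the sigma defaults here are illustrative, not the values used for every example below):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid(im_low, im_high, sigma_low=20.0, sigma_high=60.0):
    """Hybrid image: low frequencies of im_low + high frequencies of im_high.

    Both inputs are pre-aligned float grayscale arrays in [0, 1] with
    matching shapes.
    """
    low = gaussian_filter(im_low.astype(float), sigma_low)
    high = im_high.astype(float) - gaussian_filter(im_high.astype(float), sigma_high)
    return np.clip(low + high, 0.0, 1.0)
```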

Examples

Here are Derek and Nutmeg, with a high-pass sigma of 60 and a low-pass sigma of 20.

Derek

Nutmeg

Derek and Nutmeg

Here are Michelle and a bunny, with high- and low-pass sigmas of 10.

Michelle

Bunny

Michelle and Bunny

Here are a dish of spaghetti and a mop, with high- and low-pass sigmas of 10.

Spaghetti

Mop

Spaghetti and mop

Here are a carrot and fish that didn't really work out.

Carrot

Fish

Carrot and fish

It probably didn't work out well because the carrot is not inherently detailed, so blurring it further reduces it to an indistinguishable blob. The clean white background of the carrot picture also makes it harder for the fish to look blended into the scene.

Some additional analysis of our Michelle and bunny photos (photo and log Fourier transform):

Michelle:

Michelle

Michelle FT

Michelle (low pass filter):

Michelle low pass

Michelle low pass FT

Bunny:

Bunny

Bunny FT

Bunny (high pass filter):

Bunny high pass

Bunny high pass FT

Michelle and Bunny:

Michelle and Bunny

Michelle and Bunny FT

Part 3: Gaussian and Laplacian Stacks

We create a Gaussian stack for an image by repeatedly convolving the image with the same Gaussian kernel (without downsampling). We create a Laplacian stack by taking the Gaussian stack of an image and then subtracting level i from level i-1 of the Gaussian stack for each level i. We then append the last image of the Gaussian stack to the end of the Laplacian stack, so that the Laplacian stack sums back to the original image.
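The two stacks can be sketched as follows (function names and the per-level sigma are our own choices for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_stack(image, levels=5, sigma=2.0):
    """Repeatedly blur with the same Gaussian; no downsampling (a stack, not a pyramid)."""
    stack = [image.astype(float)]
    for _ in range(levels - 1):
        stack.append(gaussian_filter(stack[-1], sigma))
    return stack

def laplacian_stack(image, levels=5, sigma=2.0):
    """Level i is G[i] - G[i+1]; the last Gaussian level is appended,
    so the whole stack telescopes back to the original image."""
    g = gaussian_stack(image, levels, sigma)
    return [g[i] - g[i + 1] for i in range(levels - 1)] + [g[-1]]
```

A useful sanity check is that summing all levels of the Laplacian stack recovers the input image exactly.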

Examples

For example, here are the Gaussian and Laplacian stacks of an apple image and Dali's Lincoln.

Original images:

Apple

Lincoln

Gaussian stacks:

Apple, Gaussian stack

Lincoln, Gaussian stack

Laplacian stacks:

Apple, Laplacian stack

Lincoln, Laplacian stack

Part 4: Multiresolution Blending

To blend images, we have two images and a mask (typically a binary image of 0 and 1 values; by default, an image that is half black and half white). We take the Laplacian stacks of the two images and the Gaussian stack of the mask, and for each level i we compute M_i A_i + (1 - M_i) B_i, where M_i is the mask at level i, A_i is the first image's Laplacian level i, and B_i is the second image's. At the end, we sum all the newly computed levels together to get the blended image.

Results

Apple and Orange

Apple

Orange

Apple and Orange

Michelle and Yoona

Michelle

Yoona

Michelle and Yoona

Butterfly and Moth

Butterfly

Moth

Butterfly and Moth

Michelle and Teapot

Michelle

Teapot

Michelle and Teapot

This image uses an irregular mask depicted below:

Mask

For example, for the Michelle and Yoona examples, here are the Laplacian stacks for Michelle and Yoona respectively, as well as the Gaussian stack for the mask:

Michelle, Laplacian Stack

Yoona, Laplacian Stack

Mask, Gaussian Stack

Part 5: Toy Problem

We try to reconstruct an image by solving for pixel values that match the x- and y-gradients of the original image, with the additional constraint that the upper-left pixel of the reconstruction has the same brightness value as in the original. We set up this system of equations and solve it with a least-squares solver to obtain the reconstructed pixel values.
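A minimal sketch of this setup, assuming a small grayscale image and using a sparse least-squares solver (the function name and matrix layout are our own):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def reconstruct(image):
    """Rebuild an image from its x/y gradients plus one anchor pixel."""
    h, w = image.shape
    idx = lambda y, x: y * w + x             # flatten 2D pixel index
    n_eq = h * (w - 1) + (h - 1) * w + 1
    A = lil_matrix((n_eq, h * w))
    b = np.zeros(n_eq)
    e = 0
    for y in range(h):                        # x-gradient constraints
        for x in range(w - 1):
            A[e, idx(y, x + 1)] = 1
            A[e, idx(y, x)] = -1
            b[e] = image[y, x + 1] - image[y, x]
            e += 1
    for y in range(h - 1):                    # y-gradient constraints
        for x in range(w):
            A[e, idx(y + 1, x)] = 1
            A[e, idx(y, x)] = -1
            b[e] = image[y + 1, x] - image[y, x]
            e += 1
    A[e, idx(0, 0)] = 1                       # anchor top-left brightness
    b[e] = image[0, 0]
    v = lsqr(A.tocsr(), b)[0]
    return v.reshape(h, w)
```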

Original image:

Toy original

Reconstructed image:

Toy reconstructed

Part 6: Poisson Blending

For the last part of the project, we try to blend two images: a source image and a target image. Our goal is to extract a snippet of the source image and "paste" it onto our target image, and have the resulting image look natural, i.e. without any seams. We do this by noting that viewers typically don't care about the absolute brightness of pixels in an image, but rather about gradients, i.e. how bright pixels are relative to their neighbors. Therefore, when we paste in our source snippet, we try to maintain the same gradients within the source image while also ensuring the gradients across the border between the source and target match those of the original source image. We do this by finding the new values v to replace the source image such that:

Equation

where S is our "mask", i.e. the set of locations we are trying to replace in our target image, v_i is a new pixel value within this mask, N_i is the set of neighbors of pixel i, d_ij is the difference between pixels i and j in the source image, and t_j is the brightness of pixel j in the target image. We define the neighbors of a pixel as the pixels in the 5 by 5 window around it.
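Based on the variable definitions above, the objective being minimized can be written as (a reconstruction of the standard Poisson blending least-squares form, not copied from the original page):

```latex
v = \arg\min_{v}
  \sum_{i \in S,\ j \in N_i \cap S} \bigl((v_i - v_j) - d_{ij}\bigr)^2
  \;+\;
  \sum_{i \in S,\ j \in N_i \setminus S} \bigl((v_i - t_j) - d_{ij}\bigr)^2
```

The first sum matches gradients between pairs of new pixels inside the mask; the second handles border pixels, whose neighbors j lie outside the mask and therefore keep their target values t_j.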

Examples

Target image:

Chili cheese fries

Source image:

Mac

Directly copying source into target:

Chili mac fries, orig

Poisson blending:

Chili mac fries

We defined the neighborhood of a pixel as the 5 x 5 box around it (rather than the 3 x 3 box) in order to get a smoother effect.

Next example,

Porridge

Kids

Porridge kids

Another,

Poptart

Tyrion

Poptart Tyrion

An example that didn't work,

Lawn

Acorn

Acorn in lawn

This didn't work well because the background image had more texture than the source image: the grass surrounding the acorn in the source image couldn't "pick up" the grassy texture of the background image, so the result looks unnatural.

For comparison with the traditional blending above, here is an example of using Poisson blending on Michelle and Yoona:

Michelle and Yoona poisson

Compared to blending from before:

Michelle and Yoona blended

Traditional blending seems to work better for this example because it does more "blending": the transition region in traditional blending covers a wide band, whereas Poisson blending only adjusts based on the 1-3 pixels closest to the border. Poisson blending seems to work better for source images with a simple background pasted onto a simply and similarly colored region of the target image, whereas the traditionally blended image (from the previous parts) weights both images equally.