I picked the original picture of my friend and used the sharpening technique described in class: extract the high frequencies of the image with a Gaussian filter (parameter sigma), scale them by a constant alpha, and add them back to the original image.
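A minimal sketch of this unsharp-masking step, assuming grayscale float images in [0, 1] and SciPy's Gaussian filter (the function name and defaults are my own choices, not from the original code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(im, sigma=2.0, alpha=1.0):
    """Unsharp masking: add the high frequencies back, scaled by alpha."""
    low = gaussian_filter(im, sigma)   # low-pass: Gaussian blur with parameter sigma
    high = im - low                    # high-pass: the detail the blur removed
    return np.clip(im + alpha * high, 0.0, 1.0)
```

Larger alpha exaggerates edges more aggressively; for color images the same filter can be applied per channel.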
Photo Credit: Mica.
I blended three pairs of images. First, I low-pass filter one image and high-pass filter the second, each with its own sigma. Then I add them together, each weighted by its own alpha.
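This hybrid-image construction can be sketched as follows (grayscale float images assumed; the function name and parameter names are mine, not from the original code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid(im1, im2, sigma1, sigma2, alpha1=1.0, alpha2=1.0):
    """Combine the coarse structure of im1 with the fine detail of im2."""
    low = gaussian_filter(im1, sigma1)           # low-pass component of im1
    high = im2 - gaussian_filter(im2, sigma2)    # high-pass component of im2
    return np.clip(alpha1 * low + alpha2 * high, 0.0, 1.0)
```

Viewed up close the high frequencies of im2 dominate; from a distance only the low frequencies of im1 survive.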
Failure case: a line of the volleyball crosses the hollow eye of Spider-Man.
Below is an illustration of the frequency content of the images at different stages.
Using color for the high-frequency component works much better than using color for the low-frequency component. The previous examples show the results when both components are colored.
I implemented the Gaussian stack by repeatedly applying a Gaussian filter with parameter sigma to the previous image until reaching level N (e.g., N = 5). The Laplacian stack is created by subtracting from each Gaussian image the image at the level immediately below it.
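The two stacks can be sketched like this (a stack, unlike a pyramid, keeps every level at full resolution; the helper names are my own):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_stack(im, sigma, N=5):
    """Level 0 is the original; each next level is a further Gaussian blur."""
    stack = [im]
    for _ in range(N):
        stack.append(gaussian_filter(stack[-1], sigma))
    return stack

def laplacian_stack(gstack):
    """Each Laplacian level is the detail lost between consecutive Gaussian levels."""
    return [gstack[i] - gstack[i + 1] for i in range(len(gstack) - 1)]
```

A useful sanity check: summing all Laplacian levels plus the coarsest Gaussian level reconstructs the original image exactly.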
Lincoln (sigma=2, top: Gaussian, bottom: Laplacian)
Apple and Earth (sigma=5, top: Gaussian, bottom: Laplacian)
I first computed the Gaussian and Laplacian stacks of both input images. Then I created a binary (0/1) mask, either manually or with Python, and computed a Gaussian stack for the mask. At each level of the Laplacian stacks, I weight the two input images by the mask at the corresponding level and blend them together. I blend the coarsest level of the two Gaussian stacks the same way. Finally, I sum all the blended images, i.e., the blended Laplacian levels and the blended most-blurred Gaussian image.
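The steps above can be sketched as follows (the stack helpers are redefined here so the snippet runs standalone; all names and defaults are my own assumptions, not the original code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_stack(im, sigma, N):
    stack = [im]
    for _ in range(N):
        stack.append(gaussian_filter(stack[-1], sigma))
    return stack

def laplacian_stack(gstack):
    return [gstack[i] - gstack[i + 1] for i in range(len(gstack) - 1)]

def multires_blend(im1, im2, mask, sigma=5.0, N=5):
    """Blend im1 and im2 under a 0/1 mask, level by level."""
    g1 = gaussian_stack(im1, sigma, N)
    g2 = gaussian_stack(im2, sigma, N)
    gm = gaussian_stack(mask.astype(float), sigma, N)   # soften the mask per level
    l1, l2 = laplacian_stack(g1), laplacian_stack(g2)
    # Blend the coarsest Gaussian level, then add each blended Laplacian level.
    out = gm[-1] * g1[-1] + (1 - gm[-1]) * g2[-1]
    for a, b, m in zip(l1, l2, gm):
        out = out + m * a + (1 - m) * b
    return out
```

Blurring the mask at each level is what hides the seam: coarse levels transition over a wide band while fine detail switches over sharply.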
Below are the Laplacian stacks that I computed, as well as the masked input images used to create them.
Spider-Man, Volleyball, Blended image
See above.
Denote the width and height of the image by w and h. I first created a matrix A of size (w*h, w*h), in which each row encodes the discrete Laplacian equation at one pixel. For the four corner pixels, the Laplacian is 2 times the pixel's intensity minus the intensities of its two neighbors. For pixels on the edges, it is 3 times the intensity minus the intensities of the three neighbors. For interior pixels, it is 4 times the intensity minus the intensities of the four neighbors. The top-left pixel retains its intensity, as suggested by the spec. In this way, I formulated a least-squares system and reconstructed the image.
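A sketch of that toy reconstruction, assuming a grayscale float image and SciPy's sparse least-squares solver (the function name and use of `lsqr` are my own choices):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def reconstruct(src):
    """Recover src from its discrete Laplacian plus one anchored pixel."""
    h, w = src.shape
    n = h * w
    idx = lambda y, x: y * w + x
    A = lil_matrix((n, n))
    for y in range(h):
        for x in range(w):
            r = idx(y, x)
            if (y, x) == (0, 0):
                A[r, r] = 1.0          # top-left pixel keeps its intensity
                continue
            nbrs = [(y + dy, x + dx)
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            A[r, r] = len(nbrs)        # 2, 3, or 4 by corner/edge/interior
            for ny, nx in nbrs:
                A[r, idx(ny, nx)] = -1.0
    A = A.tocsr()
    b = A @ src.ravel()                # right-hand side from the source image
    v = lsqr(A, b)[0]
    return v.reshape(h, w)
```

Since the right-hand side is computed from the source itself, the solver should reproduce the source almost exactly, which makes this a good correctness check before moving to blending.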
In this part, we use Poisson blending to blend a source image into a target image seamlessly.
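A sketch of the Poisson blending solve for grayscale images (the `mixed` flag implements the mixed-gradient variant discussed below; the per-equation formulation, names, and solver choice are my own assumptions):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def poisson_blend(src, tgt, mask, mixed=False):
    """Solve for pixels inside `mask` so their gradients match the source
    (or, with mixed gradients, whichever of source/target is stronger),
    while agreeing with the target on the region boundary."""
    h, w = tgt.shape
    ys, xs = np.nonzero(mask)
    pos = {(y, x): i for i, (y, x) in enumerate(zip(ys, xs))}
    n = len(pos)
    A = lil_matrix((4 * n, n))         # one equation per pixel-neighbor pair
    b = np.zeros(4 * n)
    row = 0
    for (y, x), i in pos.items():
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w):
                continue
            gs = src[y, x] - src[ny, nx]   # source gradient
            gt = tgt[y, x] - tgt[ny, nx]   # target gradient
            g = gt if mixed and abs(gt) > abs(gs) else gs
            A[row, i] = 1.0
            if (ny, nx) in pos:
                A[row, pos[(ny, nx)]] = -1.0
                b[row] = g
            else:
                b[row] = g + tgt[ny, nx]   # boundary neighbor: target value is known
            row += 1
    v = lsqr(A.tocsr()[:row], b[:row])[0]
    out = tgt.copy()
    out[ys, xs] = v
    return out
```

For color images the same solve is run independently on each channel; the boundary equations are what pull the pasted region toward the target's colors.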
More Results:
Failure case: the smiley face retained its yellow color because the source picture's border was white, so Poisson blending preserved the difference between the color of the face and its surroundings. The leftmost penguin was added by Poisson blending; its head shows a "bleeding" effect because the target image's gradients differ strongly from the source's in that area.
Contrast between multiresolution blending and Poisson blending.
Neither result looks good, probably because the masks confused the Poisson solver's handling of color.
Contrast between using the source gradients and using mixed gradients.