The first part of this project uses frequency-based methods to enhance and blend images.
In this part I implemented an image sharpening technique using the "unsharp mask": a high-pass filter obtained by subtracting a Gaussian low-pass filter from the identity filter. Adding a scaled copy of this high-pass signal back to the image accentuates small details, which are the image's high-frequency content.
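The unsharp mask can be sketched in a few lines; this is a minimal version assuming a float image in [0, 1] and using scipy's Gaussian filter (the `sigma` and `alpha` values are illustrative, not the ones I tuned):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, alpha=1.0):
    """Sharpen by adding back a scaled high-pass copy of the image.

    high_pass = image - low_pass, so the result is
    image + alpha * (image - blurred) = (1 + alpha) * image - alpha * blurred.
    """
    blurred = gaussian_filter(image, sigma)
    return np.clip(image + alpha * (image - blurred), 0.0, 1.0)
```

Larger `alpha` exaggerates the high frequencies more aggressively; the clip keeps overshoot near strong edges in the valid range.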
In this part, I implemented hybrid images as described in a 2006 paper by blending low frequencies from one image with high frequencies from another. This relies on the insight that perception is dominated by the high-frequency signal when it is available, but only the low frequencies survive when the image is small or viewed from far away. This insight can be used to create images which are perceived differently at different viewing distances.
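The core of the technique is just two Gaussian filters; here is a minimal sketch assuming aligned, same-size float images in [0, 1] (the cutoff sigmas are placeholders and in practice need per-image-pair tuning):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_image(im_low, im_high, sigma_low=8.0, sigma_high=4.0):
    """Keep low frequencies of im_low, add high frequencies of im_high."""
    low = gaussian_filter(im_low, sigma_low)                  # low-pass
    high = im_high - gaussian_filter(im_high, sigma_high)     # high-pass
    return np.clip(low + high, 0.0, 1.0)
```

Up close, the high-pass content of `im_high` dominates; from a distance (or scaled down), only the low-pass content of `im_low` remains visible.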
The main effect I observed of adding color was that the high frequencies stood out significantly more. Below is a side-by-side comparison of color vs. greyscale hybrid images. My guess is that color provides additional information, and that our perception of fine detail benefits from that extra information much more than our perception of coarse detail does.
I implemented Gaussian and Laplacian stacks to more finely examine images at various spatial frequencies.
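Unlike pyramids, stacks keep every level at full resolution. A minimal sketch, assuming a float image and a fixed blur per level (function names and parameters here are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_stack(image, levels=5, sigma=2.0):
    """Repeatedly blur without downsampling; level 0 is the original."""
    stack = [image]
    for _ in range(levels - 1):
        stack.append(gaussian_filter(stack[-1], sigma))
    return stack

def laplacian_stack(image, levels=5, sigma=2.0):
    """Differences of adjacent Gaussian levels, plus the final low-pass residual."""
    g = gaussian_stack(image, levels, sigma)
    return [g[i] - g[i + 1] for i in range(levels - 1)] + [g[-1]]
```

A nice sanity check is that the Laplacian stack sums back to the original image, since the differences telescope.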
I tried out a different optical illusion to see if the same principle would apply. This illusion has less low-frequency information (solid color patches) to support it and relies mostly on edges and object shapes. I found that while the Gaussian stack still works, the illusion begins to fade as we go deeper in the stack. This is in contrast to the Lincoln image, which stays quite visible as we keep low-passing it. I'm guessing this is because hard edges have signal at all frequencies, so the illusion loses strength as we low-pass the image, whereas the Lincoln signal is concentrated in the low frequencies.
The Laplacian stack helps confirm my guess that the scenery is mostly lost in the first few low-pass filtering steps.
In this part, I used an alpha channel with varying smoothness of transition, blending the low frequencies more strongly than the high frequencies. This reduces ghosting of fine details while also avoiding hard seams between colors and coarse details.
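The idea above amounts to blending each Laplacian band of the two images with a correspondingly blurred copy of the mask: deeper (lower-frequency) bands get a smoother alpha, fine bands keep a tight transition. A sketch under those assumptions (single-channel float images, illustrative `levels` and `sigma`):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multires_blend(im_a, im_b, mask, levels=5, sigma=2.0):
    """Blend each Laplacian band with a progressively smoother mask."""
    def g_stack(x):
        s = [x]
        for _ in range(levels - 1):
            s.append(gaussian_filter(s[-1], sigma))
        return s

    ga, gb, gm = g_stack(im_a), g_stack(im_b), g_stack(mask)
    la = [ga[i] - ga[i + 1] for i in range(levels - 1)] + [ga[-1]]
    lb = [gb[i] - gb[i + 1] for i in range(levels - 1)] + [gb[-1]]
    # alpha-blend band by band, then collapse the stack by summing
    out = sum(gm[i] * la[i] + (1 - gm[i]) * lb[i] for i in range(levels))
    return np.clip(out, 0.0, 1.0)
```

Since each Gaussian level of the mask is smoother than the last, the coarse content transitions gradually while the fine detail switches over sharply, which is exactly the ghosting/seam trade-off described above.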
For all of these examples, I included a baseline analysis with a hard seam (using the original mask) and with uniform blending, i.e. blending all frequencies with the same smoothed mask. Interestingly, uniform blending performed just as well or even slightly better in every example. I may have chosen my examples poorly, but I expected uniform blending either to exhibit more ghosting or to leave harder seams.
In this part, we switch from the frequency domain to the gradient domain. We spend most of our time on a technique called Poisson blending, in which a least squares solver finds pixel values whose gradients match the source patch's gradients while agreeing with the target image along the patch boundary, so the patch blends in smoothly while its relevant features are preserved.
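Concretely, the least squares problem reduces to a sparse linear system (the discrete Poisson equation): each in-mask pixel satisfies four gradient constraints against its neighbors, with out-of-mask neighbors pinned to the target. A sketch under simplifying assumptions (single channel, source pre-aligned to the target, mask not touching the image border; names are my own, not from a library):

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

def poisson_blend(source, target, mask):
    """Solve for in-mask pixels whose gradients match the source,
    with boundary values taken from the target."""
    ys, xs = np.nonzero(mask)
    idx = -np.ones(target.shape, dtype=int)
    idx[ys, xs] = np.arange(len(ys))         # unknown index per in-mask pixel

    A = lil_matrix((len(ys), len(ys)))
    b = np.zeros(len(ys))
    for k, (y, x) in enumerate(zip(ys, xs)):
        A[k, k] = 4.0
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            b[k] += source[y, x] - source[ny, nx]   # match source gradient
            if mask[ny, nx]:
                A[k, idx[ny, nx]] = -1.0            # neighbor is an unknown
            else:
                b[k] += target[ny, nx]              # neighbor fixed to target

    out = target.copy()
    out[ys, xs] = spsolve(csr_matrix(A), b)
    return out
```

The dense loop is slow for large masks; vectorizing the constraint construction is the usual optimization, but this form makes the structure of the system easiest to see.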
This example worked pretty well, though a slight seam occurred. This would likely be resolved by mixed gradients, since the source image doesn't account for the slight gradient in the blue of the target image.
Again, a slight seam due to the nonzero gradient in the target image.
This one didn't work too well because the source image content didn't have enough buffer on the sides, and so part of the source image's edge contained the glowing light pattern and not the black background. The least squares solver sees a black target image and a white/blue source image, and tries to distribute the error throughout by blacking out parts of the light pattern. It did lead to a cool effect, as if it were not a glowing light but rather an ancient carving or painting on the front of the notebook.
I tried the Poisson blend against one of the examples from the frequency domain section. The frequency domain technique worked better, though I imagine that if I had used mixed gradient blending, the output image would have picked up on the hand texture and actually done better.
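Mixed gradient blending, which I suspect would have helped in several of these examples, is a small change to the system: for each neighbor pair, keep whichever of the source or target gradients has the larger magnitude, so strong target texture (like the hand) survives. A self-contained sketch under the same assumptions as before (single channel, aligned images, mask away from the border):

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

def mixed_gradient_blend(source, target, mask):
    """Poisson blend, but each constraint uses the stronger of the
    source and target gradients."""
    ys, xs = np.nonzero(mask)
    idx = -np.ones(target.shape, dtype=int)
    idx[ys, xs] = np.arange(len(ys))

    A = lil_matrix((len(ys), len(ys)))
    b = np.zeros(len(ys))
    for k, (y, x) in enumerate(zip(ys, xs)):
        A[k, k] = 4.0
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            ds = source[y, x] - source[ny, nx]
            dt = target[y, x] - target[ny, nx]
            b[k] += ds if abs(ds) > abs(dt) else dt   # keep stronger gradient
            if mask[ny, nx]:
                A[k, idx[ny, nx]] = -1.0
            else:
                b[k] += target[ny, nx]

    out = target.copy()
    out[ys, xs] = spsolve(csr_matrix(A), b)
    return out
```

Because the target's gradients can win inside the mask, texture from the destination (here, the hand) would bleed through where the source is flat, which is exactly the behavior I was hoping for.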