By Saurav Shroff
Part 1.1
Dx:
Dy:
Edge Magnitudes:
Edge magnitudes are computed as sqrt(dx^2 + dy^2): the magnitude is the hypotenuse of a right triangle whose legs are the dx and dy filter responses.
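Since the code itself isn't shown here, the computation can be sketched in NumPy as follows; this is a minimal illustration assuming simple [1, -1] finite-difference filters, and the function and variable names are my own:

```python
import numpy as np

def edge_magnitude(img):
    # Finite-difference responses: Dx = [1, -1] along rows,
    # Dy = [1, -1] along columns.
    dx = np.zeros_like(img)
    dy = np.zeros_like(img)
    dx[:, :-1] = np.diff(img, axis=1)
    dy[:-1, :] = np.diff(img, axis=0)
    # Magnitude is the hypotenuse of the two responses.
    return np.sqrt(dx**2 + dy**2)

# A vertical step edge: the magnitude fires only along the step.
img = np.zeros((4, 4))
img[:, 2:] = 1.0
mag = edge_magnitude(img)
binary = mag > 0.26  # the threshold used below
```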
Threshold (0.26):
Part 1.2
The notable differences are that the detected edges are wider and have a lower average magnitude. This makes sense: in an image blurred with a Gaussian filter, the shift from one color to another along any axis perpendicular to an edge happens more gradually, so the derivative response is spread across more pixels and is smaller at each one.
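As a small sanity check on this claim, here is a NumPy sketch (the sigma and helper names are illustrative, not necessarily the values used for the figures): blurring a step edge before differentiating spreads the response over more pixels and lowers its peak.

```python
import numpy as np

def gaussian_kernel_1d(sigma):
    x = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    # Separable blur: one 1-D convolution per axis, edge-padded.
    k = gaussian_kernel_1d(sigma)
    r = len(k) // 2
    conv = lambda a: np.convolve(np.pad(a, r, mode='edge'), k, mode='valid')
    out = np.apply_along_axis(conv, 1, img)
    return np.apply_along_axis(conv, 0, out)

img = np.zeros((8, 8))
img[:, 4:] = 1.0
raw_dx = np.abs(np.diff(img, axis=1))                       # sharp step: one tall spike
blur_dx = np.abs(np.diff(gaussian_blur(img, 1.0), axis=1))  # blurred: wide, low response
```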
Dx:
Dy:
Edge Magnitudes:
Threshold (0.065):
Part 1.3
Original:
Rotated (-3.5 degrees):
Rotated and cropped:
Hist angles for original image:
Hist angles for rotated image (-3.5 degrees):
Note: the x-axis is in radians and the y-axis is the count.
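The histograms above are built from gradient orientations; a minimal NumPy sketch of that computation follows (the bin count and magnitude threshold are illustrative, not the exact values used):

```python
import numpy as np

def angle_histogram(img, bins=90, mag_thresh=0.1):
    # Gradient orientation (radians) at every pixel with a strong edge.
    dx = np.zeros_like(img)
    dy = np.zeros_like(img)
    dx[:, :-1] = np.diff(img, axis=1)
    dy[:-1, :] = np.diff(img, axis=0)
    mag = np.hypot(dx, dy)
    ang = np.arctan2(dy, dx)[mag > mag_thresh]
    return np.histogram(ang, bins=bins, range=(-np.pi, np.pi))

# A vertical step edge: every strong gradient points along +x (angle 0).
img = np.zeros((4, 4))
img[:, 2:] = 1.0
hist, edges = angle_histogram(img)
```

To pick the straightening angle, one can then rotate the image over a range of candidate angles and keep the one whose histogram puts the most mass near 0 and ±π/2.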
Original:
Rotated (-5.5 degrees):
Rotated and cropped:
Hist original:
Hist rotated (-5.5 degrees):
Original:
Rotated (-8.5 degrees):
Rotated and cropped:
Original edge hist:
This one was a slight failure case because of the close-up perspective of the subject. The algorithm essentially aligned the background lines of the image to be straight, but left the subject (the pen) tilted almost 3 degrees to the right.
Rotated edge hist:
Original:
Rotated (-4 degrees):
Rotated and Cropped:
Original Hist:
Rotated Hist:
This image is a definite failure case because the image's intended orientation is not well correlated with its edge orientations. The spiral shape is meant to look the way it does in the original image, but the algorithm rotates it anyway in order to maximize horizontal and vertical edges.
Part 2.1
Adding extra high frequencies does make images noticeably sharper, especially when they aren't very high in resolution. It seems as though the unsharp mask inflates the apparent resolution of the image slightly, which doesn't matter much if the image is already high resolution.
When it comes to "undoing" blurring, that is, bringing sharpness back into a blurred image, the filter performs poorly: it restores a slight amount of apparent sharpness, but leaves the image significantly less sharp than it was before blurring.
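For reference, the unsharp mask can be sketched as `img + alpha * (img - blur(img))`; the sigma and alpha below are illustrative, not the values used for the figures. The overshoot and undershoot at edges are what create the extra apparent sharpness, and re-sharpening a blurred step does not recover the original:

```python
import numpy as np

def gaussian_kernel_1d(sigma):
    x = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    k = gaussian_kernel_1d(sigma)
    r = len(k) // 2
    conv = lambda a: np.convolve(np.pad(a, r, mode='edge'), k, mode='valid')
    return np.apply_along_axis(conv, 0, np.apply_along_axis(conv, 1, img))

def unsharp_mask(img, sigma=1.0, alpha=1.0):
    # Add back (a multiple of) the high frequencies the Gaussian removes.
    return img + alpha * (img - gaussian_blur(img, sigma))

img = np.zeros((8, 8))
img[:, 4:] = 1.0
sharp = unsharp_mask(img)                           # overshoots at the edge
reblur = unsharp_mask(gaussian_blur(img, 1.0))      # cannot undo the blur
```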
Original:
Un-sharp mask:
Blur:
Blur then Un-sharp Mask:
For this image (taken from an iPhone), it is worth noting that the filter makes the image appear sharper, but less pleasing to the eye. My guess is that this is a combination of two factors. Firstly, the original image is already both high-resolution and sharp, so sharpening is somewhat unnecessary. Secondly, I am sure that people with extensive knowledge of the subject designed the phone's post-processing algorithms to optimize a human's perception of the image. If a simple filter made an iPhone photo look better, the developers would likely have already included that filter in the camera software.
Original:
Un-sharp Mask:
Blur:
Blur then Un-sharp Mask:
More examples:
Part 2.2
Example 1
Originals:
Hybrid:
Input0 log FFT:
Input1 log FFT:
Hybrid log FFT:
Example 2
Originals:
Hybrid:
This one is a slight failure because the faces point in different directions. As a result, unless you are very close or very far away, the result clearly reads as a combination of two separately visible images.
Example 3
Originals:
Hybrid:
This case is a definite failure because the difference between the inputs is mostly in color. The algorithm does well when the two inputs have different shapes, so that each is only visible from either close up or far away. But when the images share mostly the same shapes and differ mainly in color (and filtering won't change color significantly), the result simply looks like an average of the two inputs. In fact, when I computed the average of the two, I had a hard time differentiating the hybrid from the average.
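For reference, the hybrid construction is just the low-pass of one image plus the high-pass of the other; here is a minimal grayscale sketch (sigmas illustrative, not the values actually used). It also illustrates the failure mode above: the high-pass of a roughly constant-colored region is near zero, so the hybrid collapses toward the low-pass input alone.

```python
import numpy as np

def gaussian_kernel_1d(sigma):
    x = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    k = gaussian_kernel_1d(sigma)
    r = len(k) // 2
    conv = lambda a: np.convolve(np.pad(a, r, mode='edge'), k, mode='valid')
    return np.apply_along_axis(conv, 0, np.apply_along_axis(conv, 1, img))

def hybrid(im_low, im_high, sigma_low=2.0, sigma_high=1.0):
    # Low frequencies of one image + high frequencies of the other.
    return gaussian_blur(im_low, sigma_low) + (im_high - gaussian_blur(im_high, sigma_high))

im_low = np.outer(np.linspace(0.0, 1.0, 8), np.ones(8))  # smooth ramp
im_flat = np.full((8, 8), 0.5)                           # featureless "color-only" input
out = hybrid(im_low, im_flat)  # degenerates to just the low-pass image
```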
Part 2.3
These are the Gaussian and Laplacian stacks (depth = 10) for the Dali painting. Note that the Laplacian stacks shown here omit the last level of the Gaussian stack; this implementation detail is corrected for in my blending code (which actually uses the stacks).
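A minimal sketch of how such stacks can be built (repeated blurs at full resolution, with no downsampling, hence "stack" rather than "pyramid"; sigma and depth here are illustrative). In this sketch I append the final Gaussian level to the Laplacian stack so the levels sum back to the original, which is the detail handled inside my blending code:

```python
import numpy as np

def gaussian_kernel_1d(sigma):
    x = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    k = gaussian_kernel_1d(sigma)
    r = len(k) // 2
    conv = lambda a: np.convolve(np.pad(a, r, mode='edge'), k, mode='valid')
    return np.apply_along_axis(conv, 0, np.apply_along_axis(conv, 1, img))

def stacks(img, depth=10, sigma=1.0):
    g = [img]
    for _ in range(depth - 1):
        g.append(gaussian_blur(g[-1], sigma))
    # Differences of consecutive Gaussian levels, plus the final Gaussian
    # level so the Laplacian stack telescopes back to the input image.
    lap = [g[i] - g[i + 1] for i in range(depth - 1)] + [g[-1]]
    return g, lap

img = np.outer(np.linspace(0.0, 1.0, 8), np.ones(8))
g, lap = stacks(img, depth=5)
```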
These are the Gaussian and Laplacian stacks (depth = 10) for my favorite hybrid image. Comparing the first to the last gives a good idea of the transformation which is easy to miss given how similar the facial expressions are.
Part 2.4
Base example (apple orange):
"Bad" Example:
I wanted to make one image from pictures I took at home. Ignoring the misalignment due to poor picture-taking, this picture demonstrates the concept of multiresolution blending well (look at the parts where our skin lines up).
This one is also interesting, but demonstrates the functionality of a unique shape mask.
The mask here is a quarter-circle with radius 1250 pixels (imagine a circle with its center at the top left of the image). Note the smoothness of the blending between the wall and the hair; it's almost impossible to tell unless you look closely.
Here are the Laplacian stacks of both images and the mask at all of its levels.
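Putting the pieces together, the blend mixes the two images' Laplacian stacks level by level, weighted by the corresponding level of a Gaussian stack of the mask. Here is a minimal sketch, including a quarter-circle mask like the one described above (the image size, radius, depth, and sigma are illustrative, not the values actually used):

```python
import numpy as np

def gaussian_kernel_1d(sigma):
    x = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    k = gaussian_kernel_1d(sigma)
    r = len(k) // 2
    conv = lambda a: np.convolve(np.pad(a, r, mode='edge'), k, mode='valid')
    return np.apply_along_axis(conv, 0, np.apply_along_axis(conv, 1, img))

def stacks(img, depth, sigma):
    g = [img]
    for _ in range(depth - 1):
        g.append(gaussian_blur(g[-1], sigma))
    lap = [g[i] - g[i + 1] for i in range(depth - 1)] + [g[-1]]
    return g, lap

def blend(im_a, im_b, mask, depth=4, sigma=1.0):
    # Mix the Laplacian levels of the two images, weighted at each level
    # by the progressively blurrier Gaussian stack of the mask.
    _, lap_a = stacks(im_a, depth, sigma)
    _, lap_b = stacks(im_b, depth, sigma)
    g_mask, _ = stacks(mask, depth, sigma)
    return sum(m * a + (1 - m) * b for m, a, b in zip(g_mask, lap_a, lap_b))

# Quarter-circle mask: a circle centered at the top-left corner of the image.
yy, xx = np.mgrid[0:16, 0:16]
mask = ((xx**2 + yy**2) <= 10**2).astype(float)
img = np.outer(np.linspace(0.0, 1.0, 16), np.ones(16))
out = blend(img, img, mask)  # blending an image with itself is the identity
```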