I am not sure why these images are of such low quality; they look fine in the Jupyter Notebook. I suspect it is caused by the figure size I am saving, although I am saving at the same size. I am very sorry about this; due to the low quality, the edges and blends may be hard to see.
For this part we simply had to take pictures of the scenes we wanted to mosaic together. Here are some sample images.
In order to recover the homographies, we first had to choose the correspondences between each image and the reference; specifically, we need at least 4 point pairs per image. After that, we can use least squares to solve for the homography. First, we set up the homography equation like the following.
We can further expand and write it in the form of:
Next, we apply least squares to the equation, like the following.
Note that applying the homography to a point yields a result of the form [wx, wy, w]. So when we want to use the result, we have to divide by w to recover the actual (x, y) coordinates.
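The least-squares setup and the divide-by-w step can be sketched as follows (a minimal sketch; the function names are my own):

```python
import numpy as np

def compute_homography(src, dst):
    """Least-squares estimate of the 3x3 homography mapping src -> dst.

    src, dst: (N, 2) arrays of corresponding (x, y) points, N >= 4.
    Fixes h33 = 1 and solves the resulting 2N x 8 system A h = b.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b += [xp, yp]
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, pts):
    """Apply H to (N, 2) points; the raw result is [wx, wy, w], so we
    divide by w before returning Cartesian coordinates."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

With exactly 4 correspondences the system is square and the fit is exact; with more points, least squares averages out small clicking errors in the correspondences.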
The next step is to apply the homography to our images to create a warped image. An example is shown below.
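The warping itself can be sketched with inverse warping (a simplified version with nearest-neighbor sampling; the function name and the assumption that the output canvas starts at the origin are mine):

```python
import numpy as np

def warp_image(img, H, out_shape):
    """Inverse warping: for every output pixel, apply H^-1 to find the
    source pixel that maps onto it, then sample it (nearest neighbor).
    Output pixels whose preimage falls outside the source stay black."""
    Hinv = np.linalg.inv(H)
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    ones = np.ones(h_out * w_out)
    src = Hinv @ np.stack([xs.ravel(), ys.ravel(), ones])
    sx = np.round(src[0] / src[2]).astype(int)   # divide by w first
    sy = np.round(src[1] / src[2]).astype(int)
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out = np.zeros((h_out, w_out) + img.shape[2:], dtype=img.dtype)
    out[ys.ravel()[valid], xs.ravel()[valid]] = img[sy[valid], sx[valid]]
    return out
```

Going through the output pixels (rather than pushing source pixels forward) guarantees there are no holes in the warped image.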
For this portion, we turned tilted images into straight ones!
Finally, we can blend the images together! Yet, when we directly combine the images, a sharp edge appears at the seam, which is not ideal. To counter this, we implement a blending method using a mask. A sample mask is shown below.
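The masked blend can be sketched like this (a minimal version with a simple linear ramp over the overlap region; the function names and the horizontal-overlap assumption are mine):

```python
import numpy as np

def linear_ramp_mask(shape, start, stop):
    """Alpha mask that is 1 left of column `start`, 0 right of column
    `stop`, and falls off linearly in between (the overlap region)."""
    h, w = shape
    ramp = np.clip((stop - np.arange(w)) / float(stop - start), 0.0, 1.0)
    return np.tile(ramp, (h, 1))

def blend(left, right, mask):
    """Alpha-blend two already-aligned images with the given mask."""
    return mask * left + (1.0 - mask) * right
```

Because the mask transitions gradually instead of jumping from 1 to 0, the seam is smeared out over the overlap instead of showing up as a hard edge.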
We can also show other results of the blend!
We first have to detect corners in the image. This was doable through the function defined in the starter code (get_harris_corners).
To reproduce the results of the paper, I added a threshold condition in peak_local_max. This threshold is relative to the maximum corner value, and it is adjusted per image.
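The relative threshold can be expressed directly through scikit-image's `peak_local_max`, which takes a `threshold_rel` parameter measured against the maximum response (a sketch of the idea; the starter code's actual get_harris_corners may be organized differently):

```python
import numpy as np
from skimage.feature import corner_harris, peak_local_max

def harris_corners(gray, threshold_rel=0.05):
    """Harris response, then keep only local peaks whose value exceeds
    threshold_rel * max(response).  threshold_rel is tuned per image."""
    response = corner_harris(gray)
    return peak_local_max(response, min_distance=1,
                          threshold_rel=threshold_rel)
```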
We can see that there are far too many corners identified by the Harris corner detection. We will use ANMS (adaptive non-maximal suppression) to reduce the number of corners we keep.
The main logic behind this algorithm is that each point is assigned a suppression radius, and we keep only points that are the local maximum within their radius. The way we define the radius is provided below.
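The radius computation above can be sketched as follows (a minimal O(n²) version; the function name and default parameters are mine, with the robustness constant 0.9 taken from the paper):

```python
import numpy as np

def anms(coords, strengths, n_keep=500, c_robust=0.9):
    """Adaptive non-maximal suppression.  Each point's radius is its
    distance to the nearest point that is sufficiently stronger
    (c_robust * strength_j > strength_i); keeping the n_keep largest
    radii yields points that are strong AND well spread out."""
    coords = np.asarray(coords, float)
    strengths = np.asarray(strengths, float)
    radii = np.full(len(coords), np.inf)
    for i in range(len(coords)):
        stronger = c_robust * strengths > strengths[i]
        if stronger.any():
            d = np.linalg.norm(coords[stronger] - coords[i], axis=1)
            radii[i] = d.min()
    order = np.argsort(radii)[::-1]
    return order[:n_keep]          # indices of the kept points
```

The globally strongest corner has an infinite radius, so it always survives; weaker corners survive only if no notably stronger corner sits nearby.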
Then we can see that the results are stellar!
We can see that even with ANMS there are still too many points for us to use! The first step the paper takes is extracting features and matching them.
Feature Extraction:
1. For each interest point, take a 40x40 patch around it
2. Downsample each patch to an 8x8 patch
3. Normalize each patch (zero mean, unit variance), with the formula provided in lecture
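The three extraction steps can be sketched as one function (a simplified sketch: it subsamples by striding rather than blurring first, and assumes the interest point is at least 20 px from the border; the function name is mine):

```python
import numpy as np

def extract_descriptor(gray, y, x, patch=40, out=8):
    """40x40 window around (y, x), subsampled to 8x8 (every 5th pixel;
    the real pipeline would blur before downsampling), then bias/gain
    normalized to zero mean and unit standard deviation."""
    half = patch // 2
    window = gray[y - half:y + half, x - half:x + half]
    step = patch // out
    small = window[::step, ::step].astype(float)
    return (small - small.mean()) / (small.std() + 1e-8)
```

The normalization makes the descriptors invariant to brightness and contrast changes between the two photos.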
Feature Matching:
1. For each patch in the left image, compute the SSD to every patch in the right image
2. Sort the right-image patches by SSD and take the two nearest neighbors
3. If the ratio 1-NN/2-NN is below the threshold (which I set to 0.6), then it's a match and we keep it
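The ratio test above can be sketched as follows (function name mine; comparing `ssd1 < ratio * ssd2` instead of dividing avoids a division by zero for identical patches):

```python
import numpy as np

def match_features(desc_left, desc_right, ratio=0.6):
    """Lowe-style ratio test: accept a match only when the best SSD is
    much smaller than the second best (1-NN/2-NN < ratio)."""
    matches = []
    for i, d in enumerate(desc_left):          # d: one (8, 8) descriptor
        ssd = ((desc_right - d) ** 2).sum(axis=(1, 2))
        j1, j2 = np.argsort(ssd)[:2]           # two nearest neighbors
        if ssd[j1] < ratio * ssd[j2]:
            matches.append((i, j1))
    return matches
```

The intuition is that a correct match should be clearly better than its runner-up; if the two best candidates are similarly close, the match is ambiguous and we discard it.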
We can now see that we have eliminated a lot of points!
We can still see that there are some points that don't match each other! To make sure that each point corresponds to an actual point on the other image, we will use RANSAC. There are a few main steps to conduct RANSAC; note that I repeated the first four steps 10,000 times:
1. Randomly sample 4 pairs
2. Compute the Homography matrix using these 4 pairs
3. Apply this homography to all points in the left image, then compute the SSD between each warped point and its corresponding right-image point
4. If the SSD is below a threshold we keep the pair; these pairs are the inliers
5. Keep the homography matrix that produced the most inliers
6. Apply the final Homography matrix and only keep the points that have SSD lower than threshold
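The six steps above can be sketched as one self-contained loop (function names, iteration count, and thresholds here are illustrative):

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares homography (h33 fixed to 1) from >= 4 pairs."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b += [xp, yp]
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def warp_points(H, pts):
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=2000, thresh=1.0, seed=0):
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)    # 1. sample 4 pairs
        H = fit_homography(src[idx], dst[idx])          # 2. fit H to them
        err = ((warp_points(H, src) - dst) ** 2).sum(1) # 3. SSD per point
        inliers = err < thresh                          # 4. threshold
        if inliers.sum() > best.sum():                  # 5. keep the best
            best = inliers
    H = fit_homography(src[best], dst[best])            # 6. refit on inliers
    return H, best
```

Refitting on all inliers at the end gives a much more stable homography than the best 4-point fit alone.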
Finally, we can use our results to produce mosaics like in Part A, following the exact same process. Note that these images are cropped as well.
I learnt a lot from this project, especially seeing how correspondence points can be found automatically, and how good those automatic matches are!