Image Warping and Mosaicing

CS 294-26 • Fall 2020 • Project 5

Shubha Jagannatha


Overview

In this assignment, I create a mosaic from multiple images by warping the images so that matching keypoints align. In part A, I select these keypoints manually. In part B, I automate the process of finding matching features. I also completed two Bells and Whistles this time!


Part A




Part 1: Shoot and Digitize Pictures

Here are the images I used in this assignment. I shot in three different settings for my mosaics: outside, in my living room, and in my friend's living room.




Part 2: Recover the Homographies

For this portion of the project, I used a set of corresponding keypoints selected between two images (selection code taken from my Project 3) to recover the transformation between the two point sets. To calculate the homography matrix H in p' = Hp, I set up a linear system of equations and solved for the unknowns using least squares, with the system written in the form Ah = b. To improve the accuracy of the calculated homography, I made sure the system was overdetermined by providing more than the 4 point correspondences needed to determine the unknowns. Lastly, I divided all entries of H by the last (ninth) entry so that the overall scale is exactly 1.

Here's the Ah = b setup I used for this calculation.
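As a rough sketch, this setup can be coded as follows (this mirrors the form above, but fixes the ninth entry of H to 1 up front; variable names are illustrative, not my exact code):

import numpy as np

def compute_homography(pts1, pts2):
    """Least-squares homography H mapping pts1 -> pts2 (each an (N, 2) array, N >= 4)."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts1, pts2):
        # Two equations per correspondence, with the ninth entry of H fixed to 1.
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.asarray(A, dtype=float),
                            np.asarray(b, dtype=float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

With more than 4 correspondences, the 2N x 8 system is overdetermined and least squares finds the best-fit entries of H.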




Part 3: Warp the Images + Rectified Images

Here are some images with the planar surfaces rectified.
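Rectification uses the same machinery as mosaicing: I select the corners of a planar surface, map them to a rectangle, and inverse-warp the image with the recovered homography. Here is a rough sketch of the inverse-warp step, assuming SciPy's map_coordinates for bilinear interpolation (a sketch, not my exact implementation):

import numpy as np
from scipy.ndimage import map_coordinates

def inverse_warp(img, H, out_shape):
    """Fill an out_shape canvas by pulling pixels back from img through H^-1
    (H maps source (x, y) coordinates to output coordinates)."""
    if img.ndim == 2:
        img = img[..., None]
    H_inv = np.linalg.inv(H)
    rows, cols = out_shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    # Homogeneous output coordinates mapped back into the source image.
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(rows * cols)])
    src = H_inv @ coords
    src_x, src_y = src[0] / src[2], src[1] / src[2]
    out = np.zeros((rows, cols, img.shape[2]))
    for c in range(img.shape[2]):
        out[..., c] = map_coordinates(img[..., c], [src_y, src_x],
                                      order=1, cval=0).reshape(rows, cols)
    return out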




Part 4: Blend Images into a Mosaic

Here are my mosaics. First, the mosaics without any blending.





Next, here are the blended mosaics, created using the seam blending code from my Project 2. All source images were shown in Part 1.
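My final mosaics use the Project 2 seam blending code, but as a simpler illustration of blending the overlap region, here is a minimal sketch of distance-weighted alpha feathering (a different, simpler technique than the one I actually used; array names are illustrative):

import numpy as np
from scipy.ndimage import distance_transform_edt

def feather_blend(canvas1, canvas2):
    """Blend two aligned mosaic canvases (same shape, zeros where a canvas has
    no pixels) with alpha weights that fall off toward each image's boundary."""
    mask1 = canvas1.sum(axis=-1) > 0
    mask2 = canvas2.sum(axis=-1) > 0
    w1 = distance_transform_edt(mask1)   # distance to the nearest empty pixel
    w2 = distance_transform_edt(mask2)
    total = w1 + w2
    total[total == 0] = 1                # avoid dividing by zero outside both images
    return canvas1 * (w1 / total)[..., None] + canvas2 * (w2 / total)[..., None]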





Learnings from Part A

Overall, this was an interesting assignment. I learned the most about the different methods of combining images and their tradeoffs (Laplacian pyramid blending vs. varying the alpha of the image).



Part B


Detecting Corner Features in an Image




Here are all of the initially detected corner features in my images. I used the Harris detection code provided with this project.
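As a rough sketch, Harris detection along these lines can be written with scikit-image (this is illustrative and not necessarily identical to the provided starter code):

import numpy as np
from skimage.feature import corner_harris, peak_local_max

def get_harris_corners(im, min_distance=3):
    """Harris corner detection on a grayscale float image: return the (row, col)
    coordinates of local maxima of the corner response and their strengths."""
    response = corner_harris(im, method='eps', sigma=1)
    coords = peak_local_max(response, min_distance=min_distance)
    strengths = response[coords[:, 0], coords[:, 1]]
    return coords, strengths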



Implementing ANMS




Here are the same images showing the corners that remain after Adaptive Non-Maximal Suppression.
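As a sketch, the ANMS step from the paper can be implemented by computing, for each corner, the distance to the nearest corner that is sufficiently stronger, and keeping the corners with the largest such radii (the constants below are illustrative, not necessarily the values I tuned):

import numpy as np
from scipy.spatial.distance import cdist

def anms(coords, strengths, n_keep=500, c_robust=0.9):
    """Adaptive non-maximal suppression: each corner's suppression radius is the
    distance to the nearest corner that is sufficiently stronger; keeping the
    n_keep corners with the largest radii spreads the points out spatially."""
    dists = cdist(coords, coords)
    # Corner j only suppresses corner i if strength[i] < c_robust * strength[j].
    suppresses = c_robust * strengths[None, :] > strengths[:, None]
    dists[~suppresses] = np.inf
    radii = dists.min(axis=1)
    keep = np.argsort(-radii)[:n_keep]
    return coords[keep]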


Extracting a Feature Descriptor For Each Feature Point




Here are some of the feature descriptors extracted from my first image.
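A rough sketch of MOPS-style descriptor extraction as described in the paper, i.e. an axis-aligned 40x40 window downsampled to 8x8 and bias/gain normalized (the exact window size and anti-aliasing details here are illustrative):

import numpy as np
from skimage.transform import resize

def extract_descriptors(im, coords, window=40, patch=8):
    """MOPS-style axis-aligned descriptors: a window x window patch around each
    corner, downsampled (with anti-aliasing) to patch x patch and bias/gain
    normalized. Corners too close to the border are skipped."""
    half = window // 2
    descriptors, kept = [], []
    for r, c in coords:
        if r < half or c < half or r + half > im.shape[0] or c + half > im.shape[1]:
            continue
        big = im[r - half:r + half, c - half:c + half]
        small = resize(big, (patch, patch), anti_aliasing=True)
        small = (small - small.mean()) / (small.std() + 1e-8)
        descriptors.append(small.ravel())
        kept.append((r, c))
    return np.array(descriptors), np.array(kept)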



Matching Feature Descriptors Between Two Images




After matching the feature descriptors, these are the points that remain between the two images. As you can see, there are still some incorrect matches.
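As a rough sketch, the matching can be implemented with nearest-neighbor descriptor distances and the Lowe-style ratio test described in the paper (the 0.6 threshold here is illustrative):

import numpy as np
from scipy.spatial.distance import cdist

def match_features(desc1, desc2, ratio=0.6):
    """Nearest-neighbor matching with Lowe's ratio test: keep a match only if
    the best candidate is much closer than the second-best candidate."""
    dists = cdist(desc1, desc2)
    matches = []
    for i in range(dists.shape[0]):
        order = np.argsort(dists[i])
        best, second = order[0], order[1]
        if dists[i, best] < ratio * dists[i, second]:
            matches.append((i, best))
    return matches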



Use RANSAC to Compute the Homography




Lastly, I used RANSAC to keep only the best correspondences from the matched features. I allowed only a very small error threshold, so relatively few matches survived.
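Here is a minimal sketch of 4-point RANSAC, reusing the compute_homography sketch from Part 2 and assuming (N, 2) NumPy point arrays (the iteration count and error threshold eps are illustrative):

import numpy as np

def ransac_homography(pts1, pts2, n_iters=1000, eps=1.0):
    """4-point RANSAC: repeatedly fit a homography to a random sample of four
    correspondences, keep the model with the most inliers (reprojection error
    under eps pixels), then refit on all inliers of the best model."""
    best_inliers = np.array([], dtype=int)
    pts1_h = np.hstack([pts1, np.ones((len(pts1), 1))])   # homogeneous coordinates
    for _ in range(n_iters):
        sample = np.random.choice(len(pts1), 4, replace=False)
        H = compute_homography(pts1[sample], pts2[sample])  # sketch from Part 2
        proj = pts1_h @ H.T
        errors = np.linalg.norm(proj[:, :2] / proj[:, 2:3] - pts2, axis=1)
        inliers = np.flatnonzero(errors < eps)
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return compute_homography(pts1[best_inliers], pts2[best_inliers]), best_inliers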



Produce Mosaics




Here are the mosaics stitched with manually selected features on the left and automatically selected features on the right.

Learnings from Part B




In this part of the project, I really enjoyed picking apart the paper to work out how to implement the algorithms needed for automatic stitching. I also really enjoyed learning how effective RANSAC can be!

Bells and Whistles


Rotation Invariant Feature Descriptors




To implement rotation-invariant feature descriptors, I determined the angle of the dominant gradient in each feature patch and rotated the descriptors so that their dominant gradient directions matched before checking whether two features match.

Here are two non-matching feature descriptors before (top) and after (bottom) being rotated (and cropped) so that they have the same dominant gradient direction. This is just to demonstrate the rotation process. I found the dominant gradient by computing the gradient angle at each pixel (as in Project 2) and averaging the angles over the entire patch.
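A rough sketch of this rotation step (the gradient filter and the sign convention of the rotation are illustrative; only consistency across patches matters):

import numpy as np
from scipy.ndimage import rotate, sobel

def rotate_to_canonical(patch):
    """Rotate a descriptor patch so its dominant gradient direction is the same
    for every patch, keeping the original size (corners are implicitly cropped)."""
    gy, gx = sobel(patch, axis=0), sobel(patch, axis=1)
    # Average the per-pixel gradient angles over the whole patch, as described above.
    angle = np.degrees(np.arctan2(gy, gx).mean())
    return rotate(patch, -angle, reshape=False, order=1)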



Additionally, here are the results when the rotation-invariant feature descriptors are used.



Panorama Recognition




Given a set of unordered images, some of which can form a panorama, I wrote code to automatically find the best image pairings to build the panorama from. I do this by feeding in a folder containing all of the images (unordered, as pictured in the first image below) and using the first image as my image 1. For each of the other images, I run the whole pipeline of finding Harris corners, ANMS, feature matching, and RANSAC. Whichever image has the largest number of final points after RANSAC is taken to be the best match for the first image. I note the pairing and repeat the same process for the remaining images.
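A rough sketch of this pairing loop (count_ransac_inliers is hypothetical shorthand for the full Harris -> ANMS -> matching -> RANSAC pipeline above, not a real library call):

import numpy as np

def find_panorama_pairs(images):
    """Greedy pairing: take the first unpaired image, run the full automatic
    pipeline against every other unpaired image, and pair it with whichever
    image yields the most points after RANSAC; repeat with what is left."""
    remaining = list(range(len(images)))
    pairings = []
    while len(remaining) > 1:
        anchor = remaining.pop(0)
        # count_ransac_inliers is hypothetical shorthand for the pipeline above.
        counts = [count_ransac_inliers(images[anchor], images[j]) for j in remaining]
        best = remaining[int(np.argmax(counts))]
        pairings.append((anchor, best))
        remaining.remove(best)
    return pairings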

Here is the unordered image folder I fed into my code:




And here are the ordered pairings (shown vertically) that my code produced:



Lastly, here are the automatically computed panoramas: