CS 194-26 Project 4B

Detecting Corner Features in an Image

The features we are interested in are "corners" in the image, or Harris points, which we find using the Harris Interest Point Detector. This returns far too many points, so we use Adaptive Non-Maximal Suppression (ANMS) to keep only the points that have the strongest corner responses and are well spread out across the image.
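The ANMS step can be sketched roughly as follows. This is a minimal sketch, assuming the Harris points and their corner strengths have already been computed; the function name `anms` and the robustness constant `c_robust` are my own choices, not from the original writeup. Each point gets a suppression radius: the distance to the nearest point that is sufficiently stronger than it. Keeping the points with the largest radii yields points that are both strong and spatially spread out.

```python
import numpy as np

def anms(coords, strengths, n_keep=500, c_robust=0.9):
    """Adaptive Non-Maximal Suppression (sketch).

    coords:    (n, 2) array of Harris point coordinates
    strengths: (n,) array of corner strengths
    Returns the n_keep points with the largest suppression radii.
    """
    n = len(coords)
    radii = np.full(n, np.inf)
    for i in range(n):
        # points whose (robustified) strength dominates point i
        stronger = strengths * c_robust > strengths[i]
        if stronger.any():
            d = np.linalg.norm(coords[stronger] - coords[i], axis=1)
            radii[i] = d.min()  # distance to nearest dominating point
    keep = np.argsort(-radii)[:n_keep]  # largest radii first
    return coords[keep]
```

The strongest points are dominated by no one, so their radius stays infinite and they always survive; weaker points survive only if nothing stronger is nearby.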

ANMS on the two images I'll be using for the rest of the project

Feature Descriptor Extraction

Now for each ANMS point, we need to find its feature descriptor. To do so, we take a grayscale 40x40 patch around each point, blur it with a Gaussian filter, downsample it to an 8x8 patch, and then normalize it.
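The patch pipeline above can be sketched like this. It is a rough sketch under my own assumptions: the function name, the blur sigma, and the exact subsampling offsets are illustrative choices, not necessarily what the original code used.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_descriptor(gray, y, x, patch=40, out=8):
    """MOPS-style descriptor sketch: 40x40 patch -> blur -> 8x8 -> normalize."""
    half = patch // 2
    p = gray[y - half:y + half, x - half:x + half].astype(float)
    # blur before downsampling to avoid aliasing (sigma is a guess)
    p = gaussian_filter(p, sigma=patch / out / 2)
    step = patch // out
    d = p[step // 2::step, step // 2::step][:out, :out]  # subsample to 8x8
    # bias/gain normalization: zero mean, unit standard deviation
    d = (d - d.mean()) / (d.std() + 1e-8)
    return d.ravel()
```

The normalization makes descriptors invariant to overall brightness and contrast changes between the two images.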

Feature Descriptor Matching

Then, we compare the feature descriptors of one image against the other image's and see if we can find a good match in similarity. Thresholding follows Lowe's ratio test: if the distance from a patch to its best match divided by the distance to its second-best match is not below a threshold, the match is discarded. This basically means that matches must be pretty darn good compared to the runner-up or they won't be considered. At first, I used a threshold of 0.66, referencing the graph in the MOPS paper, but then found that 0.5 had better results in removing incorrect points on the bystanders in my image.
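The ratio test can be sketched as below. This is a minimal brute-force version under my own naming; a real implementation would likely use a KD-tree or vectorized distance computation for speed.

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.5):
    """Lowe ratio test (sketch): keep a match only if the best match is
    much closer than the second best.  desc1, desc2: (n, d) arrays."""
    # pairwise Euclidean distances between every descriptor pair
    dists = np.sqrt(((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(-1))
    matches = []
    for i, row in enumerate(dists):
        order = np.argsort(row)
        best, second = row[order[0]], row[order[1]]
        if best / (second + 1e-12) < ratio:  # accept only decisive winners
            matches.append((i, int(order[0])))
    return matches
```

Lowering `ratio` from 0.66 to 0.5 makes the test stricter, which is why it pruned the spurious matches on the bystanders.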

What I've Learned

This project is super interesting and gives great insight into how more advanced panoramic operations are performed. It is very exciting to understand the underlying operations that match features between images, and it makes me appreciate the speed and accuracy of the panorama mode on phones much more.