CS194-26: Project 4B - Sean Chen

Harris Corner Detector

I applied the Harris corner detector provided with the project. I found it best to keep only the strongest responses, so the top 500 of the roughly 300,000 detected points were kept for each image. Below are the points overlaid on the images.
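The top-k selection step can be sketched as follows; `h` and `coords` are illustrative names for a Harris response map and its detected points, not the course starter code's actual variables:

```python
import numpy as np

def top_k_corners(h, coords, k=500):
    """Keep the k interest points with the strongest Harris response.

    h      : 2D Harris response map.
    coords : (N, 2) integer array of (row, col) interest points.
    """
    strengths = h[coords[:, 0], coords[:, 1]]
    order = np.argsort(strengths)[::-1]  # strongest first
    return coords[order[:k]]
```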

Adaptive Non-Maximal Suppression

ANMS was applied as described in the MOPS paper, with the suggested robustness constant c = 0.9: a point's suppression radius is its distance to the nearest point j whose Harris strength, scaled by c, still exceeds the query point's strength. Of the 500 Harris points, the 100 with the largest suppression radii were kept. This count was chosen fairly large to ensure that the key features were mostly present.
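A minimal sketch of this suppression rule, assuming `coords` is an (N, 2) array and `strengths` holds each point's Harris response (function and argument names are my own, not the course code):

```python
import numpy as np

def anms(coords, strengths, n_keep=100, c=0.9):
    """Adaptive non-maximal suppression (MOPS-style sketch).

    Point i's radius is the distance to the nearest point j that
    dominates it, i.e. strengths[i] < c * strengths[j]; the n_keep
    points with the largest radii are kept.
    """
    diff = coords[:, None, :].astype(float) - coords[None, :, :]
    d2 = (diff ** 2).sum(-1)
    dominated = strengths[:, None] < c * strengths[None, :]
    d2 = np.where(dominated, d2, np.inf)
    radii = np.sqrt(d2.min(axis=1))      # the global maximum gets inf
    keep = np.argsort(radii)[::-1][:n_keep]
    return coords[keep]
```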

Feature Definition and Matching

As directed in the MOPS paper, each feature was described by taking a 40x40 window around the feature point, applying a Gaussian blur, and down-sampling to 8x8. From testing, the grayscale version of the image sufficed. First-stage matching was done exclusively with Lowe's ratio test: matches were computed both from image 1 into image 2 and from image 2 into image 1, the ratio of nearest- to second-nearest-neighbor distance was stored for each match, and matches were cut off by that ratio. The intersection of the surviving matches from the two directions was taken as the first-stage matching solution. Below are the points after Lowe's ratio matching.
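The two-directional matching with Lowe's ratio test can be sketched as below, assuming `desc1` and `desc2` are (N, 64) arrays of flattened 8x8 descriptors (these helpers are illustrative, not the actual project code; note the multiply form of the ratio test avoids a divide-by-zero):

```python
import numpy as np

def lowe_matches(desc1, desc2, ratio=0.8):
    """One-directional matches from desc1 into desc2 that pass
    Lowe's ratio test (nearest SSD < ratio * second-nearest SSD)."""
    d = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d, axis=1)
    rows = np.arange(len(desc1))
    best, second = d[rows, nn[:, 0]], d[rows, nn[:, 1]]
    ok = best < ratio * second
    return {(int(i), int(nn[i, 0])) for i in rows[ok]}

def mutual_lowe_matches(desc1, desc2, ratio=0.8):
    """Intersection of image1->image2 and image2->image1 matches."""
    fwd = lowe_matches(desc1, desc2, ratio)
    bwd = {(i, j) for (j, i) in lowe_matches(desc2, desc1, ratio)}
    return fwd & bwd
```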

RANSAC

RANSAC was run on the first-stage feature matches to remove remaining outliers. In this case, RANSAC changed little, since the first-stage results were already good.
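A hedged sketch of 4-point RANSAC over homographies, with the model fit done by the direct linear transform (DLT) via SVD; names, thresholds, and iteration counts here are assumptions, not the project's actual values:

```python
import numpy as np

def fit_homography(src, dst):
    """DLT: solve Ah = 0 for the 3x3 homography mapping src -> dst
    (both (N, 2) arrays, N >= 4), via the SVD null vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, n_iter=1000, thresh=2.0, seed=0):
    """Repeatedly fit a homography to 4 random correspondences,
    keep the largest inlier set, and refit on all inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), bool)
    ones = np.ones((len(src), 1))
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        p = np.hstack([src, ones]) @ H.T
        proj = p[:, :2] / p[:, 2:3]
        inliers = np.linalg.norm(proj - dst, axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return fit_homography(src[best], dst[best]), best
```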

Mosaic Blending

The homography was recomputed from the inlier correspondences returned by RANSAC, and the mosaic was assembled in the same manner as in project 4A. Below are comparisons of automated and manual mosaics.
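The compositing step can be sketched as a simple averaged blend; this is a hypothetical helper that assumes each warped image arrives on a shared canvas with a boolean validity mask, which is one plausible reading of the project 4A setup:

```python
import numpy as np

def average_blend(im1, im2, mask1, mask2):
    """Composite two warped images on a shared canvas.

    im1, im2     : (H, W, 3) float images, zero outside their masks.
    mask1, mask2 : (H, W) boolean validity masks.
    Overlapping pixels are averaged; exclusive pixels pass through.
    """
    w = mask1.astype(float) + mask2.astype(float)
    total = im1 * mask1[..., None] + im2 * mask2[..., None]
    return total / np.maximum(w, 1.0)[..., None]
```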

RANSAC · simple average

What I Learned

It was interesting to see these heuristic and manually tuned methods produce such good results. It gives some intuition about the structure that neural networks learn when they do the same task.