FEATURE MATCHING for AUTOSTITCHING

CS194-26, Project 6b

Inan Husain

Detecting corners in images

This part was just done using the harris.py starter code.
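The starter code itself isn't reproduced here, but the standard Harris response it computes can be sketched with NumPy alone: image gradients, a smoothed structure tensor, and the det − k·trace² corner score. The box-filter smoothing and k = 0.05 are illustrative choices, not what harris.py necessarily uses.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner strength (numpy-only sketch; the project used the
    provided harris.py, whose internals may differ)."""
    # Image gradients via central differences.
    iy, ix = np.gradient(img.astype(float))

    def box(a):
        # Simple 3x3 box window standing in for a Gaussian window.
        out = np.zeros_like(a)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                out += np.roll(np.roll(a, dr, 0), dc, 1)
        return out / 9.0

    # Smoothed structure-tensor entries.
    sxx, syy, sxy = box(ix * ix), box(iy * iy), box(ix * iy)
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    return det - k * trace ** 2
```

Pixels near a corner score high, edges score negative, and flat regions score near zero, which is what makes thresholding this map useful.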

Adaptive Non-Maximal Suppression (ANMS)

This part was done using the h values from the first part, keeping points whose h values are higher than those of their surrounding neighbors. This narrows the Harris points down to a smaller set of interesting, well-separated corner points.
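The usual formulation of this idea assigns each corner a suppression radius (distance to the nearest sufficiently stronger corner) and keeps the points with the largest radii. A minimal sketch, assuming `coords` is an (N, 2) array of corner positions and `h` their Harris strengths from the previous step; the robustness factor 0.9 follows the MOPS paper's convention:

```python
import numpy as np

def anms(coords, h, n_keep=500, c_robust=0.9):
    """Adaptive non-maximal suppression over Harris corners.

    For each corner, find the (squared) distance to the nearest corner
    whose down-weighted strength still dominates it, then keep the
    n_keep corners with the largest such radii.
    """
    n = len(coords)
    radii = np.full(n, np.inf)
    for i in range(n):
        stronger = c_robust * h > h[i]  # corners that dominate corner i
        if stronger.any():
            d2 = np.sum((coords[stronger] - coords[i]) ** 2, axis=1)
            radii[i] = d2.min()  # squared distance is fine for ranking
    keep = np.argsort(-radii)[:n_keep]
    return coords[keep]
```

The strongest corner gets an infinite radius, so it always survives; the rest survive only if no much-stronger corner sits nearby, which is what spreads the points out spatially.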

Feature extraction

This part was done by going through each of the points output by ANMS, taking the surrounding 40 x 40 window, Gaussian-blurring it, and downsampling it to an 8 x 8 descriptor. The window was simply axis-aligned, without any rotation.
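A sketch of that descriptor step, with mean-pooling standing in for the Gaussian low-pass before subsampling, plus the usual bias/gain normalization; `img` is assumed grayscale and `points` are (row, col) pairs from ANMS:

```python
import numpy as np

def extract_descriptors(img, points, window=40, patch=8):
    """Axis-aligned 8x8 descriptors from 40x40 windows (a sketch of the
    approach described above; mean pooling approximates blur+subsample)."""
    half = window // 2
    step = window // patch  # 5 input pixels per descriptor pixel
    descs = []
    for r, c in points:
        win = img[r - half:r + half, c - half:c + half].astype(float)
        if win.shape != (window, window):
            continue  # skip points too close to the image border
        # Low-pass + subsample via 5x5 mean pooling down to 8x8.
        small = win.reshape(patch, step, patch, step).mean(axis=(1, 3))
        # Bias/gain normalize so matching ignores brightness and contrast.
        small = (small - small.mean()) / (small.std() + 1e-8)
        descs.append(small.ravel())
    return np.array(descs)
```

Normalizing each descriptor to zero mean and unit variance is what lets the squared-distance comparison in the next part use a fixed threshold across images.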

Feature Matching

This part was done by comparing each feature extracted from the first image against all of the features from the second, and pairing those with the lowest squared distance between them, provided it fell under some chosen threshold (0.4 worked well for me).
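That nearest-neighbor-under-a-threshold rule can be sketched directly; `desc1` and `desc2` are the descriptor arrays from the two images, and the 0.4 default mirrors the write-up's value (it would need retuning for unnormalized descriptors):

```python
import numpy as np

def match_features(desc1, desc2, threshold=0.4):
    """Pair each descriptor in desc1 with its nearest neighbor in desc2
    by squared distance, keeping only matches under the threshold."""
    # Pairwise squared distances, shape (N1, N2), via broadcasting.
    d2 = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(axis=2)
    matches = []
    for i in range(len(desc1)):
        j = int(np.argmin(d2[i]))  # nearest neighbor in image 2
        if d2[i, j] < threshold:
            matches.append((i, j))
    return matches
```

A common refinement (not used above) is Lowe's ratio test, thresholding the ratio of best to second-best distance instead of the raw distance, which suppresses ambiguous matches in repetitive texture.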

RANSAC

This part was done by taking 4 random paired points from the last part, computing a homography H from them, and counting how many paired points could be counted as inliers under that H. This process was repeated several thousand times, and the inliers from the best H were used to recompute the final H used as the warping matrix. Inlier pairs from RANSAC are below, and the image mosaics come after.

Image Mosaics

This part works the same way as before, except that I averaged the inputs wherever an output pixel mapped into both input images. I suspect my code is a little buggy, since the results aren't great.
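The averaging rule for the overlap can be sketched as below, assuming both images have already been warped into the shared output frame along with validity masks (grayscale for simplicity):

```python
import numpy as np

def blend_average(warped1, warped2, mask1, mask2):
    """Average the two warped images where both are valid, take
    whichever one is valid elsewhere, and leave the rest at zero."""
    count = mask1.astype(float) + mask2.astype(float)
    acc = warped1 * mask1 + warped2 * mask2
    out = np.zeros_like(warped1, dtype=float)
    nz = count > 0
    out[nz] = acc[nz] / count[nz]  # divides by 1 or 2 per pixel
    return out
```

A hard 50/50 average tends to leave visible seams when exposures differ, so a distance-weighted (feathered) blend is the usual next step; that may be part of why the results here look off.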

Mosaic 1: Automatic / Manual
Mosaic 2: Automatic / Manual
Mosaic 3: Automatic / Manual

What I learned

It's interesting to me how the automatic stitching managed to produce results that were better (second stitch), worse (first stitch), and the same (third stitch, which failed in the same exact way both times).