Project 4B: FEATURE MATCHING for AUTOSTITCHING

CS194-26 Intro to Computer Vision and Computational Photography | Jingyi Zhou

Overview

This is the second part of the panorama stitching project, in which I built a system that automatically stitches images into a mosaic.

Detecting corner features in an image

Here I used the Harris corner detector covered in lecture. The idea is that if a small window is shifted in different directions, the average change in intensity will be large in all directions when the window contains a corner. Below are the points retrieved using the provided get_harris_corners code:
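The window-shifting intuition above can be sketched as follows. This is a minimal NumPy version of the Harris response, not the provided get_harris_corners (which presumably wraps a library implementation); the box-filter window, alpha = 0.05, and 3x3 window size are my assumptions:

```python
import numpy as np

def harris_response(im, alpha=0.05, k=3):
    """Harris corner response for a grayscale image (minimal sketch).

    Builds the second-moment matrix M from smoothed gradient products
    and scores each pixel with det(M) - alpha * trace(M)^2.
    """
    # Image gradients (np.gradient returns d/dy, d/dx)
    Iy, Ix = np.gradient(im.astype(float))

    def box_smooth(a):
        # Naive k-by-k box filter via edge padding and shifted sums
        pad = k // 2
        ap = np.pad(a, pad, mode="edge")
        out = np.zeros_like(a)
        for dy in range(k):
            for dx in range(k):
                out += ap[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out / (k * k)

    # Second-moment matrix entries, averaged over the window
    Sxx = box_smooth(Ix * Ix)
    Syy = box_smooth(Iy * Iy)
    Sxy = box_smooth(Ix * Iy)
    # Large positive at corners, negative along edges, ~0 in flat regions
    return Sxx * Syy - Sxy ** 2 - alpha * (Sxx + Syy) ** 2
```

Corner points are then the local maxima of this response above some threshold.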

To distribute the points more evenly across the image, I applied Adaptive Non-Maximal Suppression (ANMS):
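ANMS keeps corners whose suppression radius (distance to the nearest sufficiently stronger corner) is largest, which spreads the survivors spatially. A sketch under assumed defaults (robustness constant 0.9 and 500 kept points, as in the MOPS paper; the write-up's actual values may differ):

```python
import numpy as np

def anms(coords, strengths, n_keep=500, c_robust=0.9):
    """Adaptive Non-Maximal Suppression (sketch).

    For each corner i, the suppression radius is the distance to the
    nearest corner j that dominates it (c_robust * strength[j] >
    strength[i]). Keeping the n_keep largest radii yields a spatially
    even subset of strong corners.
    """
    coords = np.asarray(coords, dtype=float)      # (N, 2) positions
    strengths = np.asarray(strengths, dtype=float)
    radii = np.full(len(coords), np.inf)          # global max keeps inf
    for i in range(len(coords)):
        stronger = c_robust * strengths > strengths[i]
        if stronger.any():
            d = np.linalg.norm(coords[stronger] - coords[i], axis=1)
            radii[i] = d.min()
    keep = np.argsort(-radii)[:n_keep]
    return coords[keep]
```

The O(N^2) loop is fine for a few thousand Harris points; a k-d tree would scale it further.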

Extracting a Feature Descriptor for each feature point

Below are the feature descriptors sampled from my lewis, evans, and workstation images:
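The descriptor extraction step can be sketched like this, assuming the standard MOPS-style recipe from the project spec (an 8x8 patch sampled from a 40x40 window with a stride of 5, then bias/gain normalized); the exact sampling and blurring in my implementation may differ:

```python
import numpy as np

def extract_descriptor(im, y, x, spacing=5, size=8):
    """MOPS-style axis-aligned feature descriptor (sketch).

    Samples an 8x8 patch from a 40x40 window around the corner with a
    stride of 5 (coarse sampling stands in for pre-blurring here), then
    normalizes to zero mean and unit variance so matching is invariant
    to affine intensity changes (bias and gain).
    """
    half = spacing * size // 2
    patch = im[y - half : y + half : spacing,
               x - half : x + half : spacing].astype(float)
    return (patch - patch.mean()) / (patch.std() + 1e-8)
```

The normalization is what lets descriptors from differently exposed photos match by plain SSD.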

Matching these feature descriptors between two images

Here I implemented the feature matcher by computing simple SSDs between descriptors and applying a threshold of 0.4.
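A sketch of this matcher, assuming the 0.4 threshold is applied to the ratio of best to second-best SSD (Lowe's ratio test, as recommended in the project spec) rather than to the raw SSD:

```python
import numpy as np

def match_features(desc1, desc2, ratio_thresh=0.4):
    """Match flattened descriptors by SSD with a ratio test (sketch).

    desc1: (N, D) array, desc2: (M, D) array.
    Returns a list of (i, j) index pairs: descriptor i in image 1
    matched to descriptor j in image 2.
    """
    # Pairwise SSD matrix of shape (N, M) via broadcasting
    ssd = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(axis=2)
    matches = []
    for i in range(len(desc1)):
        order = np.argsort(ssd[i])
        best, second = order[0], order[1]
        # Accept only if the best match is much better than the runner-up
        if ssd[i, best] < ratio_thresh * ssd[i, second]:
            matches.append((i, best))
    return matches
```

The ratio test discards ambiguous features (e.g. repeated windows on a building facade) whose two nearest neighbors look equally good.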

Use a robust method (RANSAC) to compute a homography

Here I implemented the RANSAC algorithm to find the best homography:
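The RANSAC loop can be sketched as below: repeatedly fit a homography to four random correspondences, count inliers under a reprojection-error threshold, and refit on the largest inlier set. The iteration count and 2-pixel threshold here are assumptions, not necessarily what my implementation used:

```python
import numpy as np

def compute_homography(src, dst):
    """Direct linear transform: H mapping src (x, y) -> dst (u, v), both (N, 2)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null vector (last row of Vt) minimizes |Ah| subject to |h| = 1
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, n_iters=1000, thresh=2.0, seed=0):
    """RANSAC for a homography (sketch)."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = compute_homography(src[idx], dst[idx])
        # Project all src points and measure reprojection error
        pts = np.column_stack([src, np.ones(len(src))]) @ H.T
        proj = pts[:, :2] / pts[:, 2:3]
        err = np.linalg.norm(proj - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers of the best model for a least-squares estimate
    return compute_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

Because four exact correspondences determine a homography, any all-inlier sample recovers (nearly) the true transform, so a few hundred iterations suffice at moderate outlier rates.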

Proceed as in the first part to produce a mosaic

Below are the resulting panoramas, stitched automatically (left) and manually (right):
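Once H is recovered, the warping step from the first part can be sketched as an inverse warp: map every output pixel back through H^-1 and sample the source. This minimal version uses nearest-neighbor sampling and no blending; the actual mosaics used interpolation and blending in the overlap:

```python
import numpy as np

def warp_image(im, H, out_shape):
    """Inverse-warp a grayscale image into an output canvas (sketch).

    H maps source (x, y) coordinates to output coordinates; sampling
    backwards through H^-1 avoids holes in the result.
    """
    out_h, out_w = out_shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    # Map every output pixel back into source coordinates
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)], axis=1) @ Hinv.T
    sx = pts[:, 0] / pts[:, 2]
    sy = pts[:, 1] / pts[:, 2]
    sxi, syi = np.round(sx).astype(int), np.round(sy).astype(int)
    # Keep only pixels that land inside the source image
    valid = (sxi >= 0) & (sxi < im.shape[1]) & (syi >= 0) & (syi < im.shape[0])
    out = np.zeros(out_shape)
    out[ys.ravel()[valid], xs.ravel()[valid]] = im[syi[valid], sxi[valid]]
    return out
```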

What I've learned

Automatic alignment saved me a great deal of pain and made the panorama stitching process far more scalable in practice. Getting the array indexing right (especially with the provided Harris function having x and y flipped) took me a long time, but in the end, implementing a research paper was a very enjoyable learning experience!