Project 4B: Feature Matching for Autostitching

Justin Chen

Overview

The primary goal of this project is to explore how we can exploit image transformations to warp between different perspectives of the same object/scene. Starting with separate images of the same scene taken from different viewing directions, we can use a combination of feature detection, selection, and matching methods to automatically stitch a panorama.

Shooting the Pictures

Before we can do anything, we need pictures that are relatively consistent in lighting, content, and camera position (although the viewing directions vary). Here are the raw pictures I used for the panoramas.

Desk
Desk1
Desk2
Desk3
Kitchen
Kitchen1
Kitchen2
Kitchen3
Top Dog
TopDog1
TopDog2
TopDog3

Detecting Corner Features in an Image

For this part, I took the starter Harris corner detection code and applied it to my input images to get candidate corners/features to use for matching.
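
For reference, here is a minimal sketch of this step in Python using scikit-image, assuming a single-channel float image im; the parameters in the provided starter code may differ.

    from skimage.feature import corner_harris, peak_local_max

    def harris_corners(im, min_distance=2, threshold_rel=0.01):
        # im is assumed to be a grayscale float image in [0, 1].
        # Compute the Harris corner strength map, then take its local maxima
        # as candidate corner locations (rows, cols).
        h = corner_harris(im, method='eps', sigma=1)
        coords = peak_local_max(h, min_distance=min_distance,
                                threshold_rel=threshold_rel)
        return h, coords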

Desk
Desk1Corners
Desk2Corners
Desk3Corners
Kitchen
Kitchen1Corners
Kitchen2Corners
Kitchen3Corners
Top Dog
TopDog1Corners
TopDog2Corners
TopDog3Corners

Adaptive Non-Maximal Suppression

This step discards many of the unnecessary and unhelpful corners before feature extraction and matching: each corner is assigned a suppression radius equal to the distance to its nearest significantly stronger neighbor, and only the corners with the largest radii are kept, so the surviving points are both strong and evenly spread across the image.
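
Here is a minimal sketch of that suppression rule, assuming the coords and Harris response map h from the previous step; n_keep and the robustness constant c_robust are illustrative values, not necessarily the ones I used.

    import numpy as np

    def anms(coords, h, n_keep=500, c_robust=0.9):
        # Strength of each detected corner.
        strengths = h[coords[:, 0], coords[:, 1]]
        # Pairwise distances between all corners.
        diff = coords[:, None, :].astype(float) - coords[None, :, :].astype(float)
        dists = np.sqrt((diff ** 2).sum(axis=-1))
        # Only corners that are significantly stronger can suppress a given corner.
        stronger = strengths[None, :] * c_robust > strengths[:, None]
        dists[~stronger] = np.inf
        # Suppression radius: distance to the nearest dominating corner.
        radii = dists.min(axis=1)
        # Keep the corners with the largest radii (strong and well spread out).
        keep = np.argsort(-radii)[:n_keep]
        return coords[keep]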

Desk
Desk1ANMS
Desk2ANMS
Desk3ANMS
Kitchen
Kitchen1ANMS
Kitchen2ANMS
Kitchen3ANMS
Top Dog
TopDog1ANMS
TopDog2ANMS
TopDog3ANMS

Feature Extraction & Matching

To extract features, we use normalized 8 x 8 downsampled patches as feature vectors. Since we want a 5 pixel spacing and 8 x 8 windows, we take 40 x 40 patches around each corner, sample every 5th pixel, and normalize each patch for bias and gain. To match, we compute the Euclidean distance between every pair of feature vectors and, for each feature, compare its nearest and second-nearest neighbors in the other image: if the ratio of their distances is below a threshold epsilon, we accept the nearest neighbor as a match.
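
Below is a rough sketch of both the descriptor extraction and the ratio-test matching under these assumptions (axis-aligned patches, bias/gain normalization); the function names and the default threshold are illustrative.

    import numpy as np

    def extract_descriptors(im, coords, spacing=5, size=8):
        # Sample an 8x8 patch with 5-pixel spacing (a 40x40 window) around
        # each corner and normalize it for bias/gain (zero mean, unit variance).
        half = spacing * size // 2
        descs, kept = [], []
        for r, c in coords:
            if r - half < 0 or c - half < 0 or r + half > im.shape[0] or c + half > im.shape[1]:
                continue  # skip corners whose window falls outside the image
            patch = im[r - half:r + half:spacing, c - half:c + half:spacing]
            patch = (patch - patch.mean()) / (patch.std() + 1e-8)
            descs.append(patch.ravel())
            kept.append((r, c))
        return np.array(descs), np.array(kept)

    def match_features(desc1, desc2, epsilon=0.6):
        # Accept a match only when the nearest neighbor is clearly better
        # than the second nearest (ratio test on the top two candidates).
        dists = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
        matches = []
        for i in range(len(desc1)):
            nn1, nn2 = np.argsort(dists[i])[:2]
            if dists[i, nn1] / dists[i, nn2] < epsilon:
                matches.append((i, nn1))
        return matches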

Desk
Desk1MatchL
Desk2MatchL
Desk2MatchR
Desk3MatchR
Kitchen
Kitchen1MatchL
Kitchen2MatchL
Kitchen2MatchR
Kitchen3MatchR
Top Dog
TopDog1MatchL
TopDog2MatchL
TopDog2MatchR
TopDog3MatchR

Warping and Blending

Now that we have the matching points for each pair of images, we use RANSAC to robustly estimate a homography between them while rejecting outlier matches, and then warp and blend in the same manner as in Proj4A to create the final panoramas.
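
Here is a sketch of the RANSAC loop used to pick a robust homography from the matches; compute_homography is assumed to be the least-squares (four-or-more point) solver from Proj4A, and its name and point ordering are assumptions here.

    import numpy as np

    def ransac_homography(pts1, pts2, n_iters=1000, eps=2.0):
        # pts1, pts2: (N, 2) arrays of matched (x, y) points.
        # compute_homography is assumed to map points in image 1 to image 2.
        best_inliers = np.array([], dtype=int)
        for _ in range(n_iters):
            # Fit an exact homography to a random minimal sample of 4 matches.
            sample = np.random.choice(len(pts1), 4, replace=False)
            H = compute_homography(pts1[sample], pts2[sample])
            # Project all points from image 1 and measure the reprojection error.
            ones = np.ones((len(pts1), 1))
            proj = (H @ np.hstack([pts1, ones]).T).T
            proj = proj[:, :2] / proj[:, 2:3]
            errors = np.linalg.norm(proj - pts2, axis=1)
            inliers = np.where(errors < eps)[0]
            if len(inliers) > len(best_inliers):
                best_inliers = inliers
        # Refit on all inliers of the best model for the final homography.
        return compute_homography(pts1[best_inliers], pts2[best_inliers]), best_inliers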

Desk
DeskL
DeskM
DeskR
DeskPano
Desk Auto-Stitched Panorama
Kitchen
KitchenL
KitchenM
KitchenR
KitchenPano
Kitchen Auto-Stitched Panorama
Top Dog
TopDogL
TopDogM
TopDogR
TopDogPano
Top Dog Auto-Stitched Panorama

What I Learned

I think that this project was very worthwhile and taught me a lot about the interpretation and manipulation of features in the context of images. Through methods such as Harris corner detection and RANSAC, we were able to systematically detect, extract, and match features across images. One thing that I struggled with during this project was tuning the parameters, specifically the different threshold values for feature matching and RANSAC. I discovered that the quality and consistency of the photos matters a lot for this process; my desk panorama, for example, was built from poorly taken and inconsistent pictures, which made finding the right matching points a painful guess-and-check tuning experience.