CS194-26 Proj6: Stitching

Brian Aronowitz: 3032201719, cs194-26-aeh

Part 1: Rectification

In Part 1 I rectify images. This involves finding the homography (a perspective transform) between two image planes. A homography has eight degrees of freedom, so by specifying the four corner points of a planar object in the original image and mapping them to the corners of a square, the homography can be recovered. Applying it to the original image gives you the result of seeing the object from a different perspective.

Original Pic
Rectified
Original Pic
Rectified
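The corner-to-square solve can be sketched as follows. This is a minimal illustration rather than my exact code: the function name and the corner coordinates are made up for the example, and the homography is solved with a direct linear transform with h33 fixed to 1.

```python
import numpy as np

def compute_homography(src, dst):
    """Solve for the 3x3 homography H mapping src points to dst points.

    src, dst: (4, 2) arrays of corresponding points. With exactly four
    correspondences the eight unknown parameters are fully determined
    (h33 is fixed to 1).
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

# Map the four corners of a skewed quadrilateral onto a unit square
# (illustrative coordinates).
quad = np.array([[10, 12], [200, 30], [190, 220], [15, 210]], dtype=float)
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
H = compute_homography(quad, square)
```

Applying H to any of the quad corners (in homogeneous coordinates, then dividing by the last component) lands on the corresponding square corner.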

Part 2: Panoramas

In Part 2 we extend the homography finding to stitching mosaics. By manually specifying corresponding points of interest between photos, a perspective transformation can be found and the photos warped into a common frame. For blending, I went with the incredibly lazy strategy of np.maximum(im1, im2), chosen purely because I am lazy. It would come back to bite me later.
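As a toy illustration of that compositing step (the arrays here are made up; each canvas stands for an image already warped into the shared mosaic frame, with zeros where it has no data):

```python
import numpy as np

# Two single-channel "images" already warped into a 4x6 mosaic frame.
canvas1 = np.zeros((4, 6))
canvas1[:, :4] = 0.8          # left image occupies columns 0-3
canvas2 = np.zeros((4, 6))
canvas2[:, 2:] = 0.5          # right image occupies columns 2-5

# Per-pixel maximum: in the overlap (columns 2-3) the brighter image
# wins outright, which is exactly why visible seams can appear.
mosaic = np.maximum(canvas1, canvas2)
```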

Adobe panorama dataset:

Im1
Im2
Im3
Result

My roof results from manual stitching

Im1
Im2
Result

Marearts roof dataset from manual stitching

Im1
Im2
Im3
Result

Part 3: Autostitching

In Part 3 we extend the warping techniques from the previous parts so that, instead of taking manual input for point correspondences, we use Harris corner detection and SIFT-like patch descriptors to find correspondences automatically.

Marearts roof dataset: autoalignment

Input 1
Input 2
Input 3

Adaptive non-maximal suppression on features

Below are the features found by running Harris corner detection followed by adaptive non-maximal suppression (ANMS). ANMS takes all detected corners and, for each one, computes a suppression radius: the distance to the nearest corner with a sufficiently stronger response. Keeping the top n corners (in my case n = 500) with the largest radii yields an even spread of features over the image.

Im1
Im2
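The ANMS step described above can be sketched like this. It is illustrative only: the function name and the robustness factor c_robust are assumptions (0.9 is a common choice), not necessarily the constants I used.

```python
import numpy as np

def anms(coords, strengths, n_keep=500, c_robust=0.9):
    """Adaptive non-maximal suppression (illustrative sketch).

    For each corner, the suppression radius is the distance to the
    nearest corner whose strength is sufficiently larger (scaled by
    c_robust). Keeping the n_keep corners with the largest radii
    spreads features evenly over the image.
    """
    coords = np.asarray(coords, dtype=float)
    strengths = np.asarray(strengths, dtype=float)
    n = len(coords)
    radii = np.full(n, np.inf)   # global maxima keep an infinite radius
    for i in range(n):
        stronger = strengths > strengths[i] / c_robust
        if stronger.any():
            dists = np.linalg.norm(coords[stronger] - coords[i], axis=1)
            radii[i] = dists.min()
    keep = np.argsort(-radii)[:n_keep]
    return coords[keep]
```

A weak corner sitting next to a strong one gets a tiny radius and is dropped first, regardless of how strong it is in absolute terms.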

Full feature matching from features

In this part, I build SIFT-like feature descriptors around the top 500 Harris corners. Each descriptor is built by taking a 40x40 patch around the corner point, then downsampling it to 8x8 (the downsampling effectively keeping the low frequencies). I then do a brute-force search over all descriptor pairs to find nearest neighbors. To decide whether a match is actually good, I use Lowe's ratio test: accept a match only if dist(F, NN1) / dist(F, NN2) falls below a threshold, which I set to 0.5.

Full feature match
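The descriptor extraction and ratio-test matching can be sketched as below. This is an assumption-laden illustration: the function names are made up, and the bias/gain normalization of the patch is a standard trick that my writeup does not actually state I used.

```python
import numpy as np

def extract_patch_descriptor(im, y, x):
    """40x40 patch around (y, x), downsampled to 8x8 by averaging 5x5
    blocks (a cheap low-pass), then bias/gain normalized (assumed)."""
    patch = im[y - 20:y + 20, x - 20:x + 20]
    desc = patch.reshape(8, 5, 8, 5).mean(axis=(1, 3)).ravel()
    return (desc - desc.mean()) / (desc.std() + 1e-8)

def lowe_matches(desc1, desc2, ratio=0.5):
    """Brute-force matching with Lowe's ratio test: accept a match only
    if dist(best) / dist(second best) < ratio."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        nn = np.argsort(dists)[:2]
        if dists[nn[0]] / (dists[nn[1]] + 1e-12) < ratio:
            matches.append((i, nn[0]))
    return matches
```

The ratio test rejects ambiguous matches: if a descriptor is almost equally close to its two nearest neighbors, neither match can be trusted.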

RANSAC

To deal with the outliers that ruin everything, I implement RANSAC. The algorithm is conceptually simple: loop a few thousand times, each time sampling four random correspondences, computing the homography they define, and counting how many of the remaining points agree with it (i.e., reproject to within a small error threshold). The homography with the most agreeing points wins, and its inliers form a clean set of correspondences.

Match with RANSAC discarding outliers
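The loop above can be sketched as follows. It is a self-contained illustration under my own naming, not my exact implementation; the iteration count and pixel tolerance are example values, and the homography is fit with a least-squares direct linear transform so the final model can be refit on all inliers.

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares DLT with h33 fixed to 1; accepts 4+ point pairs."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def ransac_homography(pts1, pts2, n_iters=2000, tol=3.0, seed=0):
    """Sample 4 correspondences at a time, keep the homography with the
    most inliers, then refit it on all of those inliers."""
    rng = np.random.default_rng(seed)
    pts1, pts2 = np.asarray(pts1, float), np.asarray(pts2, float)
    P1 = np.hstack([pts1, np.ones((len(pts1), 1))])   # homogeneous coords
    best_inliers = np.zeros(len(pts1), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(pts1), 4, replace=False)
        H = fit_homography(pts1[idx], pts2[idx])
        proj = P1 @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        err = np.linalg.norm(proj - pts2, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_homography(pts1[best_inliers], pts2[best_inliers]), best_inliers
```

With even a modest inlier fraction, a few thousand four-point samples make it overwhelmingly likely that at least one sample is outlier-free, which is all the winning model needs.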

Results

Homographies are then computed from the RANSAC inlier features, and the images are warped together.

Result (auto-stitched)
Result (manual)

Other autostitching results

RANSAC match
Result
RANSAC match
Result

Summary

This project was an interesting introduction to automatic feature detection and alignment, and to homography solving. I do wish I had spent more time on blending, but when you have senioritis, you have senioritis.