CS194-26 Project 6 - Autostitching Photo Mosaics

Eilam Levitov - cs194-26-acx
This notebook runs on python 2.7

Project 6B - Feature Matching for Autostitching

In this part of the project, we create a system for automatically stitching images into a mosaic. In order to do so, we follow these steps:

1. Detect corner features in an image 
2. Extract a feature descriptor for each feature point 
3. Match these feature descriptors between two images 
4. Use a robust method (RANSAC) to compute a homography 
5. Warp images using the automatically generated homography
6. Blend images and output a mosaic!
In [2]:
# Load initial 2 images and display

Getting Corners and Corner Intensites

In the first part of the project, I used Prof. Efros's code (harris.py) to retrieve the corners and corner intensities of the image.

We learned in class how to detect corners using the eigenvalues of the Harris matrix, and were given an implementation of the Harris corner detector algorithm.
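The provided harris.py is not reproduced here, but the idea behind the detector can be sketched as follows. This is a minimal NumPy-only version I wrote for illustration (not Prof. Efros's code): it builds the structure tensor from image gradients, smooths it with a simple box filter in place of the usual Gaussian, and scores each pixel with the Harris response R = det(M) - k·trace(M)².

```python
import numpy as np

def harris_response(im, k=0.04, r=2):
    """Per-pixel Harris corner response R = det(M) - k*trace(M)^2,
    where M is the box-smoothed structure tensor of the image gradients."""
    Iy, Ix = np.gradient(im.astype(float))

    def box(a):
        # (2r+1)x(2r+1) box filter via shifted sums on an edge-padded copy
        out = np.zeros_like(a)
        p = np.pad(a, r, mode='edge')
        h, w = a.shape
        for dy in range(2 * r + 1):
            for dx in range(2 * r + 1):
                out += p[dy:dy + h, dx:dx + w]
        return out / (2 * r + 1) ** 2

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    tr = Sxx + Syy
    return det - k * tr ** 2

# A synthetic corner: the response peaks near the corner of the bright square,
# stays near zero in flat regions, and goes negative along straight edges.
im = np.zeros((40, 40))
im[10:, 10:] = 1.0
R = harris_response(im)
peak = np.unravel_index(np.argmax(R), R.shape)
```

Corners are then taken as local maxima of R above a threshold; harris.py also returns the response value at each corner, which ANMS uses below.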

In [3]:
# Display corners

Adaptive Non-Maximal Suppression (ANMS)

ANMS is a method that produces a more spread-out (higher-variance) distribution of corners. In a nutshell, we keep the corners with the largest radii, where a corner's radius is defined as the distance from that corner to the nearest corner of higher intensity.

A more spread-out distribution of corners (presumably) allows us to get more accurate matches in the later parts of the process.
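The radius computation above can be sketched directly. This is my own O(n²) version, not the project code; the robustness constant c_robust = 0.9 (a corner only suppresses you if it is sufficiently stronger) follows the usual ANMS formulation, but the function name and interface are mine.

```python
import numpy as np

def anms(coords, strengths, n_keep=500, c_robust=0.9):
    """Adaptive non-maximal suppression: keep the n_keep corners with the
    largest suppression radius, i.e. the distance to the nearest corner
    whose response is sufficiently higher (strength > strength_i / c_robust)."""
    coords = np.asarray(coords, float)
    strengths = np.asarray(strengths, float)
    n = len(coords)
    radii = np.full(n, np.inf)  # the globally strongest corner keeps radius inf
    for i in range(n):
        stronger = strengths > strengths[i] / c_robust
        stronger[i] = False
        if stronger.any():
            d = np.linalg.norm(coords[stronger] - coords[i], axis=1)
            radii[i] = d.min()
    keep = np.argsort(-radii)[:n_keep]  # largest radii first
    return coords[keep], radii[keep]
```

Sorting by radius rather than by raw response is what spreads the kept corners across the image: a moderately strong corner far from any stronger one survives, while a strong corner crowded by stronger neighbors is suppressed.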

In [4]:
#Display suppressed corners
CPU times: user 967 ms, sys: 32.8 ms, total: 999 ms
Wall time: 1 s
CPU times: user 961 ms, sys: 14.4 ms, total: 975 ms
Wall time: 997 ms

Feature Descriptor

The feature descriptor allows us to perform reliable and efficient matching of features across images. A descriptor is generated for each of our points of interest by taking a 40x40 neighborhood around the point and downsampling it to 8x8.
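A minimal sketch of that extraction step, assuming the point is far enough from the image border for a full window. I downsample by simple 5x5 block averaging rather than blur-and-resize, and add the usual bias/gain normalization (subtract the mean, divide by the standard deviation) so the descriptor is invariant to affine intensity changes; the function name and these specific choices are mine.

```python
import numpy as np

def extract_descriptor(im, y, x, patch=40, out=8):
    """Take a patch x patch window around (y, x), downsample it to out x out
    by block averaging, then bias/gain normalize to zero mean, unit std."""
    half = patch // 2
    window = im[y - half:y + half, x - half:x + half].astype(float)
    s = patch // out  # 5-pixel blocks for 40 -> 8
    desc = window.reshape(out, s, out, s).mean(axis=(1, 3))
    desc = (desc - desc.mean()) / (desc.std() + 1e-8)
    return desc.ravel()  # length-64 vector
```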

In [36]:
# Display (sample) Feature Descriptor 

Feature Matching

Now we use the feature descriptors we extracted from the image in the previous part to find geometrically consistent feature matches between our images. We do this by comparing each descriptor in the first image to all the descriptors in the second image, hoping to find a match with minimal error, and continue this way through all of the first image's feature descriptors.
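That comparison can be sketched as a nearest-neighbor search under squared error. On top of the minimal-error match described above, this version also applies Lowe's ratio test (keep a match only if the best error is much smaller than the second-best), a common way to reject ambiguous matches; the function name, the ratio threshold, and the ratio test itself are my additions for illustration.

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.6):
    """Match each row of desc1 to its nearest row of desc2 by squared error,
    keeping only matches that pass Lowe's 1-NN/2-NN ratio test."""
    desc2 = np.asarray(desc2, float)
    matches = []
    for i, d in enumerate(np.asarray(desc1, float)):
        errs = np.sum((desc2 - d) ** 2, axis=1)
        order = np.argsort(errs)
        best, second = order[0], order[1]
        if errs[best] < ratio * errs[second]:
            matches.append((i, best))
    return matches
```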

In [47]:
# Display (sample) Matches 

Random sample consensus (RANSAC)

RANSAC is an iterative method for estimating inliers. Using RANSAC, we randomly select 4 point correspondences and generate a homography from them; we then apply that homography to all the matches and calculate its loss. Finally, we keep the homography (and the points consistent with it) that resulted in the smallest error according to our loss function.

We use RANSAC to provide a final layer of robustness to our stitching, since feature matching can be ambiguous when corners are highly correlated or close together.
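The loop above can be sketched as follows. This is my own minimal version, not the project code: the homography is fit with the standard direct linear transform (DLT) via SVD, and I score each candidate by its inlier count (reprojection error under a pixel threshold) rather than a raw loss value, then refit on all inliers at the end. Function names, the iteration count, and the threshold are illustrative choices.

```python
import numpy as np

def compute_homography(src, dst):
    """DLT: find H (up to scale) with dst ~ H @ src in homogeneous coords."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)      # null-space vector = flattened H
    return H / H[2, 2]

def ransac_homography(src, dst, n_iter=500, eps=2.0, seed=0):
    """Repeatedly fit H to 4 random correspondences; keep the H with the
    largest inlier set (reprojection error below eps pixels), then refit."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    ones = np.ones((len(src), 1))
    best_inliers = np.zeros(len(src), bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)
        H = compute_homography(src[idx], dst[idx])
        proj = np.hstack([src, ones]) @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        inliers = np.linalg.norm(proj - dst, axis=1) < eps
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final least-squares fit over every inlier, not just the winning 4
    return compute_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

Because a single bad correspondence can wreck a least-squares homography, voting with many random 4-point samples lets the consistent majority of matches outvote the outliers.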

Warping and Blending

As we have done plenty of times by now, in this part I use the finalized homography to warp the first image into the second image's perspective, and then use a linear blending method to combine them (hopefully) seamlessly.
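As a toy illustration of linear blending, here is a sketch for the simplified case of two already-aligned images overlapping by a known number of columns, with a linear alpha ramp across the overlap; the real mosaic blends along the warped overlap region instead, and the function name and interface are mine.

```python
import numpy as np

def ramp_blend(left, right, overlap):
    """Blend two same-height images that overlap by `overlap` columns,
    using a linear alpha ramp: left's weight falls 1 -> 0 across the
    overlap while right's rises 0 -> 1."""
    h, w1 = left.shape[:2]
    w2 = right.shape[1]
    out = np.zeros((h, w1 + w2 - overlap) + left.shape[2:], float)
    out[:, :w1 - overlap] = left[:, :w1 - overlap]   # left-only region
    out[:, w1:] = right[:, overlap:]                 # right-only region
    alpha = np.linspace(1, 0, overlap)
    if left.ndim == 3:
        alpha = alpha[:, None]                       # broadcast over channels
    out[:, w1 - overlap:w1] = (alpha * left[:, w1 - overlap:]
                               + (1 - alpha) * right[:, :overlap])
    return out
```

The cross-fade hides the exposure seam that a hard cut would leave, at the cost of slight ghosting if the alignment is imperfect.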

In [49]:
# Warping images
In [52]:
# Blending images
In [54]:
# Display mosaic

More Examples

From my Iceland trip during the summer. These were taken on the Laugavegur trail.

In [55]:
# Display individual images
In [58]:
# Display mosaic
In [57]:
# Display individual images
In [61]:
# Display mosaic