IMAGE WARPING AND MOSAICING

Nadia Hyder

 

OVERVIEW

In this project, I explored image warping, in particular image mosaicing. I built mosaics by taking two or more overlapping photographs, then registering, projectively warping, resampling, and compositing them. A key step in mosaicing was computing homographies and using them to warp the images into alignment. Finally, I used corner detection and feature matching to perform automatic stitching (as opposed to manually selecting matching points).

 

PART 1

 

RECOVERING HOMOGRAPHIES

Before warping the images into alignment, I had to recover the parameters of the transformation between each pair of images. This transformation is a homography, which relates two views of the same plane (two images are related by a homography when they view the same planar surface from different angles, or when the camera rotates about its center). The homography matrix H is a 3x3 matrix with 8 degrees of freedom.

Given a point (x, y) in image 1 and its corresponding point (x', y') in image 2, each correspondence contributes two rows to a linear system that we solve for H by least squares (using at least 4 correspondences):

    [ x  y  1  0  0  0  -x*x'  -y*x' ]         [ x' ]
    [ 0  0  0  x  y  1  -x*y'  -y*y' ]  h   =  [ y' ]

where h = (a, b, c, d, e, f, g, h) holds the first eight entries of H, read row by row, and the bottom-right entry of H is fixed to 1.

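The least-squares setup above can be sketched as follows; this is a minimal illustration assuming NumPy and (n, 2) arrays of corresponding points (the function name is illustrative):

```python
import numpy as np

def compute_homography(src, dst):
    """Least-squares homography H mapping src points to dst points.

    src, dst: (n, 2) arrays of corresponding (x, y) points, n >= 4.
    The bottom-right entry of H is fixed to 1, leaving 8 unknowns.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        # each correspondence contributes two rows of the linear system
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b += [xp, yp]
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)
```

With more than 4 correspondences the system is overdetermined, and least squares finds the H that best explains all of them.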

 

WARPING AND RECTIFICATION

Warping an image with H transforms it into the desired perspective, since the homography maps source points to their target locations. I used inverse warping, so every output pixel is assigned a value (no holes), with bilinear interpolation during resampling to reduce artifacts. Finally, I was able to rectify images. I chose two sample images containing planar surfaces (one square and one rectangular) and warped each so its plane becomes frontal-parallel. I used ginput to select 4 points in each image, and defined the corresponding (x', y') corners by hand to be a square and a rectangle, respectively.
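The inverse-warping step can be sketched with scipy.ndimage.map_coordinates, which performs the bilinear interpolation; this is a grayscale-only illustration with an illustrative function name:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def inverse_warp(img, H, out_shape):
    """Inverse-warp a grayscale image by homography H.

    For each output pixel (x, y), H^-1 gives the source location to
    sample; map_coordinates does bilinear interpolation (order=1).
    """
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = np.linalg.inv(H) @ pts
    src /= src[2]                      # back to inhomogeneous coordinates
    # map_coordinates indexes as (row, col) = (y, x); out-of-bounds -> 0
    vals = map_coordinates(img, [src[1], src[0]], order=1, cval=0.0)
    return vals.reshape(h_out, w_out)
```

Because every output pixel pulls its value from the source image, the result has no gaps, unlike forward warping.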

 

Here are the results of rectification:

original / rectified

[Image: a decorated wall (original) and its rectified, frontal-parallel version]

[Image: a screen door (original) and its rectified, frontal-parallel version]

 

 

BLENDING INTO A MOSAIC

We now have everything needed to take two images, warp them into alignment, and blend them into a mosaic. Where the two images overlap, I used weighted averaging at every pixel: the alpha weight ramps linearly across the overlap, so the blend transitions smoothly from matching the left image to matching the right image. Here are the results:
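The linear alpha ramp can be sketched as follows; this is a simplified illustration that assumes the two warped images have already been cropped to their common overlap region:

```python
import numpy as np

def blend_overlap(warped_left, warped_right):
    """Alpha-blend two aligned, same-size grayscale overlap regions.

    The weight on the left image ramps linearly from 1 at the left edge
    of the overlap to 0 at its right edge, and vice versa for the right.
    """
    h, w = warped_left.shape
    alpha = np.linspace(1.0, 0.0, w)          # one weight per column
    return alpha * warped_left + (1.0 - alpha) * warped_right
```

In the full mosaic, pixels outside the overlap are simply copied from whichever image covers them.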

 

Left image / Right image

[Image: left view of a living room]

[Image: right view of the living room, with furniture and a flat-screen TV]

 

 

Warped left image / Warped right image / Composite

[Image: warped left view]

[Image: warped right view]

[Image: blended living-room mosaic]

Learning how to rectify images was my favorite part of this assignment because I’ve always wondered how it was performed in photo editors and document scanning apps like Scannable.

 

 

 

PART 2: AUTO-STITCHING

 

For the second part of the project, I used corner detection and feature matching to perform auto-stitching.

 

 

CORNER FEATURE DETECTION

 

To detect image corners, I used the provided Harris corner detection code, substituting corner_peaks for peak_local_max. These are the results of corner detection:
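A sketch of this step with scikit-image (the min_distance value here is an illustrative assumption):

```python
import numpy as np
from skimage.feature import corner_harris, corner_peaks

def harris_corners(img, min_distance=5):
    """Harris response map plus (row, col) peak coordinates.

    corner_peaks keeps local maxima of the Harris response that are at
    least min_distance pixels apart, which already thins dense clusters
    slightly compared with a raw peak_local_max call.
    """
    response = corner_harris(img)
    coords = corner_peaks(response, min_distance=min_distance)
    return response, coords
```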

 

[Images: Harris corner detection results for the two photographs]

 

 

ADAPTIVE NON-MAXIMAL SUPPRESSION

 

Using Adaptive Non-Maximal Suppression (ANMS), I selected the "strongest" corners, reducing the number of Harris points to 500 using the following criterion from this paper:

    ri = min over j of || xi - xj ||,  taken over all points j with f(xi) < 0.9 * f(xj)

where f is the Harris corner strength and 0.9 is the robustness constant.

For each point, I calculated the distance to every other point whose corner strength is sufficiently larger (points j with f(xi) <= 0.9 * f(xj)), and kept the 500 points with the largest minimum suppression radius (ri). This leaves fewer tight clusters and a more even spatial distribution of points. Here are the Harris points after applying ANMS:
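A vectorized sketch of ANMS under these definitions (function and variable names are illustrative):

```python
import numpy as np

def anms(coords, strengths, n_keep=500, c_robust=0.9):
    """Adaptive non-maximal suppression.

    For each corner i, the suppression radius r_i is the distance to the
    nearest corner j that is sufficiently stronger (f_i < c_robust * f_j).
    Returns the indices of the n_keep corners with the largest radii.
    """
    coords = np.asarray(coords, float)
    strengths = np.asarray(strengths, float)
    # pairwise squared distances between all corners
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    # j suppresses i only when j is sufficiently stronger than i
    dominated = strengths[:, None] < c_robust * strengths[None, :]
    d2 = np.where(dominated, d2, np.inf)
    radii = np.sqrt(d2.min(axis=1))  # inf for the globally strongest point
    return np.argsort(-radii)[:n_keep]
```

The globally strongest corner is suppressed by nothing, so its radius is infinite and it is always kept first.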

 

[Images: the 500 corners kept by ANMS, overlaid on the two photographs]

 

 

 

FEATURE MATCHING

 

Next, I extracted feature descriptors to match across images. For each point, I took the 40x40 window surrounding it, applied a Gaussian blur, and down-sampled it to an 8x8 patch. The patches were normalized to have a mean of 0 and a standard deviation of 1. Here are a few example patches:
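A sketch of the descriptor extraction (the blur sigma is an illustrative assumption, and the window is assumed to lie fully inside the image):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_descriptor(img, row, col, window=40, patch=8):
    """Axis-aligned MOPS-style descriptor.

    Blur the window x window region around a corner, subsample it down
    to patch x patch, then normalize to zero mean and unit standard
    deviation for bias/gain invariance.
    """
    half = window // 2
    win = img[row - half:row + half, col - half:col + half].astype(float)
    win = gaussian_filter(win, sigma=2.0)   # anti-alias before subsampling
    step = window // patch                  # 40 // 8 = every 5th pixel
    p = win[::step, ::step]
    return (p - p.mean()) / (p.std() + 1e-8)  # epsilon guards flat patches
```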

 

[Images: example 8x8 feature descriptor patches]

 

 

Next, I matched these feature descriptors across the two images. Following the paper, for each feature descriptor in the first image I computed the distance to every feature descriptor in the second image, then took the ratio of the distance to the nearest neighbor over the distance to the second-nearest neighbor. If the ratio was below 0.275, the features were declared a match. (I used this stricter threshold because with a looser one, points on the two different tables in the left and right images were incorrectly matched.) This gave pretty accurate results:
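The ratio test can be sketched as follows, assuming each descriptor has been flattened into a row of a NumPy array:

```python
import numpy as np

def match_features(desc1, desc2, ratio=0.275):
    """Lowe ratio matching between two sets of flattened descriptors.

    For each descriptor in image 1, find its two nearest neighbors in
    image 2 and accept the best one only when it is much closer than the
    runner-up (d_nearest / d_second < ratio).
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches
```

A small ratio rejects ambiguous matches: if two candidates in the second image look almost equally similar, neither is trusted.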

 

[Images: matched feature points between the left and right photographs]

 

 

RANSAC

 

Finally, to find the optimal homography between the two images, I implemented RANSAC (RANdom SAmple Consensus). Over 1000 iterations, the algorithm randomly selects 4 corresponding feature pairs, computes the homography from those 4 pairs, counts the resulting inliers, and keeps the largest inlier set found. That inlier set is then used to compute the final homography used to warp the images into alignment.
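A sketch of this RANSAC loop (the 2-pixel inlier threshold is an illustrative assumption; the homography fit is the same least-squares setup as in Part 1):

```python
import numpy as np

def fit_homography(src, dst):
    # least-squares homography with the bottom-right entry fixed to 1
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b += [xp, yp]
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def ransac_homography(src, dst, n_iters=1000, eps=2.0, seed=0):
    """Repeatedly fit H to 4 random correspondences, count the points it
    reprojects within eps pixels (inliers), keep the largest inlier set,
    then refit H on all of those inliers."""
    rng = np.random.default_rng(seed)
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = len(src)
    src_h = np.column_stack([src, np.ones(n)])  # homogeneous source points
    best = np.arange(0)
    for _ in range(n_iters):
        idx = rng.choice(n, size=4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        proj = src_h @ H.T
        proj = proj[:, :2] / proj[:, 2:]
        err = np.linalg.norm(proj - dst, axis=1)
        inliers = np.flatnonzero(err < eps)
        if len(inliers) > len(best):
            best = inliers
    return fit_homography(src[best], dst[best]), best
```

Because outliers rarely agree with a homography fit to random inliers, the largest consensus set almost always excludes the bad matches.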

 

 Now, our auto-mosaicing algorithm is ready.

 

 

AUTO-MOSAICING

 

Below are a few outputs of my auto-mosaicing algorithm: warped left images, warped right images, and the blended mosaic. I used the same warping and blending techniques as in part 1.

 

 

[Images: warped left, warped right, and blended mosaic (living room)]

[Images: warped left, warped right, and blended mosaic (outdoor street scene)]

[Images: warped left, warped right, and blended mosaic (room with a computer)]

 

 

 

LEARNINGS

 

I really enjoyed this project because I learned a lot of useful new concepts and algorithms: from image rectification to homographies to ANMS, feature matching, and RANSAC. It also gave me more confidence in my ability to understand and implement cutting-edge algorithms from research papers.