Programming Project #4A: Image Warping and Mosaics!

Niraek Jain-Sharma

Part 1: Shoot the Pictures

In this part of the project, I shot several pictures around my neighborhood (northside), and some in my apartment. Below are some of the pictures I took!

House 1/2

Ladder 1/2

Leaves 1/2

Part 2: Recover Homographies

The goal of this part was to recover the homographies that map one image into the coordinate system of another. First, I used GIMP to hover over corresponding points in both images, wrote their coordinates down, exported them to a CSV, and read them back in using pandas. See below for an example of the correspondence points for one pair of images:
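Reading the points back in takes only a couple of lines; here is a minimal sketch, assuming the CSV stores one correspondence per row in columns x1, y1, x2, y2 (the filename and column names are purely illustrative):

```python
import pandas as pd

# Hypothetical file and column names; one row per correspondence pair.
pts = pd.read_csv("house_correspondences.csv")
src_pts = pts[["x1", "y1"]].to_numpy()   # points in image 1
dst_pts = pts[["x2", "y2"]].to_numpy()   # matching points in image 2
```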



Now, let's describe how to solve for the homography matrix. Because we have more than 4 correspondence points, as shown above, the system is overdetermined, so we solve for the homography parameters h with least squares, i.e. minimizing ||Ah - b||^2, where A is built from the source points and b stacks the target coordinates. Applying this to each image pair gives us the homography matrices.
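As a minimal sketch of this setup (fixing the bottom-right entry of H to 1 so there are eight unknowns; the function name and structure are illustrative rather than the exact code used):

```python
import numpy as np

def compute_homography(src_pts, dst_pts):
    """Least-squares homography mapping src_pts to dst_pts (both N x 2, N >= 4)."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src_pts, dst_pts):
        # Two equations per correspondence; h33 is fixed to 1.
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(h, 1).reshape(3, 3)
```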



Part 3: Warp the Image

In this part, we use the homography matrix calculated in the previous part to warp one image onto the correspondence points of the second image. Once both images share the same coordinate system, we can blend/merge them. See below for an example of a warped image:

Image Original/warped
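One common way to implement the warp (a sketch, not necessarily the exact implementation used here) is inverse mapping with nearest-neighbor sampling: for every pixel of the output canvas, apply H^-1 to find where it comes from in the source image. A full implementation would also compute the output bounding box by warping the source corners.

```python
import numpy as np

def warp_image(im, H, out_shape):
    """Inverse-warp im into an output canvas of out_shape using homography H."""
    H_out, W_out = out_shape[:2]
    ys, xs = np.indices((H_out, W_out))
    # Homogeneous coordinates of every output pixel.
    dst = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = np.linalg.inv(H) @ dst
    src /= src[2]                                # divide out the homogeneous coordinate
    x_src = np.round(src[0]).astype(int)
    y_src = np.round(src[1]).astype(int)
    # Keep only output pixels that land inside the source image.
    valid = (x_src >= 0) & (x_src < im.shape[1]) & (y_src >= 0) & (y_src < im.shape[0])
    out = np.zeros((H_out, W_out) + im.shape[2:], dtype=im.dtype)
    out[ys.ravel()[valid], xs.ravel()[valid]] = im[y_src[valid], x_src[valid]]
    return out
```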

Part 4: Image Rectification

We can utilize the work done above to rectify images! The idea is that if a picture is taken at an angle of something whose real-world shape we know (e.g. floor tiles photographed obliquely that are actually squares), then we can warp the image, using correspondence points, so that the object takes on its true shape. In the following cases, I chose the corners of the rectangles as correspondence points - both turned out nicely!

Elephant Slanted/Rectified

Table Slanted/Rectified
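Rectification reuses the same machinery: the "target image" is just a synthetic axis-aligned rectangle. A tiny sketch using the compute_homography and warp_image helpers sketched above (the corner coordinates and output size are made-up values, not the ones actually used):

```python
import numpy as np

# Corners of the tabletop clicked in the slanted photo, as (x, y) - illustrative values.
src_pts = np.array([[412, 310], [905, 298], [951, 640], [388, 655]])
# Where those corners should land: a 500 x 350 axis-aligned rectangle.
dst_pts = np.array([[0, 0], [500, 0], [500, 350], [0, 350]])

H = compute_homography(src_pts, dst_pts)
rectified = warp_image(table_im, H, out_shape=(350, 500))   # table_im: the loaded photo
```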

Part 5: Blend into a mosaic

This is the culmination of our previous efforts! Here we blend two images together by warping one into the coordinate system of the other using the warp function described above, and then combining them where they overlap to make a bigger mosaic. First, we show the naive method of blending, which is just laying the images directly on top of each other.

House 1/2


Ladder 1/2


Leaves 1/2

As we can see above, the mosaics look pretty good, but with naive blending there are clear seam lines where the images overlap. This is still useful, though, because it shows us exactly where the overlap lies!

Finally, see below for the blended versions of the mosaics. I built an alpha mask for each image that is 1 at the center and falls off linearly (via linspace) toward the edges, then used it to weight the overlapping pixels. As you can see, this works well - the seam lines are gone!

House Blended
Ladder Blended
Leaves Blended
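Here is a minimal sketch of that feathered blend, assuming color (H x W x 3) float images that have already been warped onto a shared canvas, with each image's alpha mask warped by the same homography as its image (names are illustrative):

```python
import numpy as np

def feather_mask(h, w):
    """Alpha mask: 1 at the image center, linear falloff (via linspace) to 0 at the edges."""
    ramp_x = 1 - np.abs(np.linspace(-1, 1, w))
    ramp_y = 1 - np.abs(np.linspace(-1, 1, h))
    return np.outer(ramp_y, ramp_x)

def blend(warped1, warped2, alpha1, alpha2):
    """Weighted average of two images already warped onto the same canvas.

    alpha1/alpha2 are the per-image feather masks, warped with the same homographies
    as their images, so they are zero wherever an image contributes no pixels.
    """
    total = alpha1 + alpha2
    total[total == 0] = 1                      # avoid dividing by zero outside both images
    return (warped1 * alpha1[..., None] + warped2 * alpha2[..., None]) / total[..., None]
```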

Project 4B: Feature Matching for Autostitching

Detecting corner features in an image

In this part of the project, we work on automatically generating correspondence points so that we don't need to pick them by hand. This is a huge time saver, because we can then stitch mosaics together without any human work! First, we generate all the Harris corners using the provided code. We then implement the Adaptive Non-Maximal Suppression (ANMS) algorithm to select a subset of the Harris corners that is spatially distributed across the whole picture. To do this, we calculate the suppression radius of each point p_i: the distance from p_i to the closest point whose corner strength h is greater than 1/C_r times the h value of p_i. Keeping the points with the largest suppression radii produces a spatially well-distributed subset of the Harris points. A sketch of this computation is below, followed by all the Harris points and, next to them, the points remaining after running ANMS and selecting the top 500.
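This is a minimal sketch, assuming coords is an n x 2 array of corner locations, h holds their Harris strengths, and c_robust is the constant written as C_r above (e.g. 0.9); the O(n^2) pairwise-distance approach is fine for a few thousand corners:

```python
import numpy as np

def anms(coords, h, num_points=500, c_robust=0.9):
    """Keep the num_points corners with the largest suppression radii."""
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Point j suppresses point i if h[i] < c_robust * h[j], i.e. h[j] > h[i] / c_robust.
    suppresses = h[:, None] < c_robust * h[None, :]
    radii = np.where(suppresses, dists, np.inf).min(axis=1)   # distance to nearest suppressor
    return coords[np.argsort(radii)[::-1][:num_points]]
```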


Extracting a Feature Descriptor for each feature point

In this part, we extract an 8x8 grayscale feature descriptor for each of the 500 points we got from ANMS in the last part. To do this, we first take a 40x40 window around each point. Then, we blur it with a Gaussian and downsample it to get an 8x8 grid. Finally, we normalize each grid by subtracting its mean and dividing by its standard deviation. Below is an example of a specific point, colored in red, as well as the grayscale 8x8 feature descriptor associated with that point.


See below for a few other examples of the 8x8 grid downsamples.
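And here is a rough sketch of the extraction step itself, assuming a grayscale float image and (x, y) integer corner coordinates; the blur sigma, the blur-then-subsample shortcut, and the border handling are illustrative choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_descriptors(gray, coords, window=40, spacing=5):
    """8x8 bias/gain-normalized descriptors from blurred 40x40 windows."""
    blurred = gaussian_filter(gray, sigma=2)   # anti-alias once, then sample every 5th pixel
    half = window // 2
    descriptors, kept = [], []
    for x, y in coords:
        # Skip points whose 40x40 window would fall outside the image.
        if half <= y < gray.shape[0] - half and half <= x < gray.shape[1] - half:
            patch = blurred[y - half:y + half:spacing, x - half:x + half:spacing]  # 8x8
            patch = (patch - patch.mean()) / (patch.std() + 1e-8)  # subtract mean, divide by std
            descriptors.append(patch.ravel())
            kept.append((x, y))
    return np.array(descriptors), np.array(kept)
```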


Matching these feature descriptors between two images

In this part, we finally match feature descriptors from one image to feature descriptors from another. First, we run the above ANMS on both images to extract the top 500 points from each. Then, for each descriptor in image 1, we calculate the SSD between it and every descriptor in image 2. We call the minimum SSD 1-NN and the second smallest 2-NN. If the ratio 1-NN/2-NN is less than 0.1 - meaning the second-best match is more than 10 times worse than the best - we keep the match; otherwise we reject it. The motivation behind this ratio test is that if the closest descriptor is *so much better* than the next closest, it is likely a true match. See below for the result of this algorithm, which cuts the points down to just 10 for this sample image.
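A sketch of the matching step with the ratio test (desc1 and desc2 are the descriptor arrays from the previous step; 0.1 is the threshold described above):

```python
import numpy as np

def match_features(desc1, desc2, ratio=0.1):
    """Return index pairs (i, j) whose 1-NN/2-NN SSD ratio passes the test."""
    matches = []
    for i, d in enumerate(desc1):
        ssd = np.sum((desc2 - d) ** 2, axis=1)   # SSD to every descriptor in image 2
        nn1, nn2 = np.partition(ssd, 1)[:2]      # smallest and second-smallest SSD
        if nn1 / nn2 < ratio:
            matches.append((i, int(np.argmin(ssd))))
    return matches
```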


Use a robust method (RANSAC) to compute a homography

As we can see from above, our algorithm does extremely well on this pair of images. To the naked eye, all of the points appear to match the corresponding area in the other image. However, they still might not be exact, as small changes in point locations can drastically change the homography, and for other image pairs there might be much starker outliers. So, we implement RANSAC. The algorithm repeatedly chooses a random subset of 4 points from our set of correspondence points, computes the exact homography from image 1 to image 2 using those points, and applies it to the rest of the points in image 1 to see how many land on their matches in image 2. "Match" means within some epsilon distance; in our case we used a Manhattan distance of 1. We keep the largest inlier set found across all iterations and compute the final homography from it. Applying RANSAC with 1000 iterations, we find that we are left with only 7 of the 10 points!
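A sketch of that RANSAC loop, reusing the compute_homography helper sketched in Part 2 (pts1 and pts2 are the N x 2 arrays of matched points; the threshold is the Manhattan distance of 1 described above):

```python
import numpy as np

def ransac_homography(pts1, pts2, n_iters=1000, thresh=1.0):
    """Return the homography fit to the largest inlier set, plus the inlier indices."""
    best_inliers = np.array([], dtype=int)
    pts1_h = np.hstack([pts1, np.ones((len(pts1), 1))]).T   # 3 x N homogeneous points
    for _ in range(n_iters):
        idx = np.random.choice(len(pts1), 4, replace=False)
        H = compute_homography(pts1[idx], pts2[idx])        # exact fit to the 4 samples
        proj = H @ pts1_h
        proj = (proj[:2] / proj[2]).T                       # back to (x, y)
        err = np.abs(proj - pts2).sum(axis=1)               # Manhattan distance to the match
        inliers = np.where(err < thresh)[0]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refit with least squares on all inliers for the final homography.
    return compute_homography(pts1[best_inliers], pts2[best_inliers]), best_inliers
```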


Manual Stitch of Mosaics vs. Autostitch

We can now autostitch the mosaics for all three pairs of images used in Part A and compare them with the manual versions. The naive (unblended) stitchings are compared below. The automatically stitched mosaics are noticeably cleaner and less blurry than the manual ones! Note that in the second and last comparisons, both versions are slightly blurry, likely because the leaves moved in the wind.

Manual vs. Automatic Houses

Manual vs. Automatic Garden

Manual vs. Automatic Leaves