Project 4: Part 1 and Part 2 - Joe Zou

1. Part 1: IMAGE WARPING and MOSAICING

1.1. Shoot the pictures

Balcony Images

Room Images

Frontdoor Images

1.2. Recover homographies

I first used the cpselect tool on pairs of the images to define corresponding points.

Next, I used the least squares method outlined here: https://towardsdatascience.com/estimating-a-homography-matrix-522c70ec4b2c to recover the 3x3 homography matrix using the defined corresponding points.

Since the image results from this section are displayed in later parts, I've attached a code snippet showing how I computed the homography matrices using the method outlined in the towardsdatascience link above.
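Since that snippet is attached as an image, here is a minimal numpy sketch of the same least-squares setup; `pts1` and `pts2` are assumed to be N x 2 arrays of corresponding (x, y) points, and the bottom-right entry of H is fixed to 1:

```python
import numpy as np

def compute_homography(pts1, pts2):
    """Least-squares homography mapping pts1 -> pts2 (both N x 2, N >= 4)."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts1, pts2):
        # Each correspondence contributes two rows of the linear system.
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    # Solve A h = b for the 8 unknowns, then append the fixed scale entry.
    h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(h, 1).reshape(3, 3)
```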

1.3. Warp the images and Image Rectification

defining four corners on bind image
defining four corners on haven image

For this task, I used the cpselect tool again, but just to define the 4 corners of a rectangular shape in one of the images.

Next, I defined a rectangle in the resulting canvas that corresponded to the 4 corners I had previously defined and used a homography matrix to project the pixels onto a different plane.

I reused a lot of code from project 3 in this part, especially helper functions to project pixels from one plane to another as defined by a homography matrix.
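As a rough sketch of that projection helper (an inverse warp, assuming `H` maps source coordinates to output-canvas coordinates and scipy's `map_coordinates` does the interpolation):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(img, H, out_shape):
    """Inverse-warp img into an out_shape canvas using homography H (source -> output)."""
    H_out, W_out = out_shape
    ys, xs = np.indices((H_out, W_out))
    # Homogeneous output coordinates, mapped back into the source image by H^-1.
    out_coords = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = np.linalg.inv(H) @ out_coords
    src_x, src_y = src[0] / src[2], src[1] / src[2]
    warped = np.zeros((H_out, W_out, img.shape[2]))
    for c in range(img.shape[2]):
        # Bilinear interpolation; pixels that fall outside the source stay 0.
        warped[..., c] = map_coordinates(
            img[..., c], [src_y, src_x], order=1, cval=0
        ).reshape(H_out, W_out)
    return warped
```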

Here are some results of image rectification run on screenshots from the video game Valorant, along with an image of a mural from a Google search.

original bind image
bind rectified image
original haven image
haven rectified image
original mural image
mural rectified image

1.4. Blend images into Mosaic

For this section, I used code from the earlier sections and added 2-band blending to form smooth transitions between the images of the final mosaic.

I created an alpha mask for each image by building a bwdist-style distance map centered about that image's transformed center point, then thresholding each mask to 0 or 1 based on which image had the minimum value (minimum distance) at each pixel. This essentially creates masks that cut off along the line equidistant from the two image centers.
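A minimal sketch of that mask construction, assuming the two warped images share one canvas and their transformed center points are known; scipy's `distance_transform_edt` stands in for MATLAB's bwdist here:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def seam_masks(canvas_shape, center1, center2):
    """Binary alpha masks that split the canvas along the line
    equidistant from the two transformed image centers."""
    def dist_to_center(center):
        # bwdist-style map: distance of every pixel to the (single) center point.
        marker = np.ones(canvas_shape, dtype=bool)
        marker[int(center[1]), int(center[0])] = False  # center given as (x, y)
        return distance_transform_edt(marker)

    d1, d2 = dist_to_center(center1), dist_to_center(center2)
    mask1 = (d1 <= d2).astype(float)  # each pixel goes to the closer image center
    return mask1, 1.0 - mask1
```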

Finally, I reused pyramid blending code from a previous assignment to perform 2-band blending of the images into the final mosaic.
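And a minimal sketch of the 2-band blend itself, assuming `im1` and `im2` are the warped images on the shared canvas, `mask` is the binary alpha mask for `im1`, and a Gaussian filter separates the two bands (the blur size is a placeholder):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_band_blend(im1, im2, mask, sigma=20):
    """2-band blend: feathered mask for low frequencies, hard mask for high frequencies."""
    soft = gaussian_filter(mask, sigma)[..., None]  # feathered seam for the low band
    hard = mask[..., None]                          # hard seam for the high band
    low1 = gaussian_filter(im1, (sigma, sigma, 0))
    low2 = gaussian_filter(im2, (sigma, sigma, 0))
    high1, high2 = im1 - low1, im2 - low2           # detail (high-frequency) bands
    low = soft * low1 + (1 - soft) * low2
    high = hard * high1 + (1 - hard) * high2
    return np.clip(low + high, 0, 1)
```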

Below I display some results on 3 mosaics as well as exploration I did on blending techniques.

Balcony Images

Transformed Images before blending:

original and transformed center image
transformed left side image
transformed right side image

Different Blending Techniques

averaging
hard border

2 Band Blending

high frequency blend
low frequency blend
final combined image

Room

high frequency blend
low frequency blend
final result

Frontdoor

high frequency blend
low frequency blend
final result

1.5. Things I learned

First of all, the new iPhone cameras are insanely difficult to lock to a fixed exposure. This can be seen in some of the clear color boundaries between blended images in the mosaics. I unfortunately didn't have access to a non-iPhone camera for this project, so I had to learn a harsh lesson there.

Overall, I felt this project was a cool demonstration of the practical applications of all the projection/homography theory we had learned in class. I struggled with some parts of the actual implementation in code and had to figure out clever ways to create things like the alpha mask using the bwdist command and some numpy manipulation.

2. Part 2: FEATURE MATCHING for AUTOSTITCHING

2.1 Detecting Corners

For corner detection, I first found Harris interest points using the provided helper function. Next, I ran Adaptive Non-Maximal Suppression to select the top 500 interest point candidates.
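Here is a minimal sketch of the suppression step, assuming `coords` is an N x 2 array of Harris corner locations and `strengths` holds their corner responses:

```python
import numpy as np

def anms(coords, strengths, n_keep=500, c_robust=0.9):
    """Adaptive Non-Maximal Suppression: keep the n_keep points with the
    largest suppression radius (distance to the nearest sufficiently-stronger point)."""
    radii = np.full(len(coords), np.inf)
    for i in range(len(coords)):
        # Points whose (scaled) strength dominates point i can suppress it.
        stronger = strengths[i] < c_robust * strengths
        if stronger.any():
            d = np.linalg.norm(coords[stronger] - coords[i], axis=1)
            radii[i] = d.min()
    keep = np.argsort(-radii)[:n_keep]
    return coords[keep]
```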

Here are some visualizations of all the Harris interest points found:

Here are some visualizations of the top 500 candidates from Adaptive Non-Maximal Suppression:

2.2 Feature Description

For each of the top 500 interest points, I extracted a feature descriptor by sampling the 40x40 patch around the interest point and downsampling it to an 8x8 matrix. Finally, each descriptor is normalized to have a mean of 0 and a variance of 1 (a code sketch of this extraction follows the example patches below). Here are some of the results:

Balcony Feature Descriptions

img0_point0
img0_point1
img0_point2
img0_point3
img0_point4
img0_point5
img0_point6
img0_point7
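As referenced above, here is a minimal sketch of that descriptor extraction, assuming a grayscale image, `coords` given as (row, col) interest points at least 20 pixels from the border, and skimage's `resize` for the downsampling:

```python
import numpy as np
from skimage.transform import resize

def extract_descriptors(img, coords, patch_size=40, out_size=8):
    """8x8 bias/gain-normalized descriptors from 40x40 patches around each point."""
    half = patch_size // 2
    descriptors = []
    for y, x in coords:
        patch = img[y - half:y + half, x - half:x + half]
        small = resize(patch, (out_size, out_size), anti_aliasing=True)
        small = (small - small.mean()) / small.std()  # mean 0, variance 1
        descriptors.append(small.ravel())
    return np.array(descriptors)
```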

2.3 Feature Matching

For feature matching, I calculated the norm of the difference between the feature descriptors of all interest points across the two images. For every interest point in image 1, there is a corresponding best and second-best interest point in image 2. I used a threshold of 0.5 on the ratio best_diff/second_best_diff to decide which interest point pairings to keep. In addition, I only kept a pairing if the two points were each other's most similar points (a code sketch of the matching follows the visualizations below). Here are the visualizations of the results from feature matching:

Results for Balcony

matched points between image 0 and 1
matched points between image 1 and 2

Results for Room

matched points between image 0 and 1
matched points between image 1 and 2

Results for Frontdoor

matched points between image 0 and 1
matched points between image 1 and 2
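As referenced above, here is a minimal sketch of the matching step; `desc1` and `desc2` are assumed to be the N x 64 descriptor arrays from the previous section:

```python
import numpy as np

def match_features(desc1, desc2, ratio=0.5):
    """Match descriptors with a ratio test plus a mutual-nearest-neighbor check."""
    # Pairwise Euclidean distances between all descriptors.
    dists = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    best = np.argsort(dists, axis=1)
    matches = []
    for i in range(len(desc1)):
        first, second = best[i, 0], best[i, 1]
        passes_ratio = dists[i, first] / dists[i, second] < ratio
        mutual = np.argmin(dists[:, first]) == i  # i is also first's best match
        if passes_ratio and mutual:
            matches.append((i, first))
    return matches
```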

2.4 RANSAC

After matching interest points, I ran the RANSAC algorithm to find the largest set of correspondences that agreed with a single homography. I set the distance-error threshold to 2 pixels and only needed 20 iterations of RANSAC to find a large enough set of inliers (>30) consistent with the same homography matrix (a sketch of the RANSAC loop follows the images below). Here are some of the homography keypoints found through RANSAC:

Balcony

ransac image 0 and 1
ransac image 1 and 2

Room

ransac image 0 and 1
ransac image 1 and 2

Frontdoor

ransac image 0 and 1
ransac image 1 and 2
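As referenced above, here is a minimal sketch of the RANSAC loop, reusing the `compute_homography` least-squares solver sketched in Part 1; the array shapes and helper name are assumptions:

```python
import numpy as np

def ransac_homography(pts1, pts2, n_iters=20, thresh=2.0):
    """Find the largest inlier set consistent with a single homography.
    compute_homography is the least-squares solver sketched in Part 1."""
    best_inliers = np.zeros(len(pts1), dtype=bool)
    for _ in range(n_iters):
        sample = np.random.choice(len(pts1), 4, replace=False)
        H = compute_homography(pts1[sample], pts2[sample])
        # Project pts1 through H and measure distance to the matched pts2.
        ones = np.ones((len(pts1), 1))
        proj = (H @ np.hstack([pts1, ones]).T).T
        proj = proj[:, :2] / proj[:, 2:3]
        inliers = np.linalg.norm(proj - pts2, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers for the final homography.
    return compute_homography(pts1[best_inliers], pts2[best_inliers]), best_inliers
```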

2.5 Mosaic

Finally, I reused code from part 1 to generate mosaics. The homographies are found using the corresponding points from RANSAC, and the same 2-band blending method is used again. I won't display intermediate results for 2-band blending since I've already shown them in part 1, but here are results from both manual (part 1) and automatic (part 2) stitching.

Balcony

manual stitching
automatic stitching

Room

manual stitching
automatic stitching

Frontdoor

manual stitching
automatic stitching

2.6 Things I learned

I'll admit I didn't expect the automatic algorithm to work so well. I had expected there to be a lot of noise in the detected corners, which turned out to be true, but the methods implemented in this part, such as feature matching and RANSAC, were able to sift through the noisy corners and find very accurate corresponding points.