Project 4 - Brian Agustino

Part 1 - Shoot the Pictures

We would first take two pictures that are next to each other, with enough overlap between them.

bedroom1

image1.jpg

bedroom2

image2.jpg

Since these pictures are taken at a resolution of 1836 x 3264, we would first resize them by a factor of 4, down to 459 x 816. We would then use the ginput function to pick the corresponding points between the two images.

Part 2 - Recover Homographies

We would first need to compute the homography matrix H that transforms p to p', where p' = Hp.

Reference: https://towardsdatascience.com/estimating-a-homography-matrix-522c70ec4b2c

homography_1

From the equation above, we could first take the dot product of the H matrix with the original points.

This gives us the value of w (Za in the diagram above). We could then substitute this value back in for w and rearrange the equations to get the following system.

homography_2

Since this equation is in the form Ah = b, we can apply least squares to estimate h using np.linalg.lstsq.

Thus we can now get the Homography matrix H.
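Below is a minimal sketch of how this least-squares setup could look in code. The function name computeH and the (n, 2) point layout are assumptions for illustration, with the bottom-right entry of H fixed to 1.

```python
import numpy as np

def computeH(im1_pts, im2_pts):
    """Estimate H such that p' = Hp, from (n, 2) arrays of (x, y) points.

    H has 8 unknowns (its bottom-right entry is fixed to 1), and each
    correspondence contributes two rows to the system Ah = b.
    """
    n = len(im1_pts)
    A = np.zeros((2 * n, 8))
    b = np.zeros(2 * n)
    for i, ((x, y), (xp, yp)) in enumerate(zip(im1_pts, im2_pts)):
        # x' equation: h1*x + h2*y + h3 - x'*(h7*x + h8*y) = x'
        A[2 * i] = [x, y, 1, 0, 0, 0, -xp * x, -xp * y]
        # y' equation: h4*x + h5*y + h6 - y'*(h7*x + h8*y) = y'
        A[2 * i + 1] = [0, 0, 0, x, y, 1, -yp * x, -yp * y]
        b[2 * i], b[2 * i + 1] = xp, yp
    # Least-squares solution for the 8 unknown entries of H.
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.append(h, 1).reshape(3, 3)
```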

Part 3 - Warp the Images

Now we would like to warp the image by the homography matrix H. We would first find the bounding box of the resulting image by applying H to the four corners of the first image, and then take the min and max of the transformed x and y coordinates.

We would then translate the homography matrix by -min_x and -min_y and take the inverse of the result. We would then compute the source coordinates for each pixel of the new image and apply cv2.remap to interpolate from the original image into the new warped image.
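A minimal sketch of this inverse-warping step is shown below; the function name warp_image and the exact arrangement of the translation are assumptions, but the overall flow matches the description above.

```python
import cv2
import numpy as np

def warp_image(im, H):
    """Warp im by the homography H using inverse mapping and cv2.remap.

    Returns the warped image and the (min_x, min_y) bounding-box offset,
    so the result can later be placed onto a mosaic canvas.
    """
    h, w = im.shape[:2]
    # Forward-map the four corners to find the bounding box of the output.
    corners = np.array([[0, 0, 1], [w, 0, 1], [w, h, 1], [0, h, 1]], dtype=float).T
    mapped = H @ corners
    mapped /= mapped[2]
    min_x, min_y = np.floor(mapped[:2].min(axis=1)).astype(int)
    max_x, max_y = np.ceil(mapped[:2].max(axis=1)).astype(int)
    out_w, out_h = max_x - min_x, max_y - min_y

    # Translate so the bounding box starts at (0, 0), then invert:
    # for every output pixel we ask where it came from in the source.
    T = np.array([[1, 0, -min_x], [0, 1, -min_y], [0, 0, 1]], dtype=float)
    H_inv = np.linalg.inv(T @ H)

    xs, ys = np.meshgrid(np.arange(out_w), np.arange(out_h))
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = H_inv @ coords
    src /= src[2]
    map_x = src[0].reshape(out_h, out_w).astype(np.float32)
    map_y = src[1].reshape(out_h, out_w).astype(np.float32)

    # cv2.remap interpolates the source image at the computed coordinates.
    warped = cv2.remap(im, map_x, map_y, interpolation=cv2.INTER_LINEAR)
    return warped, (min_x, min_y)
```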

image1

image1.jpg

warped_image1

warped_image1.jpg

Part 4 - Image Rectification

We would then test this warping on flat surfaces by rectifying them, which checks that our homography matrix is correct and that the warping works well. We would first pick the 4 corner points of the object (for example, the table top) and define 4 target points forming a rectangle or square. We can then warp the source image onto these target points.
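As a small illustration (the corner coordinates below are made up, and computeH and warp_image are the sketches from Parts 2 and 3), rectification just maps the hand-picked surface corners onto an axis-aligned rectangle:

```python
import numpy as np

# Hypothetical corner points of the table top in the source photo.
src_pts = np.array([[205, 310], [612, 295], [655, 540], [180, 560]])
# Target rectangle the table top should map to.
dst_pts = np.array([[0, 0], [400, 0], [400, 300], [0, 300]])

H = computeH(src_pts, dst_pts)           # homography from table corners to rectangle
rectified, _ = warp_image(table_im, H)   # table_im: the loaded source image
```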

table1

Initial table image1

Rect_table1

Rectified table image1

table2

Initial table image2

Rect_table2

Rectified table image2

Part 5 - Blend the images into a mosaic

We could then blend these two images into a single panoramic image. We can do this by cropping them onto a common canvas and averaging them together.
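One simple way this averaging could look is sketched below, assuming warp_image from Part 3 returns the bounding-box offset; the exact cropping used for the results here may differ.

```python
import numpy as np

def blend_average(warped_im1, offset1, im2):
    """Place the warped first image and the second image on a shared
    canvas and average them where they overlap."""
    min_x, min_y = offset1
    h1, w1 = warped_im1.shape[:2]
    h2, w2 = im2.shape[:2]
    # im2 stays at the origin of its own frame; the warped image is
    # shifted by its bounding-box offset.
    ox, oy = -min(min_x, 0), -min(min_y, 0)
    canvas_w = max(w1 + min_x, w2) - min(min_x, 0)
    canvas_h = max(h1 + min_y, h2) - min(min_y, 0)

    acc = np.zeros((canvas_h, canvas_w, 3), dtype=float)
    count = np.zeros((canvas_h, canvas_w, 1), dtype=float)

    # Accumulate each image and count how many images cover each pixel,
    # ignoring the black border around the warped image.
    y1, x1 = oy + min_y, ox + min_x
    acc[y1:y1 + h1, x1:x1 + w1] += warped_im1
    count[y1:y1 + h1, x1:x1 + w1] += (warped_im1.sum(axis=2, keepdims=True) > 0)
    acc[oy:oy + h2, ox:ox + w2] += im2
    count[oy:oy + h2, ox:ox + w2] += 1

    return (acc / np.maximum(count, 1)).astype(warped_im1.dtype)
```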

warped_image1

image2

blend

Part 4b - Feature Matching for Autostitching

In this part, we would like to automatically determine the correspondences between the two images.

bedroom1

bedroom1.jpg

bedroom2

bedroom2.jpg

Harris Corners

We would first use the provided get_harris_corner function from Harris.py, which returns the corner strength of each pixel of the image and the coordinates of the Harris corners.

all_pts_bedroom1

all_points_bedroom1.jpg

all_pts_bedroom2

all_points_bedroom2.jpg

Adaptive Non-Maximal Suppression (ANMS)

We would then apply ANMS to filter the Harris corners. We would first sort the corner coordinates in descending order of corner strength. Then, starting from a large radius, we would keep for each coordinate only the maximum within that radius and gradually decrease the radius. We would then return the first 500 of these coordinates, which are both strong and well spread out across the image.
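Below is a minimal sketch of ANMS using the equivalent suppression-radius formulation (for each corner, the distance to the nearest significantly stronger corner); the function name anms, the robustness constant, and the use of scipy's cdist are assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import cdist

def anms(coords, strengths, n_keep=500, c_robust=0.9):
    """Keep the n_keep corners with the largest suppression radii.

    coords: (n, 2) corner coordinates; strengths: (n,) corner strengths.
    A corner's suppression radius is its distance to the nearest corner
    that is significantly stronger, so keeping the largest radii gives
    points that are both strong and spatially spread out.
    """
    dists = cdist(coords, coords)                      # pairwise distances
    # suppresses[i, j] is True if corner j suppresses corner i.
    suppresses = strengths[None, :] * c_robust > strengths[:, None]
    dists[~suppresses] = np.inf                        # ignore non-suppressing corners
    radii = dists.min(axis=1)                          # suppression radius per corner
    keep = np.argsort(-radii)[:n_keep]                 # largest radii first
    return coords[keep]
```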

after_anms_bedroom1

after_anms_bedroom1.jpg

after_anms_bedroom2

after_anms_bedroom2.jpg

Feature Description

We would then want to further filter these coordinates by computing a feature descriptor for each one. We can get this descriptor by taking a 40x40 patch centered at the coordinate, rescaling it by a factor of 0.2 to an 8x8 descriptor, and normalizing it. This gives a list of feature descriptors, one per coordinate, for both images.
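A minimal sketch of this descriptor extraction is below; it assumes a grayscale image, (row, col) corner coordinates, and bias/gain normalization, all of which are illustrative choices rather than the exact code used here.

```python
import cv2
import numpy as np

def extract_descriptors(im_gray, coords, patch=40, scale=0.2):
    """Return flattened 8x8 descriptors and the coordinates they came from."""
    half = patch // 2
    size = int(round(patch * scale))          # 40 * 0.2 = 8
    descriptors, kept = [], []
    for y, x in coords:                       # assuming (row, col) coordinates
        y, x = int(y), int(x)
        # Skip corners whose 40x40 window would fall outside the image.
        if y - half < 0 or x - half < 0 or y + half > im_gray.shape[0] or x + half > im_gray.shape[1]:
            continue
        window = im_gray[y - half:y + half, x - half:x + half]
        small = cv2.resize(window, (size, size), interpolation=cv2.INTER_AREA)
        small = (small - small.mean()) / (small.std() + 1e-8)   # bias/gain normalize
        descriptors.append(small.ravel())
        kept.append((y, x))
    return np.array(descriptors), np.array(kept)
```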

feature_descriptor_1

feature_descriptor_1.jpg

feature_descriptor_2

feature_descriptor_2.jpg

Feature Matching

We would then apply feature matching using Lowe's trick: for each descriptor in the first image, we find the smallest and the second-smallest SSD to the descriptors in the second image and take the ratio of the two. If the ratio is less than a threshold of 0.3, we keep the corresponding pair of coordinates. A small SSD alone does not guarantee a good match, but a small ratio means the best match is much better than the runner-up.
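A minimal sketch of this ratio test is shown below, assuming the descriptors have been flattened into (n, 64) arrays as in the previous sketch; match_features is a hypothetical name and the 0.3 threshold follows the description above.

```python
import numpy as np

def match_features(desc1, desc2, threshold=0.3):
    """Return index pairs (i, j) whose best-to-second-best SSD ratio is small."""
    # Pairwise SSD between every descriptor in image 1 and image 2:
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    ssd = ((desc1 ** 2).sum(axis=1)[:, None]
           + (desc2 ** 2).sum(axis=1)[None, :]
           - 2 * desc1 @ desc2.T)
    matches = []
    for i in range(len(desc1)):
        order = np.argsort(ssd[i])
        best, second = ssd[i, order[0]], ssd[i, order[1]]
        # Lowe's trick: only keep matches that clearly beat the runner-up.
        if best / (second + 1e-10) < threshold:
            matches.append((i, order[0]))
    return matches
```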

after_feature_matching_bedroom1

after_feature_matching_bedroom1.jpg

after_feature_matching_bedroom2

after_feature_matching_bedroom2.jpg

RANSAC

We would then further refine these matches using RANSAC. On each iteration, we randomly select 4 correspondences and compute the homography matrix from them. We then apply this homography matrix to all the matched coordinates and record the inliers, the coordinates for which distance(p', Hp) is less than a constant epsilon. We keep the largest set of inliers found over all iterations.
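A minimal sketch of this loop is below, reusing the computeH sketch from Part 2; the iteration count and epsilon value are assumptions.

```python
import numpy as np

def ransac_homography(pts1, pts2, n_iters=1000, epsilon=2.0):
    """4-point RANSAC over matched (n, 2) point arrays pts1 -> pts2."""
    n = len(pts1)
    ones = np.ones((n, 1))
    best_inliers = np.array([], dtype=int)
    for _ in range(n_iters):
        sample = np.random.choice(n, 4, replace=False)
        H = computeH(pts1[sample], pts2[sample])
        # Apply H to every matched point in the first image.
        proj = H @ np.hstack([pts1, ones]).T
        proj = (proj[:2] / proj[2]).T
        dist = np.linalg.norm(proj - pts2, axis=1)
        inliers = np.where(dist < epsilon)[0]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Recompute the final homography from all inliers of the best model.
    return computeH(pts1[best_inliers], pts2[best_inliers]), best_inliers
```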

after_ransac_bedroom1

after_ransac_bedroom1.jpg

after_ransac_bedroom2

after_ransac_bedroom2.jpg

Living Room

living1

living1.jpg

living2

living2.jpg

After automatically finding corresponding points

final_living1

final_living1.jpg

final_living2

final_living2.jpg

Chair

chair1

chair1.jpg

chair2

chair2.jpg

After automatically finding corresponding points

final_chair1

final_chair1.jpg

final_chair2

final_chair2.jpg

Warping and Blending

After automatically finding the corresponding points of the images, we can then apply the warping and blending that we implemented in Part 4a.

Bedroom

bedroom1

bedroom1.jpg

bedroom2

bedroom2.jpg

blended_bedroom

blended_bedroom.jpg

Living Room

living1

living1.jpg

living2

living2.jpg

blended_living

blended_living.jpg

Chair

chair1

chair1.jpg

chair2

chair2.jpg

blended_chair

blended_chair.jpg

What I learned

I learned a lot from this project, such as how to compute the homography matrix, warp images, and automatically find corresponding points. It is very interesting that there are so many creative ways to further filter the corner coordinates down to the best corresponding points.