**CS294-26 Project 5: Stitching Photo Mosaics**

By Neerja Thakkar

Part A: Image Warping and Mosaicing
===============================================================================

Shoot the Pictures
-----------------

The first step is to shoot some pictures that will be stitched into mosaics! I used an iPhone camera app that allowed me to lock exposure, white balance, and aperture. These photos were shot in (the very cold) Minnesota, where I am living right now.

![](river1.JPG width=300) ![](river2.JPG width=300)

![](house1.JPG width=300) ![](house2.JPG width=300)

![](livingroom1.JPG width=300) ![](livingroom2.JPG width=300)

I did not take these photos of Mount Rainier, but I also tested my method on them:

![](Rainier1.png width=300) ![](Rainier2.png width=300)

Recover Homographies
-----------------

To recover a homography, I first select 4-8 pairs of corresponding points on each image. We can then set up a system of equations \$Ah=b\$, where \$h\$ contains the 8 unknown values of the homography matrix \$H\$ (the bottom-right entry is fixed to 1), and \$A\$ and \$b\$ are formed from the corresponding points. This system is solved with least squares.

Warp the Images
-----------------

Once we have computed the homography between two images, we can warp the first image into the frame of the second. I used inverse warping for this part, very similar to the warping in Project 3.

Image Rectification
-------------------

To test that solving for homographies and warping images works, we rectify some images. I took images of a painting and a book, both rectangular, defined a square/rectangular shape to warp them into, and then let my code turn them into perfect rectangles!

![input](IMG_2187.jpg width=200) ![rectified](out/rect_1.png width=300)

![input](IMG_2186.jpg width=200) ![rectified](out/rect_2.png width=300)

Blend the images into a mosaic
---------------------------

To blend two images into a mosaic, I warped one image to align with the other, and then blended the pair using a Laplacian stack.
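The least-squares homography solve described above can be sketched as follows. This is a minimal illustration, not my exact code: the function name is mine, and it assumes the points are given as `(x, y)` tuples, with the bottom-right entry of \$H\$ fixed to 1 so only 8 unknowns remain.

```python
import numpy as np

def compute_homography(pts_a, pts_b):
    """Estimate the homography H mapping pts_a -> pts_b from >= 4 correspondences.

    Builds the system Ah = b (two rows per correspondence, H[2,2] fixed to 1)
    and solves it with least squares.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts_a, pts_b):
        # xp = (h0*x + h1*y + h2) / (h6*x + h7*y + 1), cleared of the denominator:
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1).reshape(3, 3)
```

With exactly four correspondences the system is square and the solve is exact; with 5-8 pairs, least squares averages out small clicking errors.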
Here are my results:

![Mount Rainier](out/blended_rainier.png width=600)

![House](out/blended_house.png width=600)

![River](out/blended_river.png width=600)

Reflections
------------

I learned that the selected points really matter! At first, it seemed like my implementation was wrong, but I realized that I was just not clicking on corresponding points carefully enough.

Part B: Feature Matching for Autostitching
===============================================================================

In the second part of this project, we find correspondences automatically, and then use them to compute a robust homography estimate.

Detecting Corners
-------------

First, we use Harris corner detection to find "corners" in the image: points where the intensity changes a lot in every direction. This results in a very large number of detected points:

![Rainier Harris corners](out_partB/Rainier1_harris_corners.png width=600)

![House Harris corners](out_partB/house1_harris_corners.png width=600)

Adaptive Non-Maximal Suppression
-----------------------

Since Harris corner detection alone yields a huge number of corners, we want to reduce them, but in a clever way that keeps the strongest corners while also distributing the kept corners throughout the image. Therefore, we use Adaptive Non-Maximal Suppression (ANMS) to select the best 500 corners.

![Rainier ANMS corners](out_partB/Rainier1_ANMS_corners.png width=600)

![House ANMS corners](out_partB/house1_ANMS_corners.png width=600)

Feature Descriptor Extraction
-----------------------

Next, we extract an 8x8 axis-aligned patch centered at each corner. First, we take a 40x40 window around each corner. Then, this window is blurred with a Gaussian kernel. Finally, every 5th pixel is sampled from the blurred 40x40 window, resulting in an 8x8 patch.

Feature Matching
----------------

Once all of the feature descriptors are extracted from two images, it's time to match them to each other!
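Before matching, the descriptor extraction described above (40x40 window, Gaussian blur, sample every 5th pixel) can be sketched like this. The function name, the `(row, col)` corner convention, and the blur sigma are my illustrative choices, not details from the writeup:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_descriptors(image, corners, window=40, spacing=5):
    """Extract an axis-aligned 8x8 patch per corner: take a 40x40 window,
    blur it with a Gaussian, then keep every 5th pixel."""
    half = window // 2
    descriptors = []
    for r, c in corners:
        # Skip corners too close to the border for a full 40x40 window.
        if r < half or c < half or r + half > image.shape[0] or c + half > image.shape[1]:
            continue
        patch = image[r - half:r + half, c - half:c + half]
        # Sigma chosen to roughly match the 5-pixel sampling rate (an assumption).
        blurred = gaussian_filter(patch, sigma=spacing / 2)
        descriptors.append(blurred[::spacing, ::spacing].ravel())
    return np.array(descriptors)
```

Blurring before subsampling matters: it prevents the sparse 8x8 samples from aliasing high-frequency detail in the 40x40 window.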
For this, we use SSD to measure the similarity between two patches. For each patch in image A, we find the closest and second-closest patches in image B, and look at the ratio of the two SSD distances. If this ratio is small (I used a threshold of 0.15), meaning the first nearest neighbor is a much better match than the second, we consider the pair a match.

RANSAC
---------

To compute a robust homography estimate from the matched features, we use RANSAC. In each iteration of RANSAC, we randomly select four pairs of matched features and use them to compute a candidate homography. Then, we apply this homography to all of the other matched points in image A, and count how many are inliers, i.e. points whose transformed locations are close to their corresponding matches in image B. We choose the candidate homography that yields the largest set of inliers.

Results
--------

Here are my final results using automatic stitching!

![Mount Rainier, automatically stitched](out_partB/blended_rainier2.png width=600)

![Mount Rainier, with manual keypoint selection](out/blended_rainier.png width=600)

![House, automatically stitched](out_partB/blended_house.png width=600)

![House, with manual keypoint selection](out/blended_house.png width=600)

![River, automatically stitched](out_partB/blended_river.png width=600)

![River, with manual keypoint selection](out/blended_river.png width=600)

![Living room, automatically stitched](out_partB/blended_livingroom.png width=600)

Reflections
------------

I was pleasantly surprised by how well the automatic feature extraction and matching worked, with the help of a few cool tricks. Every component of the pipeline we implemented made a lot of sense, and it is fascinating how ANMS, throwing away features without a small enough first-to-second nearest neighbor ratio, and RANSAC come together to give us a really good homography estimate.
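As a closing sketch, the SSD ratio test from the Feature Matching section might look like the following. The function name and the 0.15 default are taken from the description above; everything else is illustrative:

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.15):
    """Match descriptors by SSD, keeping a pair only when the nearest
    neighbor is much closer than the second-nearest (ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        ssd = np.sum((desc_b - d) ** 2, axis=1)  # SSD to every patch in B
        nn1, nn2 = np.argsort(ssd)[:2]
        if ssd[nn1] / ssd[nn2] < ratio:
            matches.append((i, int(nn1)))
    return matches
```

A small threshold like 0.15 is deliberately strict: it discards many true matches, but the matches that survive are very likely correct, which is exactly what RANSAC wants as input.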
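The RANSAC loop described above can likewise be sketched as follows. This is a simplified stand-alone version (names, iteration count, and the inlier threshold are my choices); it reuses the least-squares homography solve from Part A and refits on the full inlier set at the end:

```python
import numpy as np

def fit_homography(pts_a, pts_b):
    # Least-squares homography solve (Ah = b, H[2,2] fixed to 1), as in Part A.
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts_a, pts_b):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1).reshape(3, 3)

def ransac_homography(pts_a, pts_b, iters=1000, thresh=2.0, seed=0):
    """Keep the 4-point homography with the most inliers, then refit on them."""
    rng = np.random.default_rng(seed)
    pts_a, pts_b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    homog_a = np.hstack([pts_a, np.ones((len(pts_a), 1))])  # homogeneous coords
    best_inliers = np.zeros(len(pts_a), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(pts_a), 4, replace=False)
        H = fit_homography(pts_a[idx], pts_b[idx])
        proj = homog_a @ H.T
        proj = proj[:, :2] / proj[:, 2:3]          # back to Cartesian coords
        inliers = np.linalg.norm(proj - pts_b, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final estimate: least-squares fit over the whole best inlier set.
    return fit_homography(pts_a[best_inliers], pts_b[best_inliers]), best_inliers
```

Because any sample containing an outlier produces a homography that fits few other matches, the largest inlier set almost always comes from an all-inlier sample, which is what makes the estimate robust.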