COMPSCI 194-26: Project 4

Varun Saran

Project 4A: Image Warping and Mosaicing

Image Rectification

I used inverse warping to produce the warped image, so the H matrix maps points from the final (warped) image's coordinates back to the original (unwarped) image's coordinates.
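To make the inverse-warp idea concrete, here is a minimal numpy sketch (not my exact implementation): for every pixel in the output canvas, H maps its coordinates back into the source image and we sample there. The nearest-neighbor sampling and the function name inverse_warp are just illustrative choices.

import numpy as np

def inverse_warp(img, H, out_shape):
    # H maps output (warped) pixel coordinates back to input (unwarped) coordinates.
    H_out, W_out = out_shape
    ys, xs = np.indices((H_out, W_out))
    # Homogeneous coordinates (x, y, 1) of every output pixel
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = H @ coords                      # map each output pixel back into the source
    src /= src[2]                         # dehomogenize
    sx = np.round(src[0]).astype(int)
    sy = np.round(src[1]).astype(int)
    out = np.zeros((H_out, W_out) + img.shape[2:], dtype=img.dtype)
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    # Nearest-neighbor sampling; interpolation would look smoother
    out[ys.ravel()[valid], xs.ravel()[valid]] = img[sy[valid], sx[valid]]
    return out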

To test my warping, I used a carpet (similar to the example seen in lecture), and warped it into a top-down view where the pattern is much more visible. I also did the night street example shown in the slides.

Angled view of carpet
Top down view of carpet
Initial night view of building
New perspective

Blurring Images into Mosaics

Now that we can successfully warp images, the next step is to take multiple images from the same spot (with the camera slightly rotated) and stitch them together to form a mosaic. For the first set of images, I took these photos by the building near Lewis Hall.

A naive overlapping of the images causes strange coloration, so I gradually increased/decreased the alphas of the 2 images to blend smoothly from 1 image to the other. This makes the coloration of the mosaic much better.

The blurring was the hardest part, and I ended up using a 1D blur (every pixel in the same column gets the same alpha), so along x the mosaic looks pretty clean. However, along y there are some awkward artifacts at the top and bottom of the overlap region: most of each column is covered by both images, but because of the warp the very top or bottom comes from only 1 image.
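Here is a minimal sketch of that column-wise alpha ramp, assuming both images have already been warped onto a shared canvas and that the second image extends to the right of the first; the function and variable names are just illustrative.

import numpy as np

def blend_columns(left, right, left_mask, right_mask):
    # 1D alpha blend: every pixel in the same column gets the same alpha.
    left = left.astype(float)
    right = right.astype(float)
    overlap_cols = np.where(left_mask.any(axis=0) & right_mask.any(axis=0))[0]
    alpha = np.ones(left.shape[1])                    # weight given to the left image
    if overlap_cols.size:
        c0, c1 = overlap_cols[0], overlap_cols[-1]
        alpha[c0:c1 + 1] = np.linspace(1.0, 0.0, c1 - c0 + 1)   # fade left -> right
        alpha[c1 + 1:] = 0.0
    a = alpha[None, :, None]                          # broadcast over rows and channels
    out = a * left + (1 - a) * right
    # Where only one image covers a pixel, just take that image directly
    out[left_mask & ~right_mask] = left[left_mask & ~right_mask]
    out[right_mask & ~left_mask] = right[right_mask & ~left_mask]
    return out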

Original Left image
Original middle image
Original right image
Left remains unwarped
Middle warped to match left's coordinates
Right warped to match left + warped middle's coordinates
Naive Warp and Overlap
Warp and Average Weight

Another example, making a mosaic of a planar surface.

Original Left image
Original right image
Warp right image
Naive Warp and Overlap
Warp Left + Alpha blur
Warp Right + alpha blur
Final Warp and Weighted Average using alpha blurring

Another example, making a mosaic of a planar surface.

Original Left image
Original right image
Naive Warp and Overlap
Warp Left + Alpha blur
Warp Right + alpha blur
Final Warp and Weighted Average using alpha blurring

I tried taking these photos at night to see the difference, but the 2 photos just ended up with different lighting automatically chosen by my iPhone. The blurring is still a lot better than a naive overlap, but a vertical line is still clearly visible.

What I learned

I found the warping part to be really cool, as well as the mosaicing. For the homography section, I took the time to understand the derivation of the A matrix in Ax=b for least squares. I saw some resources online that kind of gave it away, but at first I couldn't figure out why it was structured that way, so I started from p' = H*p and worked out how the A matrix gets its structure (each row contains both x and x' terms).

The alpha blurring took a lot of trial and error to get it to show one image with normal-looking intensities, instead of weird overlapping images, strange colorations, or random transparent parts depending on how the images overlap. I ended up using a gradually increasing alpha along x whenever a new image was added to the mosaic, merged with the old image under a gradually decreasing alpha. The end result is actually pretty good.

I'm excited for the next part of the project, where we'll use automatic feature detection instead of having to manually click on features and figure out the corresponding pixel coordinates. Manual mosaicing required quite a bit of work to find corresponding points in the right coordinate systems, so the automatic detector will be really cool.
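For reference, here is a minimal numpy sketch of that A matrix construction, setting h33 = 1 and solving for the remaining 8 entries; the function name compute_H is just illustrative.

import numpy as np

def compute_H(pts1, pts2):
    # Least-squares homography mapping pts1 -> pts2; each is N x 2 with N >= 4.
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts1, pts2):
        # From x' = (h11*x + h12*y + h13) / (h31*x + h32*y + 1), multiplied out:
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(h, 1).reshape(3, 3)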






Project 4B: Feature matching and Autostitching

Detecting Corner Features in an Image

For this part, we use the Harris interest point detector to find corners in the image. Note that I did not do the Adaptive Non-Maximal Suppression part of this project. Here we see all the corners detected on 2 pictures of a door.

All Harris corners (with corner strength above a certain threshold)
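As a rough sketch of what the detection step looks like (the detector and threshold actually used in the project may differ), scikit-image's Harris response plus non-maximum suppression is enough to reproduce this kind of output; the relative threshold and minimum distance below are assumptions.

import numpy as np
from skimage.color import rgb2gray
from skimage.feature import corner_harris, peak_local_max

def harris_corners(gray, threshold_rel=0.1, min_distance=5):
    # Harris corner response, then keep local maxima above a relative threshold
    h = corner_harris(gray)
    coords = peak_local_max(h, min_distance=min_distance, threshold_rel=threshold_rel)
    strengths = h[coords[:, 0], coords[:, 1]]
    return coords, strengths            # (row, col) coordinates and their strengths

# e.g. coords, strengths = harris_corners(rgb2gray(door_img))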

Extracting and Matching Feature Descriptors

For each feature, we generate a feature descriptor according to section 4 of the paper. For feature matching, we use Lowe's ratio test, comparing the distance to the 1-NN against the distance to the 2-NN to keep only unambiguous matches. Finally, we use RANSAC to reject outliers: we choose 4 random matches, compute a homography from them, and count the number of inliers, repeating this numerous times to find which set of 4 yields the most inliers. To get the best homography, we then run least squares on the entire set of inliers. Here is a comparison of the features matched in 2 images, after feature matching and outlier rejection (a sketch of the descriptor, ratio test, and RANSAC loop follows the images). Each point in one image has a clear corresponding point in the other. We also don't see any outliers, which is good because it means our transformation will be accurate.

image 1
image 2 with clear corresponding features
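Purely as an illustration of the steps above, here is a minimal numpy sketch of the descriptor sampling, Lowe ratio test, and RANSAC loop. The 8x8-from-40x40 axis-aligned sampling is the simplified reading of section 4; the function names, the 0.6 ratio, and the 2-pixel inlier threshold are my own assumptions, and compute_H is the least-squares fit sketched in part A.

import numpy as np

def mops_descriptor(gray, r, c, spacing=5, size=8):
    # Axis-aligned descriptor: sample an 8x8 patch from a 40x40 window
    # (every 5th pixel), then bias/gain normalize. Assumes (r, c) is
    # at least 20 pixels away from the image border.
    half = spacing * size // 2
    patch = gray[r - half:r + half:spacing, c - half:c + half:spacing]
    return ((patch - patch.mean()) / (patch.std() + 1e-8)).ravel()

def lowe_matches(desc1, desc2, ratio=0.6):
    # Keep a match only if the 1-NN distance is well below the 2-NN distance.
    dists = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    nn = np.argsort(dists, axis=1)[:, :2]
    return [(i, j1) for i, (j1, j2) in enumerate(nn)
            if dists[i, j1] < ratio * dists[i, j2]]

def ransac_homography(pts1, pts2, iters=1000, eps=2.0):
    # Sample 4 matches at a time, keep the H with the most inliers,
    # then refit with least squares on all inliers.
    best = np.zeros(len(pts1), dtype=bool)
    for _ in range(iters):
        idx = np.random.choice(len(pts1), 4, replace=False)
        H = compute_H(pts1[idx], pts2[idx])
        proj = H @ np.column_stack([pts1, np.ones(len(pts1))]).T
        proj = (proj[:2] / proj[2]).T
        inliers = np.linalg.norm(proj - pts2, axis=1) < eps
        if inliers.sum() > best.sum():
            best = inliers
    return compute_H(pts1[best], pts2[best]), best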

Image Mosaicing

Now we show a few image mosaics using automatic feature detection and stitching, with the same alpha blurring as in 4a.
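Roughly, each auto-mosaic chains those pieces together as in the sketch below, reusing the hypothetical helpers from the earlier sketches (harris_corners, mops_descriptor, lowe_matches, ransac_homography, inverse_warp, blend_columns); for simplicity the sketch keeps img1's canvas size, which a real mosaic would expand.

import numpy as np
from skimage.color import rgb2gray

def auto_mosaic(img1, img2, margin=20):
    # Detect corners, describe, match, fit H with RANSAC, then warp img2
    # into img1's frame and blend with the column-wise alpha ramp from 4a.
    g1, g2 = rgb2gray(img1), rgb2gray(img2)
    c1, _ = harris_corners(g1)
    c2, _ = harris_corners(g2)
    keep = lambda cs, g: cs[(cs[:, 0] >= margin) & (cs[:, 0] < g.shape[0] - margin) &
                            (cs[:, 1] >= margin) & (cs[:, 1] < g.shape[1] - margin)]
    c1, c2 = keep(c1, g1), keep(c2, g2)       # drop corners too close to the border
    d1 = np.array([mops_descriptor(g1, r, c) for r, c in c1])
    d2 = np.array([mops_descriptor(g2, r, c) for r, c in c2])
    matches = lowe_matches(d1, d2)
    # compute_H works in (x, y) = (col, row) order, so swap the coordinates
    p1 = np.array([(c1[i][1], c1[i][0]) for i, _ in matches], dtype=float)
    p2 = np.array([(c2[j][1], c2[j][0]) for _, j in matches], dtype=float)
    H, _ = ransac_homography(p2, p1)          # maps img2 coordinates into img1's frame
    warped2 = inverse_warp(img2, np.linalg.inv(H), img1.shape[:2])
    mask1 = np.ones(img1.shape[:2], dtype=bool)
    mask2 = warped2.any(axis=2)
    return blend_columns(img1, warped2, mask1, mask2)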

image 1
image 2
Auto Mosaicing
Manual mosaic from part A

Now we look at a building at night. The lighting chosen by the camera seems to differ a little between the two shots, so unfortunately there is a distinct line between the 2 images, even after alpha blurring.

image 1
image 2
Auto Mosaicing
Manual mosaic from part A

Finally, we mosaic a building. For the manual part, I stitched 3 images into 1. Instead of warping the leftmost and rightmost images into the middle image, I first warped the middle into the left, and then the right into the warped version of the middle. This worked well for the manual mosaicing. However, when I tried the same for auto-mosaicing, it didn't work: the warped middle image has some rotation, and because my feature descriptors are not rotation invariant, feature matching failed, so the same warping order couldn't be used. I have included an auto-mosaic of just the middle image warped into the left image to show it looks similar to the manual mosaic. Additionally, for the auto-mosaic, I warped the left and right images into the middle image instead. This allowed all 3 images to be mosaiced together, but it means the perspective is different from the 3-image manual mosaic.

image 1
image 2
image 3
Auto Mosaicing on images 1 and 2
Manual mosaic from part A (images 1,2, and 3)
Auto-mosaic of all 3 images (but with a different perspective from the manual one). No blurring
Auto-mosaic with a (bad) attempt at blurring

What I learned

I think the automatic feature detection and matching is really cool. Finding manual correspondences is not convenient and not scalable, so I really liked the automatic aspect of it. It made testing really easy because I didn't have to manually find coordinates of matching points. I switched the order of warping for the last mosaic, and auto-mosaicing made that really easy. Making each auto-mosaic was so much faster than each manual mosaic. The previous projects have been really cool, but didn't feel too useful. This project seemed actually useful because feature detection, matching, and outlier rejection all feel like things that have real-world uses.