CS 194-26 Proj 6

Preetham Gujjula (cs194-26-adt)

Overview

In this project, we create panorama-style images by taking multiple photographs from the same vantage point at different angles. We warp the images to a common perspective using homographies and stitch them together to form the panorama.

Shooting and Digitizing

I took a few sets of photographs.

Rectification 1: A painting

Rectification 2: My laptop on a bed

Mosaic 1: The view outside my room

View 1

View 2

Mosaic 2: A dirty kitchen

View 1

View 2

View 3

Mosaic 3: A foreboding pantry

View 1

View 2

View 3

Part A: Manual Correspondence Points

Overview

In this part, we specify points of interest manually to perform image rectification, and to warp images into a panorama.

Recover Homographies

For the images I rectified, I recorded points around the subject and associated them with the 4 corners of a rectangle to create a set of 4 correspondences. For the mosaic images, I created about 10–12 pairs of correspondences between the images. For the mosaics with 3 images, I corresponded the left and right images each to the center image. I did not need to manually correct any of the correspondences I defined.

To compute the homography, I set up two equations for each pair of correspondence points, as described in https://math.stackexchange.com/a/1289595. Unlike the Math Stack Exchange answer, I set h9 = 1, so I only had to solve for 8 variables.

Of course, using 10–12 pairs of correspondences overdetermined my system, so I used np.linalg.lstsq to find a best-fit solution.
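As a rough sketch of this setup (assuming the correspondences are given as N × 2 arrays of (x, y) coordinates; compute_homography is just an illustrative helper name):

    import numpy as np

    def compute_homography(points1, points2):
        # Fit the homography mapping points1 -> points2, with h9 fixed to 1
        A, b = [], []
        for (x, y), (xp, yp) in zip(points1, points2):
            # Two equations per correspondence, from
            # x' = (h1 x + h2 y + h3) / (h7 x + h8 y + 1) and the analogous y' equation
            A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
            b.append(xp)
            A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
            b.append(yp)
        # Least-squares solution to the overdetermined 2N x 8 system
        h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return np.append(h, 1).reshape(3, 3)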

Warp Image

After computing the homography H from points1 to points2 (for example), I applied H to the corners of the image I wanted to warp to find the bounds of the warped image. Then I built an interpolation of the original image and, for every pixel inside the warped image, used the inverse of H to look up and interpolate the appropriate source value (inverse warping).
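The inverse-warping step might look roughly like this (a simplified sketch for a single-channel image, using scipy.ndimage.map_coordinates as the interpolator and ignoring the bookkeeping for warped corners that land at negative coordinates):

    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp_image(im, H):
        h, w = im.shape
        # Map the corners forward to estimate the bounds of the warped image
        corners = np.array([[0, 0, 1], [w, 0, 1], [0, h, 1], [w, h, 1]]).T
        mapped = H @ corners
        mapped = mapped[:2] / mapped[2]
        out_w = int(np.ceil(mapped[0].max()))
        out_h = int(np.ceil(mapped[1].max()))
        # For every pixel of the output, find its preimage in the source image
        ys, xs = np.mgrid[0:out_h, 0:out_w]
        pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
        src = np.linalg.inv(H) @ pts
        src = src[:2] / src[2]
        # Interpolate the source image at those (row, col) locations
        vals = map_coordinates(im, [src[1], src[0]], order=1, cval=0)
        return vals.reshape(out_h, out_w)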

Image Rectification

Applying this technique to the images I wanted to rectify yielded the following results:

Rectification 1: A painting

Rectification 2: My laptop on a bed

Blending Images

I applied the warping technique to each image set, and then padded the warped images so that the 2 or 3 results were aligned on a common canvas and could be stacked on top of each other to form the final image. To combine the images, I used np.maximum to take the maximum value at each pixel. The resulting images have very few artifacts.
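A minimal sketch of the blending step (blend_max and the offset convention are hypothetical; it assumes each warp's top-left position on the shared canvas is already known from the padding step):

    import numpy as np

    def blend_max(warps, offsets, canvas_shape):
        # Place each warped image on a shared canvas and keep the per-pixel maximum
        canvas = np.zeros(canvas_shape)
        for im, (r, c) in zip(warps, offsets):
            padded = np.zeros(canvas_shape)
            padded[r:r + im.shape[0], c:c + im.shape[1]] = im
            canvas = np.maximum(canvas, padded)
        return canvas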

Mosaic 1: The view outside my room

Mosaic 2: A dirty kitchen

Mosaic 3: A foreboding pantry

What I’ve Learned

I learned that creating a panorama only requires a little linear algebra to do properly. The concepts behind this project were not very difficult. The most difficult parts of the assignment were figuring out how to interpolate and apply the homography to the images properly, followed by figuring out how to stitch the images together.

Part B: Automatic Correspondence Point Selection

Overview

In this part, we use more advanced techniques to detect corners in images and use feature matching to associate these corners across images to generate correspondence points automatically.

Detecting Corner Features

We use Harris corner detection to detect corners. I used the provided detection code and set min_distance to 30 to trim the returned corners to a manageable number. For example, here is the set of Harris corners I obtained on one of my images:

Using Adaptive Non-Maximal Suppression, I selected 200 corners in each image that were well-spaced. For example, here are the selected Harris corners on the same image:
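The suppression step can be sketched as follows (coords is assumed to be an N × 2 array of corner locations with strengths the matching array of Harris responses; the 0.9 robustness constant is an assumption):

    import numpy as np

    def anms(coords, strengths, num_keep=200, c_robust=0.9):
        # For each corner, find the distance to the nearest corner that is
        # sufficiently stronger; keep the corners with the largest such radii
        n = len(coords)
        radii = np.full(n, np.inf)
        for i in range(n):
            stronger = strengths > strengths[i] / c_robust
            if np.any(stronger):
                dists = np.linalg.norm(coords[stronger] - coords[i], axis=1)
                radii[i] = dists.min()
        keep = np.argsort(-radii)[:num_keep]
        return coords[keep]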

Extracting a Feature Descriptor

For each selected Harris corner, I extracted a small patch of pixels around the corner, resampled it to 8 × 8, and flattened it. I used the final length-64 array as the feature for that corner.
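Roughly, the extraction might look like this (the 40-pixel window size and the zero-mean, unit-variance normalization are assumptions; extract_descriptor is an illustrative helper):

    import numpy as np
    from skimage.transform import resize

    def extract_descriptor(im, corner, window=40):
        # Assumes the corner is far enough from the image border for a full window
        r, c = corner
        half = window // 2
        patch = im[r - half:r + half, c - half:c + half]
        # Downsample to 8 x 8 and flatten into a length-64 descriptor
        small = resize(patch, (8, 8), anti_aliasing=True)
        vec = small.ravel()
        return (vec - vec.mean()) / vec.std()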

Feature Matching

I used the provided dist2 function to find the pairwise distances between features in two images that I wanted to stitch together. For each feature in the first image, I computed the ratio of the distances to the first and second nearest features in the second image. Then I selected all the features with a ratio less than 0.2 as features that were probably also present in the second image.

I associated each of the selected features from the first image with its closest feature in the second image, and obtained a list of corresponding points as a result.
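A sketch of this ratio test (feats1 and feats2 are assumed to be N × 64 arrays; the pairwise squared distances are computed inline here instead of with the provided dist2 function):

    import numpy as np

    def match_features(feats1, feats2, ratio_thresh=0.2):
        # Pairwise squared distances between every feature in image 1 and image 2
        d = ((feats1[:, None, :] - feats2[None, :, :]) ** 2).sum(axis=2)
        matches = []
        for i in range(len(feats1)):
            order = np.argsort(d[i])
            nearest, second = d[i][order[0]], d[i][order[1]]
            # Keep the match only if the nearest neighbor is much closer than the second
            if nearest / second < ratio_thresh:
                matches.append((i, order[0]))
        return matches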

Robust Homography Estimate

I used the RANSAC algorithm to iteratively select 4 correspondences, compute a provisional homography, and check how many points in the first image are satisfactorily mapped to their corresponding point in the second image by the homography.

I ran this loop until the provisional homography acceptably mapped at least 8 points. I used these 8 (or more) correspondences to compute the final homography, and used the techniques from part A to apply the homography to the entire image, and stitch the images together.
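A sketch of this loop (the pixel threshold, iteration cap, and the ransac_homography name are assumptions; it reuses the compute_homography helper sketched in Part A):

    import numpy as np

    def ransac_homography(pts1, pts2, thresh=2.0, min_inliers=8, max_iters=5000):
        n = len(pts1)
        for _ in range(max_iters):
            # Fit a provisional homography to 4 random correspondences
            idx = np.random.choice(n, 4, replace=False)
            H = compute_homography(pts1[idx], pts2[idx])
            # Project every point from image 1 into image 2 and measure the error
            homog = np.column_stack([pts1, np.ones(n)]) @ H.T
            proj = homog[:, :2] / homog[:, 2:3]
            errors = np.linalg.norm(proj - pts2, axis=1)
            inliers = errors < thresh
            if inliers.sum() >= min_inliers:
                # Refit on all inliers to get the final homography
                return compute_homography(pts1[inliers], pts2[inliers])
        raise RuntimeError("RANSAC did not find enough inliers")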

Final Results

Overall, the automatic correspondence points performed just as well as the manual points.

Mosaic 1: The view outside my room

From automatic correspondence points:

From manual correspondence points:


Mosaic 2: A dirty kitchen

From automatic correspondence points:

From manual correspondence points:

Mosaic 3: A foreboding pantry

From automatic correspondence points:

From manual correspondence points:


What I’ve Learned

The coolest thing I learned in Part B is that automatically selecting correspondence points can be accomplished fairly reliably with simple linear algebra techniques. Before taking this course, I would have imagined that something like this would be a huge technical undertaking and very error-prone.