Project 5: Auto-Stitching Photo Mosaics

Harish Palani (CS 194-26)

Part A: Image Warping & Mosaicing

1.   Shoot the Pictures

For this project, I used the following three perspectives captured at a local park here in Portland, Oregon. These shots all revolve around a basketball hoop, which forms a key focal point to be used when aligning images for stitching.

[Figure: the three input perspectives, centered on the basketball hoop]

I chose to blend the center and right perspectives, zeroing in on three objects present in each perspective: the basketball hoop, the lamppost, and a street sign in the background. The final points selected for alignment can be seen below.

[Figure: correspondence points selected on the center and right perspectives]

2.   Recover Homographies

With correspondences defined by hand as shown above, I had to recover the parameters of the transformation between the two images to properly inform subsequent warping & rectification steps. The resulting homography matrix H is shown below.

 0.664500 -0.003568  328.585141
-0.100352  0.856832   35.824939
-0.000295 -0.000003    1.000000
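The least-squares fit behind this matrix can be sketched with the standard direct linear transform (DLT). Below is a minimal numpy version, not necessarily the exact implementation used here; it assumes correspondences are given as (x, y) rows.

```python
import numpy as np

def computeH(pts1, pts2):
    """Estimate the homography H mapping pts1 -> pts2 by least squares (DLT).

    pts1, pts2: (N, 2) arrays of corresponding (x, y) points, N >= 4.
    Returns a 3x3 matrix H, scaled so that H[2, 2] == 1.
    """
    A = []
    for (x, y), (u, v) in zip(pts1, pts2):
        # Each correspondence contributes two rows of the linear system.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(A, dtype=float)
    # The solution is the right singular vector with the smallest
    # singular value (the direction closest to A's null space).
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

With exactly four correspondences the system is fully determined; extra points make it overdetermined, and the smallest-singular-vector solution minimizes the algebraic error across all of them.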

3.   Image Warping & Rectification

Warping and rectification algorithms for this part were tested on the right perspective displayed above, with the warped output ultimately blended with the center perspective in the final mosaic. Results for this section are shown below.

[Figure: warped and rectified right perspective]
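The warp itself can be sketched as inverse warping with bilinear interpolation: each output pixel is mapped back through H^-1 and sampled from the source image. The function name warpImage matches the one mentioned in Part B, but the body below is an illustrative numpy sketch, not the actual implementation.

```python
import numpy as np

def warpImage(im, H, out_shape):
    """Inverse-warp `im` by homography H onto a canvas of shape out_shape.

    Each output pixel (x, y) is mapped back through H^-1 and sampled from
    the source with bilinear interpolation; samples that fall outside the
    source stay zero. Works for grayscale (H, W) or color (H, W, C) images.
    """
    out_h, out_w = out_shape[:2]
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    src = np.linalg.inv(H) @ np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    sx, sy = src[0] / src[2], src[1] / src[2]

    h, w = im.shape[:2]
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    fx, fy = sx - x0, sy - y0
    valid = (x0 >= 0) & (x0 < w - 1) & (y0 >= 0) & (y0 < h - 1)

    x0, y0, fx, fy = x0[valid], y0[valid], fx[valid], fy[valid]
    if im.ndim == 3:  # broadcast the fractional weights over channels
        fx, fy = fx[:, None], fy[:, None]
    # Bilinear interpolation from the four surrounding source pixels.
    vals = (im[y0, x0] * (1 - fx) * (1 - fy)
            + im[y0, x0 + 1] * fx * (1 - fy)
            + im[y0 + 1, x0] * (1 - fx) * fy
            + im[y0 + 1, x0 + 1] * fx * fy)

    out = np.zeros((out_h, out_w) + im.shape[2:])
    out.reshape((-1,) + im.shape[2:])[valid] = vals
    return out
```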

4.   From Image to Mosaic

To form a realistic mosaic from these two perspectives, I kept the center perspective unchanged and blended it with the warped version of the right perspective as shown above. This yields the following raw output — not bad!

[Figure: raw mosaic of the center and right perspectives]
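The write-up does not pin down the exact blending scheme, so here is one simple possibility: average the two canvases where they overlap and take whichever image is present elsewhere. The function name blend and the "coverage = nonzero pixels" assumption are mine; proper feathering would weight the overlap smoothly instead.

```python
import numpy as np

def blend(a, b):
    """Blend two equal-size canvases: average where both have coverage,
    take whichever image is present elsewhere. (A crude stand-in for
    real feathered blending; coverage is inferred from nonzero pixels.)"""
    ma = (a.sum(-1) if a.ndim == 3 else a) > 0
    mb = (b.sum(-1) if b.ndim == 3 else b) > 0
    # Weight 0.5 each in the overlap, 1/0 outside it.
    wa = np.where(ma & mb, 0.5, ma.astype(float))
    wb = np.where(ma & mb, 0.5, mb.astype(float))
    if a.ndim == 3:  # broadcast weights over color channels
        wa, wb = wa[..., None], wb[..., None]
    return wa * a + wb * b
```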

Cropping yields the following final output, with minimal artifacts at first glance. Upon closer inspection, the misaligned windows and lamppost in the background — along the seam of the two perspectives — reveal themselves to be perhaps the most glaring errors present.

[Figure: cropped final mosaic]

What you've learned (so far): This has been an incredibly interesting project with many practical lessons. Perhaps the coolest of all has been developing a general understanding of homographies and mosaicing, both of which lie at the core of the panorama feature used on smartphone cameras every day — not to mention photo spheres, Google Street View, and other incredible innovations!

Part B: Feature Matching for Autostitching

1.   Harris Interest Point Detection

After condensing the code from Part A into dedicated computeH and warpImage functions, the provided starter code was used to obtain Harris corners for the center and right perspectives from above. These detected interest points (5830 and 6698 for the two perspectives, respectively) are overlaid below.

[Figure: Harris corners overlaid on the center and right perspectives]
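The provided starter code supplies the detector itself, but the Harris response is easy to sketch in numpy: smooth the structure-tensor entries with a small Gaussian and combine them as det(M) - k * trace(M)^2. The helper names below are my own, not the starter code's.

```python
import numpy as np

def harris_response(im, sigma=1.0, k=0.05):
    """Harris corner response for a grayscale float image (numpy-only
    sketch; the course starter code uses a library detector instead)."""
    Iy, Ix = np.gradient(im)  # central-difference image gradients
    g = _gauss_kernel(sigma)
    # Gaussian-smoothed entries of the structure tensor M.
    Sxx = _sep_filter(Ix * Ix, g)
    Syy = _sep_filter(Iy * Iy, g)
    Sxy = _sep_filter(Ix * Iy, g)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

def _gauss_kernel(sigma, radius=3):
    x = np.arange(-radius, radius + 1)
    kern = np.exp(-x ** 2 / (2 * sigma ** 2))
    return kern / kern.sum()

def _sep_filter(im, kern):
    """Separable same-size convolution along rows, then columns."""
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode='same'), 1, im)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode='same'), 0, out)
```

Corners score high (both eigenvalues of M large), edges score negative (one eigenvalue dominates), and flat regions score near zero.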

2.   Adaptive Non-Maximal Suppression

I then implemented ANMS to select 250 interest points from each image, shown below. These points, selected from the overall Harris output, have a noticeably more uniform spatial distribution across the images in comparison.

[Figure: 250 ANMS-selected interest points per image]
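The ANMS scheme from the MOPS paper can be sketched as follows: for each corner, find the distance to the nearest corner that is sufficiently stronger (robustified by a factor c = 0.9), then keep the points with the largest such suppression radii. The function and argument names below are mine.

```python
import numpy as np

def anms(coords, strengths, n_keep=250, c_robust=0.9):
    """Adaptive non-maximal suppression (sketch of the MOPS-paper scheme).

    coords: (N, 2) corner coordinates; strengths: (N,) Harris responses.
    Keeps the n_keep points with the largest suppression radii, which
    spreads the selection more uniformly than taking the top responses.
    """
    coords = np.asarray(coords, float)
    strengths = np.asarray(strengths, float)
    # Pairwise squared distances between all corners.
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    # j "dominates" i when j's (robustified) strength exceeds i's.
    dominates = strengths[None, :] * c_robust > strengths[:, None]
    d2 = np.where(dominates, d2, np.inf)
    radii = d2.min(axis=1)  # inf for the globally strongest point
    keep = np.argsort(-radii)[:n_keep]
    return coords[keep]
```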

3.   Feature Description & Matching

Here, I first extracted an axis-aligned 8x8 patch from each 40x40 window around an interest point, a simplified (single-scale, unoriented) version of the MOPS descriptor. Treating these patches as features, I then applied a ratio threshold of 0.7 to match pairs with similar appearances, yielding the smaller set of matched points shown for each image below.

[Figure: matched feature points for each image]
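Both steps can be sketched as below, assuming the 8x8 descriptor is subsampled from the 40x40 window every 5 pixels and bias/gain-normalized, and that the 0.7 threshold is a nearest- to second-nearest-neighbor distance ratio (Lowe's ratio test). The names describe and match are mine.

```python
import numpy as np

def describe(im, points, spacing=5, size=8):
    """8x8 descriptors sampled every `spacing` pixels from a 40x40 window
    around each (row, col) point, bias/gain-normalized (axis-aligned
    sketch of a simplified MOPS descriptor)."""
    half = spacing * size // 2
    descs = []
    for y, x in np.asarray(points, int):
        patch = im[y - half:y + half:spacing, x - half:x + half:spacing]
        patch = patch - patch.mean()          # remove bias
        descs.append((patch / (patch.std() + 1e-8)).ravel())  # remove gain
    return np.array(descs)

def match(d1, d2, ratio=0.7):
    """Ratio-test matching: accept the nearest neighbor only when it is
    clearly closer than the second-nearest (1-NN/2-NN distance < ratio)."""
    dists = np.sqrt(((d1[:, None, :] - d2[None, :, :]) ** 2).sum(-1))
    pairs = []
    for i, row in enumerate(dists):
        j1, j2 = np.argsort(row)[:2]
        if row[j1] < ratio * row[j2]:
            pairs.append((i, j1))
    return pairs
```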

4.   RANSAC Homography Estimation

Finally, I applied RANSAC to select the final points for alignment, computing a robust estimate of the homography between the two images. After applying geometric constraints and rejecting the remaining outliers, RANSAC run with an epsilon value of 4.0 yielded the four points shown below for each image.

[Figure: final RANSAC-selected points for each image]
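Four-point RANSAC can be sketched as: sample four correspondences, fit an exact homography, count the points that reproject within epsilon pixels, and refit on the largest inlier set. The numpy sketch below repeats the DLT fit so the snippet stands alone; names and defaults are illustrative.

```python
import numpy as np

def ransac_homography(pts1, pts2, n_iter=1000, eps=4.0, seed=0):
    """4-point RANSAC for a robust homography estimate (sketch)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts1), bool)
    for _ in range(n_iter):
        idx = rng.choice(len(pts1), size=4, replace=False)
        H = _dlt(pts1[idx], pts2[idx])       # exact fit to the sample
        err = np.linalg.norm(_apply(H, pts1) - pts2, axis=1)
        inliers = err < eps                  # reprojection within eps px
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit by least squares on the best inlier set.
    return _dlt(pts1[best_inliers], pts2[best_inliers]), best_inliers

def _dlt(p, q):
    """Least-squares homography p -> q via the direct linear transform."""
    A = []
    for (x, y), (u, v) in zip(p, q):
        A += [[x, y, 1, 0, 0, 0, -u * x, -u * y, -u],
              [0, 0, 0, x, y, 1, -v * x, -v * y, -v]]
    H = np.linalg.svd(np.asarray(A, float))[2][-1].reshape(3, 3)
    return H / H[2, 2]

def _apply(H, pts):
    """Apply homography H to (N, 2) points, with perspective divide."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:]
```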

5.   Image to Mosaic, Round 2

Blending the two perspectives with these interest points yields the raw mosaic below, obtained using wrapper functions around the approach employed in Part A.

[Figure: raw auto-stitched mosaic of the center and right perspectives]

For reference, manual alignment yielded the following raw mosaic for the same input perspectives in Part A.

[Figure: raw manually aligned mosaic from Part A, for reference]

Cropping this output yields the following image, which shows few visible signs of stitching. The result is a noticeable improvement over the manually aligned mosaic from Part A, with the only blemish being a slight misalignment along one of the sidelines of the basketball court.

[Figure: cropped auto-stitched mosaic]

The algorithm also performed well when blending the center with the left perspective in place of the right, with the raw and cropped outputs shown below. Again, only a single slight blemish exists along one of the shadows present on the basketball court.

[Figure: raw and cropped mosaics of the center and left perspectives]

For the last of my three mosaics, I blended the two outputs shown above to obtain a full panorama of the scene, combining the left, center, and right perspectives. This also yielded strong results, with the only blemishes being those already present in the inputs from the earlier mosaicing operations.

[Figure: full left-center-right panorama]

What you've learned: I really enjoyed the practical nature of this part of the project, as it involved implementing findings from a publication. This is a skill that can be applied in countless settings across industry and academia, and it was great to practice it by bringing image autostitching to reality.