CS 194-26 Project 6 Part A: Image Warping and Mosaicing

Emily Chang cs194-26-aeu



Overview

In this project, we create image mosaics similar to panoramas: we take multiple overlapping photos of a scene, warp them into a common frame, and stitch them together into a single mosaic.



Shooting the Pictures

All photos were taken with an iPhone 7 during a trip to Lake Tahoe.



Recover Homographies

To recover homographies, we first compute the transformation matrix H. In our case, we warped all our images to image 2, so we had a unique matrix for each image pair (image1, image2) and (image3, image2). We did this using the projective transformation p' = Hp, where H is a 3x3 matrix with 8 degrees of freedom (we fix the bottom-right entry to 1):

    [x']   [a b c] [x]
    [y'] = [d e f] [y]
    [w']   [g h 1] [1]

with the warped image coordinates given by (x'/w', y'/w'). Using at least four point correspondences that appear in both images as our p and p' values, we set up a linear system in the eight unknowns and solve it with least squares.
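The least-squares setup can be sketched as follows (a minimal sketch, assuming (N, 2) NumPy arrays of corresponding (x, y) points; `compute_homography` is an illustrative name, not necessarily the writeup's actual code):

```python
import numpy as np

def compute_homography(pts1, pts2):
    """Solve p' = Hp in the least-squares sense.

    pts1, pts2: (N, 2) arrays of corresponding (x, y) points, N >= 4.
    Returns the 3x3 homography H mapping pts1 -> pts2, with H[2, 2] = 1.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts1, pts2):
        # Each correspondence contributes two linear equations in the
        # eight unknowns (a..h), after multiplying through by w'.
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)
```

With exactly four non-degenerate correspondences the system is solved exactly; with more, least squares averages out small clicking errors.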



Warp the Images

To warp the images, we took our original image 1 and image 3 and warped each one individually to align with image 2 using the H matrix we computed previously. Here are the results of warping image 1 and image 3, shown from different perspectives.

image 1 warp from a side view

image 1 warp from a bottom view

image 3 warp from a side view
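The inverse-warping step behind these results can be sketched like this (a minimal sketch with nearest-neighbor sampling; it assumes H already maps source pixels into the output canvas, and `warp_image` is a hypothetical helper name):

```python
import numpy as np

def warp_image(im, H, out_shape):
    """Inverse-warp `im` by homography H into an out_shape canvas.

    For every output pixel we map back through H^-1 into the source
    image and sample the nearest pixel; out-of-bounds pixels stay 0.
    """
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    # Map each output pixel back into the source image.
    src = np.linalg.inv(H) @ coords
    src_x = np.round(src[0] / src[2]).astype(int)
    src_y = np.round(src[1] / src[2]).astype(int)
    out = np.zeros((h_out, w_out) + im.shape[2:], dtype=im.dtype)
    valid = ((src_x >= 0) & (src_x < im.shape[1]) &
             (src_y >= 0) & (src_y < im.shape[0]))
    out[ys.ravel()[valid], xs.ravel()[valid]] = im[src_y[valid], src_x[valid]]
    return out
```

Inverse warping (rather than pushing source pixels forward) guarantees every output pixel gets a value, avoiding holes.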



Blend the Images into a Mosaic

With the warped images, we could combine them into a mosaic by overlaying them on top of each other and blending. Here are some images from Lake Tahoe.

image 1

image 2

image 3

mosaic

image 1

image 2

mosaic

image 1

image 2

mosaic

The first mosaic shows a bit of ghosting, as it was tricky to find precise pixel alignments.
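The overlay-and-blend step can be sketched as below (a simple average in the overlap region; the function name and the flat weighting are illustrative assumptions, since distance-based alpha masks would reduce seams further):

```python
import numpy as np

def feather_blend(warped1, warped2):
    """Blend two warped images on the same canvas.

    Nonzero pixels mark where each warped image has content; in the
    overlap we average the two, elsewhere we keep whichever image is
    present. A minimal sketch; real feathering uses smooth alpha masks.
    """
    mask1 = (warped1 > 0).astype(float)
    mask2 = (warped2 > 0).astype(float)
    total = mask1 + mask2
    total[total == 0] = 1  # avoid divide-by-zero outside both images
    return (warped1 * mask1 + warped2 * mask2) / total
```

Flat averaging is what produces visible ghosting when the alignment is slightly off; a Laplacian-pyramid or distance-transform blend hides the seam better.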

I learned in this project how to apply homographies to make an image mosaic, which was really interesting and made me really appreciate how panoramas are created.





Part B: Feature Matching for Autostitching



Overview

In this part, our goal is to automatically stitch images together using ideas from the paper “Multi-Image Matching using Multi-Scale Oriented Patches” by Brown et al.



Harris Interest Point Detector

We used the Harris interest point detector to detect corners. To limit the number of points from each image, we applied adaptive non-maximal suppression (ANMS) with a c_robust threshold of 0.9: for each point x_i we compute the suppression radius

    r_i = min_j || x_i - x_j ||,  over all j such that f(x_i) < c_robust * f(x_j),

where f is the Harris corner strength. We then selected the points with the largest radii, which keeps strong corners that are also well spread across the image. Here are the results of applying ANMS.
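The ANMS selection can be sketched as follows (a sketch assuming `coords` is an (N, 2) array of corner locations and `strengths` the matching Harris responses; `anms` is an illustrative name):

```python
import numpy as np

def anms(coords, strengths, n_keep=500, c_robust=0.9):
    """Adaptive non-maximal suppression over Harris corners.

    For each point i, find the distance to the nearest point j that
    dominates it, i.e. strengths[i] < c_robust * strengths[j]; keep
    the n_keep points with the largest such radii.
    """
    n = len(coords)
    radii = np.full(n, np.inf)  # the global maximum keeps radius = inf
    for i in range(n):
        stronger = strengths[i] < c_robust * strengths
        if stronger.any():
            d = np.linalg.norm(coords[stronger] - coords[i], axis=1)
            radii[i] = d.min()
    keep = np.argsort(-radii)[:n_keep]
    return coords[keep]
```

The c_robust factor of 0.9 means a neighbor only suppresses a point if it is significantly stronger, which makes the surviving set more evenly distributed.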



Feature Descriptor

Using the points we've extracted in the previous part, we create 41x41 patches around each point, which we downsample to 8x8 and normalize (zero mean, unit variance) to use as our feature descriptors.
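The descriptor extraction can be sketched as below (a sketch assuming a grayscale image and points far enough from the border; downsampling here is plain strided subsampling for brevity, where a real implementation would blur first to avoid aliasing):

```python
import numpy as np

def describe(im, points, patch=41, out=8):
    """Extract normalized 8x8 descriptors from 41x41 windows.

    Each window is subsampled with stride patch // out (= 5), then
    bias/gain normalized so matching is robust to brightness changes.
    """
    half = patch // 2
    feats = []
    for (y, x) in points:
        window = im[y - half:y + half + 1, x - half:x + half + 1]
        step = patch // out
        small = window[::step, ::step][:out, :out]
        small = (small - small.mean()) / (small.std() + 1e-8)
        feats.append(small.ravel())
    return np.array(feats)
```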



Feature Matching

We then match points from one image to another by comparing the sum of squared differences (SSD) between each pair of feature descriptors. Following Lowe's ratio test, we consider a "best match" to be a pair whose SSD distance is less than 0.6 times the distance to the second-nearest neighbor. Here is an illustration of the points we've extracted:
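The ratio test can be sketched as follows (a sketch over (N, D) descriptor arrays; `match_features` is an illustrative name):

```python
import numpy as np

def match_features(f1, f2, ratio=0.6):
    """Match descriptors with Lowe's ratio test on SSD distances.

    A feature in f1 is matched to its nearest neighbor in f2 only if
    the best SSD is < ratio * the second-best SSD; ambiguous features
    (two similar candidates) are discarded.
    """
    # Pairwise SSD between every feature in f1 and every feature in f2.
    d = ((f1[:, None, :] - f2[None, :, :]) ** 2).sum(axis=2)
    matches = []
    for i in range(len(f1)):
        order = np.argsort(d[i])
        best, second = order[0], order[1]
        if d[i, best] < ratio * d[i, second]:
            matches.append((i, int(best)))
    return matches
```

The insight behind the ratio test is that a correct match is much closer than any alternative, whereas a spurious one tends to have several near-ties.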



4-point RANSAC

To ensure that we use correct pairings in our homography, we applied 4-point RANSAC: we repeatedly sample 4 of the matches extracted previously, compute the homography they imply, and count how many of the remaining matches it maps accurately (the inliers). We keep the homography with the largest inlier set and recompute it from all of its inliers. Here are our results:

image 1

image 2

manual

auto

image 1

image 2

manual

auto

image 1

image 2

mosaic

Overall, it seems to have done a better job than when I manually selected my points!
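The RANSAC loop above can be sketched as follows (a sketch; `fit_h` is the least-squares homography solve from Part A, inlined here so the snippet stands alone, and the threshold and iteration count are illustrative):

```python
import numpy as np

def fit_h(p, q):
    """Least-squares homography mapping points p -> q ((N, 2) arrays)."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(p, q):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def ransac_homography(pts1, pts2, n_iters=1000, eps=2.0, seed=0):
    """4-point RANSAC: sample 4 matches, fit H, count inliers, repeat.

    An inlier is a match whose reprojection error is under eps pixels;
    the final H is refit on the largest inlier set found.
    """
    rng = np.random.default_rng(seed)
    best = np.zeros(len(pts1), bool)
    for _ in range(n_iters):
        idx = rng.choice(len(pts1), 4, replace=False)
        H = fit_h(pts1[idx], pts2[idx])
        proj = H @ np.column_stack([pts1, np.ones(len(pts1))]).T
        proj = (proj[:2] / proj[2]).T
        inliers = np.linalg.norm(proj - pts2, axis=1) < eps
        if inliers.sum() > best.sum():
            best = inliers
    return fit_h(pts1[best], pts2[best]), best
```

Because a single bad match can ruin a least-squares fit, voting with many random 4-point samples is what lets the automatic pipeline tolerate the wrong pairings that survive the ratio test.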

Reflections

This project was really interesting! Computing mosaics turned out to be a much more complex process than I expected. I thought the most interesting part was the idea of feature matching to select the "best matches" - it's a really cool idea that worked well!