Justin D. Norman Project 4 Part 1

The goal of this project is to demonstrate the foundational principles of image warping through the applied technique of image mosaicing. Creating a mosaic from two or more images involves several steps, which are walked through below.

To test out my pipeline I ran to the top of Tank Hill in San Francisco:

1. Shoot and Digitize the Pictures

I took (many) pictures, but I like these two because they provided a useful artifact, the bench in the lower right, that I could use later to assign keypoints for correspondences.

tank1 tank2

2. Recover homographies

The next step was to select the keypoints (and store them for the many future iterations of the image pipeline). I used some of my learning from previous projects to write a ginput() UI, the results of which are below:

tank1 tank2

From here, I was ready to recover the homographies, which involves the transformation p’ = Hp, where H is a 3x3 matrix with 8 degrees of freedom.

I implemented this by creating a function computeH(im1_pts, im2_pts) which sets up a linear system of 2n equations, two per point correspondence (i.e. a matrix equation of the form Ah = b, where h is a vector holding the 8 unknown entries of H).
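This setup can be sketched as follows, assuming the keypoints arrive as (n, 2) NumPy arrays of (x, y) coordinates (my actual storage format may differ slightly):

```python
import numpy as np

def computeH(im1_pts, im2_pts):
    """Estimate homography H such that im2_pts ~ H @ im1_pts.

    Both inputs are (n, 2) arrays of (x, y) points with n >= 4.
    Builds Ah = b for the 8 unknown entries of H (h33 fixed to 1).
    """
    n = im1_pts.shape[0]
    A = np.zeros((2 * n, 8))
    b = np.zeros(2 * n)
    for i, ((x, y), (xp, yp)) in enumerate(zip(im1_pts, im2_pts)):
        # two equations per correspondence (x and y of the mapped point)
        A[2 * i]     = [x, y, 1, 0, 0, 0, -x * xp, -y * xp]
        A[2 * i + 1] = [0, 0, 0, x, y, 1, -x * yp, -y * yp]
        b[2 * i], b[2 * i + 1] = xp, yp
    # least-squares solve handles the overdetermined case (n > 4)
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.append(h, 1.0).reshape(3, 3)
```

With more than four correspondences the system is overdetermined, so a least-squares solve is the natural choice.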

3. Warp the images

From here, I implemented a function warper(im, width, height, H) which uses the parameters of the homography to warp the images into their new form. I used the input parameters height and width to set the size of the new image. The result is below for the first image:

warp1

...and for the second

warp2
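The warping step can be sketched as an inverse warp: for every pixel of the output canvas, map back through H⁻¹ and sample the source image. This sketch uses nearest-neighbour sampling for brevity (a fuller implementation would use bilinear interpolation):

```python
import numpy as np

def warper(im, width, height, H):
    """Inverse-warp im onto a width x height canvas using homography H.

    Each output pixel (x, y) is mapped back through H^-1 and filled by
    nearest-neighbour lookup; out-of-bounds pixels stay 0.
    """
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:height, 0:width]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = Hinv @ coords
    sx = np.round(src[0] / src[2]).astype(int)   # back to inhomogeneous x
    sy = np.round(src[1] / src[2]).astype(int)   # ...and y
    valid = (sx >= 0) & (sx < im.shape[1]) & (sy >= 0) & (sy < im.shape[0])
    out = np.zeros((height * width,) + im.shape[2:], dtype=im.dtype)
    out[valid] = im[sy[valid], sx[valid]]
    return out.reshape((height, width) + im.shape[2:])
```

Working backwards from the output avoids the holes that a forward warp would leave in the destination image.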

4. Blend images into a mosaic

Now that I have the two individual warped images, I chose to blend the two together (rather than simply adding them up) to minimize the edge artifacts as much as possible. I used the weighted alpha/beta blend technique, which definitely did not produce as ideal a result as I was expecting. There's some ghosting for sure and lots of edge artifacts.

warp1

Here is another example from the other side of the hill

tank1 tank2

warp1_l

warp2_l

warp1

Overall I learned:

Justin D. Norman Project 4 Part 2

The overall goal of the 2nd part of the project is to create a system for automatically stitching images into a mosaic. A secondary goal is to learn how to read and implement a research paper. The steps involved in this process are described in the sections below.

Because I struggled so much with Part 1, I needed to do a fair amount of redesign of the image pipeline. In particular, I needed to re-implement the warping function and also deal with the size of the resulting image, which I handled by computing a bounding box from the four corners of the image and the known homography. I also needed to update the process of blending and stitching together the source and target images.
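The bounding-box computation can be sketched as: push the four corners of the source image through H and take the min/max of the projected coordinates (warped_bbox is an illustrative name, not necessarily what my pipeline calls it):

```python
import numpy as np

def warped_bbox(h, w, H):
    """Project the four image corners through H and return the
    axis-aligned bounding box (xmin, ymin, xmax, ymax) of the result."""
    corners = np.array([[0, 0, 1], [w, 0, 1], [0, h, 1], [w, h, 1]], float).T
    proj = H @ corners
    proj = proj[:2] / proj[2]          # divide out the homogeneous coordinate
    return proj[0].min(), proj[1].min(), proj[0].max(), proj[1].max()
```

The output canvas is then sized from this box, with a translation folded into the warp so negative coordinates land on the canvas.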

2.1 Detecting corner features in an image

2.1.1 Start with Harris Interest Point Detector

Detecting corners was a simple enough task with the provided get_harris_corners function. The results of those detections, with a minimum distance set by parameter, are below.

tank1_harris tank2_harris
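get_harris_corners was course-provided starter code, but an equivalent detection can be sketched with scikit-image, whose corner_peaks exposes the same minimum-distance parameter:

```python
import numpy as np
from skimage.feature import corner_harris, corner_peaks

# Harris response on a synthetic test image: a bright square on a dark
# background should fire at (and only near) its four corners.
im = np.zeros((60, 60))
im[20:40, 20:40] = 1.0
corners = corner_peaks(corner_harris(im),
                       min_distance=5,      # suppress nearby duplicates
                       threshold_rel=0.1)   # ignore weak responses
```

corner_peaks returns (row, col) pairs, so they need swapping if the rest of the pipeline works in (x, y).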

2.1.2 Implement Adaptive Non-Maximal Supression

Even with a higher minimum distance, it's still necessary to discard most of the points returned by the corner detector. This is the function of ANMS in this project. I chose to keep 400 interest points and used a robustness constant c of 0.9 when computing the suppression radii. The implementation was adapted from Brown et al.'s Multi-Image Matching using Multi-Scale Oriented Patches.

tank1_anms tank2_anms
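The ANMS step can be sketched as follows (names and array layouts are illustrative): for each interest point, the suppression radius is the distance to the nearest point whose c-scaled strength dominates it, and the points with the largest radii are kept:

```python
import numpy as np

def anms(coords, strengths, n_keep=400, c_robust=0.9):
    """Adaptive non-maximal suppression (Brown et al., MOPS).

    coords is (n, 2), strengths is (n,) of positive corner responses.
    Keeps the n_keep points with the largest suppression radii.
    """
    # pairwise distances between all interest points
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # point j suppresses point i when strengths[i] < c * strengths[j]
    dominated = strengths[:, None] < c_robust * strengths[None, :]
    dists = np.where(dominated, dists, np.inf)
    radii = dists.min(axis=1)   # inf for the globally strongest point
    keep = np.argsort(-radii)[:n_keep]
    return coords[keep]
```

The appeal over simply taking the 400 strongest responses is that the kept points end up spatially well distributed across the image.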

2.2 Extracting a Feature Descriptor for each feature point

2.2.1 Implement Feature Descriptor extraction

Using the Harris/ANMS corner points, we can now shift focus to extracting the feature descriptors for each of the points in order to set up the feature matching step. This is accomplished by selecting a 40x40 sample centered at each point, blurring that sample with a Gaussian, and then resizing it to an 8x8 patch now referred to as a feature descriptor.
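A sketch of this extraction, using a plain strided subsample after the Gaussian blur and the bias/gain normalization described in the MOPS paper (it assumes points sit far enough from the image border; the sigma choice is an assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_descriptor(im, y, x, patch=40, out=8):
    """40x40 window centered at (y, x) -> blur -> 8x8 descriptor.

    The descriptor is bias/gain normalized (zero mean, unit variance)
    so matching is robust to brightness and contrast changes.
    """
    half = patch // 2
    window = im[y - half:y + half, x - half:x + half].astype(float)
    blurred = gaussian_filter(window, sigma=patch / out / 2)
    step = patch // out
    desc = blurred[step // 2::step, step // 2::step]   # 8x8 subsample
    desc = desc - desc.mean()
    std = desc.std()
    return desc / std if std > 0 else desc
```

Blurring before subsampling matters: it prevents aliasing when the 40x40 window is reduced to 8x8.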

2.2.2 Implement Feature Matching

As stated before, these feature descriptors were then used to further restrict the number of points. The SSD is computed between each pair of feature descriptors, and a match is kept only when its best SSD is significantly smaller than the second-best. Those matches are displayed below.

tank3_fd tank4_fd
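The matching step can be sketched as: compute all pairwise SSDs, then accept a match only when the best SSD is well below the second-best (Lowe-style ratio test; the 0.6 threshold here is illustrative):

```python
import numpy as np

def match_features(desc1, desc2, ratio=0.6):
    """Match descriptors by SSD with a best/second-best ratio test.

    desc1, desc2 are arrays of flattened (or 8x8) descriptors; returns
    a list of (index_in_desc1, index_in_desc2) pairs.
    """
    d1 = desc1.reshape(len(desc1), -1)
    d2 = desc2.reshape(len(desc2), -1)
    # pairwise SSD via the expansion (a - b)^2 = a^2 - 2ab + b^2
    ssd = (d1**2).sum(1)[:, None] - 2 * d1 @ d2.T + (d2**2).sum(1)[None, :]
    matches = []
    for i, row in enumerate(ssd):
        order = np.argsort(row)
        best, second = row[order[0]], row[order[1]]
        if best < ratio * second:       # unambiguous nearest neighbour only
            matches.append((i, int(order[0])))
    return matches
```

The ratio test discards points whose best match is barely better than the runner-up, which is exactly the ambiguous case that pollutes the homography fit.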

2.2.3 Use a robust method (RANSAC) to compute a homography

Let's keep reducing. RANSAC is a simple algorithm that works quite well. The idea is to randomly select 4 point correspondences, compute a homography from them, and then check the degree to which the other points form a "consensus" with that homography. On each pass through the algorithm, if the consensus set is larger than the previous best, it becomes the new best. I used 1000 iterations in this implementation, and a very consistent set of points is selected. The results are below.

tank3_rans tank4_rans
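A sketch of the RANSAC loop (the 2-pixel inlier threshold is an assumption, and the inline least-squares fit stands in for the computeH described in Part 1):

```python
import numpy as np

def fit_homography(p1, p2):
    """Least-squares DLT fit of H (h33 = 1) mapping p1 -> p2."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(p1, p2):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def ransac_homography(p1, p2, n_iters=1000, eps=2.0, rng=None):
    """Keep the homography with the largest consensus set.

    p1, p2 are (n, 2) arrays of matched (x, y) points; inliers are
    points whose reprojection error through H is under eps pixels.
    """
    rng = np.random.default_rng(rng)
    p1h = np.hstack([p1, np.ones((len(p1), 1))])
    best_inliers = np.zeros(len(p1), bool)
    for _ in range(n_iters):
        idx = rng.choice(len(p1), 4, replace=False)   # minimal sample
        H = fit_homography(p1[idx], p2[idx])
        proj = (H @ p1h.T).T
        proj = proj[:, :2] / proj[:, 2:3]
        inliers = np.linalg.norm(proj - p2, axis=1) < eps
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the full consensus set for the final answer
    return fit_homography(p1[best_inliers], p2[best_inliers]), best_inliers
```

Refitting on the whole consensus set at the end, rather than returning the 4-point fit, is what makes the final homography stable across runs.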

2.3 Proceed as in the first part to produce a mosaic

The final step was to stitch the images together and blend them into a mosaic. Here are the results from 3 image pairs taken all the way through the pipeline. The first is the two images I took on Tank Hill. The second and third pairs were accidental inputs to a panorama, taken while I was on a motorcycle trip in Portugal just prior to the COVID-19 pandemic.

tank34 port12

port34