Project 5 CS 194-26: Image Warping and Mosaicing

By Diego Uribe

Project Description

In this project I worked on creating image mosaics out of several pairs of pictures I took. For Part A, I manually selected correspondence points between two images in their overlapping region. Using these points I computed a homography from image 1 to image 2, used this homography to warp image 1, and then blended the warped image 1 into image 2. In Part B, I repeated this process; however, instead of manually selecting the correspondence points, I automated that step by following the MOPS paper from Microsoft Research distributed by the professor.

Part A: Manual Image Warping and Mosaicing

Part 1: Shoot the Pictures

In this part I took two pictures of a house from the same point of view but with different viewing directions. You can see the pictures below. I also computed the point correspondences between the two images, selecting 38 pairs of corresponding points. Other pictures I took are included below as well.

Pictures

Building 1

Building 2

Buildings Point Correspondences

Here are some other pictures I took.

Indoor (Stairs) 1

Indoor (Stairs) 2

Indoor (Painting) 1

Indoor (Painting) 2

Part 2: Recover Homographies

For this part I do not have any images to show. Please look at the code for a working implementation that computes the homography between the images above. To compute H, I set up the linear system of equations (Ah = 0) and solved it by computing the SVD of A. The vector h corresponds to the right singular vector of A associated with the smallest singular value (the last column of V). The sections below will show that my implementation worked, since I will be warping images!
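Here is a minimal sketch of that setup (the function and variable names are illustrative, not necessarily the ones in my code; pts1 and pts2 are N x 2 arrays of (x, y) points):

import numpy as np

def compute_homography(pts1, pts2):
    # Build the DLT system Ah = 0 from point pairs (x, y) -> (xp, yp)
    A = []
    for (x, y), (xp, yp) in zip(pts1, pts2):
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    A = np.array(A)
    # h is the right singular vector of A with the smallest singular value,
    # i.e. the last row of Vt (last column of V)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1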

Part 3: Warp the Images

In this section I implemented the warpImage function. I implemented it using inverse warping, with RectBivariateSpline as the interpolation function. Below I compare my warp result for building 1 with the warp result from skimage.transform.warp. As you can see, both results are the same!
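A rough sketch of the inverse-warping idea, assuming H maps image 1 coordinates to output canvas coordinates and out_shape is the size of that canvas (names and structure are illustrative, not my exact code):

import numpy as np
from scipy.interpolate import RectBivariateSpline

def warp_image(im, H, out_shape):
    # For every output pixel, apply H^-1 to find where it came from in `im`,
    # then sample that location with a spline interpolant (one per channel).
    H_inv = np.linalg.inv(H)
    out_h, out_w = out_shape
    xs, ys = np.meshgrid(np.arange(out_w), np.arange(out_h))
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = H_inv @ coords
    src_x = src[0] / src[2]
    src_y = src[1] / src[2]

    # Only sample locations that actually fall inside the source image
    valid = (src_x >= 0) & (src_x <= im.shape[1] - 1) & \
            (src_y >= 0) & (src_y <= im.shape[0] - 1)

    out = np.zeros((out_h, out_w, im.shape[2]))
    rows = np.arange(im.shape[0])
    cols = np.arange(im.shape[1])
    for c in range(im.shape[2]):
        spline = RectBivariateSpline(rows, cols, im[:, :, c])
        channel = out[:, :, c].ravel()
        channel[valid] = spline.ev(src_y[valid], src_x[valid])
        out[:, :, c] = channel.reshape(out_h, out_w)
    return out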

Warp of Building 1 into the computed homography between Building 1 and 2 (my implementation)

Warp of Building 1 into the computed homography between Building 1 and 2 (skimage implementation)

Part 4: Image Rectification

In this part, I rectified two images. The first image I rectified was La Flagellazione di Cristo and the second one was the St. Lucy Altarpiece. To rectify an image, I first selected 4 correspondence points in the image to be rectified. These 4 points are the corners of a quadrilateral outlining the region of the image to be rectified. Then, I defined a second set of points forming a rectangle. To rectify the image, I simply warp the original image so that the quadrilateral maps onto the rectangle. Please see the results below.
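As a sketch, rectification is just the 4-point version of the same machinery. The filename and the clicked coordinates below are placeholders, and compute_homography / warp_image refer to the sketches in the previous sections:

import numpy as np
import skimage.io as skio

painting = skio.imread("flagellazione.jpg") / 255.0  # hypothetical filename

# Hand-picked corners of the planar region (top-left, top-right, bottom-right,
# bottom-left); these numbers are placeholders, not my actual clicks
plane_pts = np.array([[412, 188], [705, 201], [698, 540], [405, 552]], float)

# Axis-aligned rectangle the plane should map to (here 300 x 350 pixels)
rect_pts = np.array([[0, 0], [300, 0], [300, 350], [0, 350]], float)

H = compute_homography(plane_pts, rect_pts)
rectified = warp_image(painting, H, out_shape=(350, 300))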

La Flagellazione di Cristo

Original La Flagellazione di Cristo

La Flagellazione Rectified

St. Lucy Altarpiece

St. Lucy Altarpiece

St. Lucy Altarpiece Rectified

Part 5: Blend the images into a mosaic

Below you can see the three mosaics! The warping worked well; however, I had trouble implementing the blending. Because large regions of the warped image are all black (outside its footprint), approaches like averaging or linear blending gave me a lot of trouble, so in the overlapping region I simply took the maximum of the two pixel values at each pixel. You will also notice that one section of each mosaic is a little dimmer than the other; this is because when I saved the warped image to disk (to then compute the mosaic) I had to normalize the pixel values, so there is a slight difference in tone.
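A sketch of the max-blend I ended up using, under the assumption that the warped image already lives on the full mosaic canvas and image 2 is pasted at a known (row, col) offset (both are assumptions made for this example):

import numpy as np

def blend_max(warped1, im2, offset):
    # `warped1` is the warped image on the full mosaic canvas (black outside
    # its footprint); `im2` is pasted at `offset` on a canvas of the same size.
    # Taking the per-pixel max keeps whichever image has content at each pixel,
    # which avoids the dark seams that averaging against the black background causes.
    canvas2 = np.zeros_like(warped1)
    r, c = offset
    canvas2[r:r + im2.shape[0], c:c + im2.shape[1]] = im2
    return np.maximum(warped1, canvas2)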

Mosaic 1: Buildings

Building 1

Building 2

Building 1 Warped

Buildings Mosaic!

Mosaic 2: Indoor Stairs

Indoor (Stairs) 1

Indoor (Stairs) 2

Indoor (Stairs) 1 Warped

Indoor Stairs Mosaic!

Mosaic 3: Indoor Painting

Indoor (Painting) 1

Indoor (Painting) 2

Indoor (Painting) 1 Warped

Indoor Painting Mosaic!

Part B: Automated Image Warping and Mosaicing (MOPS Paper)

This part builds on Part A of the project. As you will see below, in this section I developed an efficient processing pipeline that takes two images, finds corners in each of them, matches the corresponding corners between the two images, and then feeds these corresponding points into the Part A code to compute a mosaic of the two images!

Part 1: Detecting corner features in an image

In this part I used the provided code to find Harris corners. I changed the provided code to use corner_peaks instead of peak_local_max. I did not have time to implement ANMS; however, the min_distance argument to corner_peaks let me approximate it and narrow down the list of corners extracted from the images. min_distance means that no two detected corners can be closer than min_distance pixels to each other. This reduces the number of corners while keeping them reasonably well spread across the image.
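A sketch of what this corner detection step can look like (the function name and defaults are my own, for illustration):

import numpy as np
from skimage.color import rgb2gray
from skimage.feature import corner_harris, corner_peaks

def get_harris_corners(im, min_distance=10):
    # Returns (row, col) Harris corners that are at least min_distance pixels apart
    gray = rgb2gray(im)
    response = corner_harris(gray)  # Harris corner strength map
    corners = corner_peaks(response, min_distance=min_distance)
    return corners, response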

I will show the corner detection algorithm for my building images!

Corners in Buildings (min_distance = 1).

Corners in Buildings (min_distance = 5).

Corners in Buildings (min_distance = 10).

Part 2: Extracting a Feature Descriptor for each feature point

In this part I took the extracted corners (min_distance = 10) and extracted a feature descriptor for each corner; I extracted approximately 500 corners per image. Each feature descriptor starts as a 40 x 40 patch of pixels with the corner at the center. Each patch was then downsampled/rescaled to 8 x 8. Finally, I normalized each patch to have a mean of 0 and a standard deviation of 1 to prevent brightness from affecting the computations in Part 3. Below you can see the first 64 extracted feature patches for the building 1 image and the first 64 extracted feature patches for the building 2 image. I first display the 8 x 8 patches in full color, then I normalize them and display them again.
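A rough single-channel sketch of the descriptor extraction (my actual patches were in color; the names and the boundary handling here are illustrative):

import numpy as np
from skimage.transform import resize

def extract_descriptors(gray, corners, patch_size=40, out_size=8):
    # Cut a patch_size x patch_size window around each corner, shrink it to
    # out_size x out_size, and normalize it to zero mean / unit std
    half = patch_size // 2
    descriptors, kept = [], []
    for r, c in corners:
        # Skip corners whose window would fall outside the image
        if r - half < 0 or c - half < 0 or r + half > gray.shape[0] or c + half > gray.shape[1]:
            continue
        patch = gray[r - half:r + half, c - half:c + half]
        small = resize(patch, (out_size, out_size), anti_aliasing=True)
        small = (small - small.mean()) / (small.std() + 1e-8)  # bias/gain normalization
        descriptors.append(small.ravel())
        kept.append((r, c))
    return np.array(descriptors), np.array(kept)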

Building 1 Corner Descriptors

Building 1 Normalized Corner Descriptors

Building 2 Corner Descriptors

Building 2 Normalized Corner Descriptors

Part 3: Feature Descriptor Matching

In this part, I matched the corners from one image with those of the other image. I followed the algorithm described in the paper. Essentially, for each descriptor in image 1 I computed the ratio of the distance to its nearest neighbor (1-NN) to the distance to its second-nearest neighbor (2-NN) in image 2, and accepted the match only if this ratio was less than 0.3. You can see the results below! I first display the initial 500 corners in each image and then show the result of running the matching algorithm.
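A sketch of the ratio-test matching, assuming desc1 and desc2 are the flattened, normalized 8 x 8 descriptors from the previous part (names are illustrative):

import numpy as np

def match_descriptors_ratio(desc1, desc2, ratio=0.3):
    # Accept a match only if the distance to the best candidate is less than
    # `ratio` times the distance to the second-best candidate (1-NN / 2-NN test)
    dists = np.sqrt(((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(axis=2))
    matches = []
    for i in range(dists.shape[0]):
        order = np.argsort(dists[i])
        best, second = order[0], order[1]
        if dists[i, best] < ratio * dists[i, second]:
            matches.append((i, best))
    return matches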

Corners in Buildings (min_distance = 10).

Matched Corners in Buildings

Part 4: RANSAC Algorithm to compute Homography

In this part I implemented the 4-point RANSAC algorithm described in lecture. This algorithm takes as input the initial set of matching corners of image 1 and image 2 and outputs the inliers for each image. Then, I used the inliers to warp image 1. Here are the results.
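A sketch of the 4-point RANSAC loop, reusing the compute_homography sketch from Part A; pts1 and pts2 are the matched (x, y) points, and the iteration count and inlier threshold are illustrative choices:

import numpy as np

def ransac_homography(pts1, pts2, n_iters=1000, eps=2.0):
    # Repeatedly fit H to a random sample of 4 matches and keep the largest
    # set of matches that H maps to within `eps` pixels
    best_inliers = np.array([], dtype=int)
    pts1_h = np.hstack([pts1, np.ones((len(pts1), 1))])  # homogeneous image-1 points
    for _ in range(n_iters):
        sample = np.random.choice(len(pts1), 4, replace=False)
        H = compute_homography(pts1[sample], pts2[sample])
        proj = (H @ pts1_h.T).T
        proj = proj[:, :2] / proj[:, 2:3]  # back to inhomogeneous coordinates
        errors = np.linalg.norm(proj - pts2, axis=1)
        inliers = np.where(errors < eps)[0]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers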

Building 1 Warped (Matching points computed using RANSAC)

Part 5: Auto generated Mosaics

In this part, I used the set of inliers output by the RANSAC algorithm to find a homography from image 1 to image 2. Then, using this homography, I warped image 1 and finally blended it with image 2 to compute a mosaic. Below are the results:
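Putting it together, the end-to-end pipeline roughly looks like the sketch below. All function names refer to the sketches in the earlier sections, and the filenames, canvas size, and offset are placeholders, not my actual values:

import numpy as np
import skimage.io as skio
from skimage.color import rgb2gray

im1 = skio.imread("building1.jpg") / 255.0  # hypothetical filenames
im2 = skio.imread("building2.jpg") / 255.0
mosaic_shape = (1200, 2000)                  # mosaic canvas size, chosen by hand as in Part A
offset = (0, 0)                              # where image 2 sits on that canvas (placeholder)

corners1, _ = get_harris_corners(im1, min_distance=10)
corners2, _ = get_harris_corners(im2, min_distance=10)
desc1, kept1 = extract_descriptors(rgb2gray(im1), corners1)
desc2, kept2 = extract_descriptors(rgb2gray(im2), corners2)

matches = match_descriptors_ratio(desc1, desc2, ratio=0.3)
# Convert the matched (row, col) corners to (x, y) points for the homography code
pts1 = np.array([kept1[i][::-1] for i, _ in matches], float)
pts2 = np.array([kept2[j][::-1] for _, j in matches], float)

inliers = ransac_homography(pts1, pts2)
H = compute_homography(pts1[inliers], pts2[inliers])
warped1 = warp_image(im1, H, out_shape=mosaic_shape)
mosaic = blend_max(warped1, im2, offset)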

Mosaic 1: Buildings

Building 1 Warped (RANSAC)

Buildings Automatic Mosaic!

Buildings Manual Mosaic!

Mosaic 2: Indoor Stairs

Indoor (Stairs) 1 Warped (RANSAC)

Indoor (Stairs) Automatic Mosaic!

Indoor (Stairs) Manual Mosaic!

Mosaic 3: Indoor Painting

Indoor (Painting) 1 Warped (RANSAC)

Indoor (Painting) Automatic Mosaic!

Indoor (Painting) Manual Mosaic!