Project 4: Image Warping and Mosaicing

CS 194-26: Intro. to Computer Vision & Computational Photography, Fa21

Doyoung Kim


Project 4A Overview

  In the previous project, we discovered how inverse warping can be used to transform an image. In this project, we use projective transformations to change the viewing angle of an image and rectify it. One application of this is image mosaicing, which stitches several photographs into a panoramic image.


Part 1. Shoot the Pictures

sticker
book


  The images above will be used as a simple example to show the change of viewing angle.







  The images above will also be used to show the change of viewing angle; they will then be stitched together through mosaicing to create a panoramic image.

Part 2. Recover Homographies

  A homography is the matrix that defines the projective transformation from one image to another; here, it is what lets us change the viewing angle. Solving the following system for the vector h gives us the entries of the homography.
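  A standard way to set up this system, assuming the bottom-right entry of H is fixed to 1 so there are eight unknowns, stacks two rows per point correspondence (x, y) -> (x', y'):

    \[
      \begin{bmatrix} x & y & 1 & 0 & 0 & 0 & -xx' & -yx' \\
                      0 & 0 & 0 & x & y & 1 & -xy' & -yy' \end{bmatrix}
      \mathbf{h}
      =
      \begin{bmatrix} x' \\ y' \end{bmatrix}
    \]

  With four or more correspondences, the stacked system is solved by least squares. Here is a minimal numpy sketch of this step (compute_H is a name of my own choosing; the rows follow the matrix above):

    import numpy as np

    def compute_H(pts_src, pts_dst):
        """Estimate a 3x3 homography from n >= 4 correspondences.
        pts_src, pts_dst: (n, 2) arrays of (x, y) points."""
        A, b = [], []
        for (x, y), (xp, yp) in zip(pts_src, pts_dst):
            A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
            A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
            b.extend([xp, yp])
        # Least-squares solution for the 8 unknowns; H[2, 2] is fixed to 1.
        h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
        return np.append(h, 1).reshape(3, 3)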



Part 3. Warp the Images

  Having obtained the homographies, we can now apply inverse warping to render an image from a changed viewing angle. The following images show some examples: I tried to show how each object in front would look if we viewed it from directly above.
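  A minimal sketch of the inverse-warping step (the function name is my own; nearest-neighbor sampling is used for brevity where bilinear interpolation would look smoother):

    import numpy as np

    def warp_image(im, H, out_shape):
        """Inverse-warp im onto an out_shape canvas: every output pixel is
        mapped back through H^-1 and sampled from the source image."""
        H_inv = np.linalg.inv(H)
        rows, cols = out_shape[:2]
        ys, xs = np.mgrid[0:rows, 0:cols]
        coords = np.stack([xs.ravel(), ys.ravel(), np.ones(rows * cols)])
        src = H_inv @ coords
        src = src[:2] / src[2]                        # dehomogenize
        sx = np.round(src[0]).astype(int)             # nearest-neighbor sample
        sy = np.round(src[1]).astype(int)
        valid = (sx >= 0) & (sx < im.shape[1]) & (sy >= 0) & (sy < im.shape[0])
        out = np.zeros(out_shape, dtype=im.dtype)
        out[ys.ravel()[valid], xs.ravel()[valid]] = im[sy[valid], sx[valid]]
        return out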

sticker
rectified

book
rectified




Part 4. Blend Images into a Mosaic

  Using the same idea as in Part 3, I can rectify images and stitch them together to create a panoramic image. We first pad the images so that no information is lost during warping, and then select common points between the images so that they can be stitched in correct alignment.
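  As an illustration, here is a minimal sketch of the final blending step, assuming both images are (H, W, 3) float arrays already warped onto the same padded canvas; the black-pixel masks and plain averaging in the overlap are simplifications of the actual blending:

    import numpy as np

    def blend_pair(warped_a, warped_b):
        """Average two (H, W, 3) float images where they overlap; copy each
        image where only it has content (black pixels = empty canvas)."""
        mask_a = (warped_a.sum(axis=-1) > 0).astype(float)
        mask_b = (warped_b.sum(axis=-1) > 0).astype(float)
        weight = mask_a + mask_b
        weight[weight == 0] = 1.0                 # avoid divide-by-zero
        out = warped_a * mask_a[..., None] + warped_b * mask_b[..., None]
        return out / weight[..., None]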


left
middle
right

rectified
left
rectified
middle
rectified
right
blended

left
middle
right

rectified
left
rectified
middle
rectified
right

blended


left
middle
right

rectified
left
rectified
middle
rectified
right

blended


Project 4B Overview

  As opposed to manually selecting the points used to compute the homography in Project 4A, we now discover a way to select points automatically using Harris corners, feature descriptors, feature matching, and RANSAC. This way, we can stitch images without any manual work.


Part 1. Detecting Corner Features in an Image

  We start by computing the corner strength used in the Harris corner detection algorithm. It represents how strong a corner response there is at a given coordinate in the image, and the corner strength is given by the following equation:
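  In the MOPS paper (Brown et al.) that this project follows, the corner strength is the harmonic mean of the eigenvalues \(\lambda_1, \lambda_2\) of the 2x2 structure tensor M of the smoothed image gradients:

    \[
      f_{HM} = \frac{\det M}{\operatorname{tr} M}
             = \frac{\lambda_1 \lambda_2}{\lambda_1 + \lambda_2}
    \]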



  The following images show all of the Harris corner points overlaid on the images.

left
right


  As the Harris algorithm returns far too many points, we first need a clever way to sift out the necessary ones, and this is where Adaptive Non-Maximal Suppression (ANMS) kicks in. The ANMS algorithm works as follows:

    1. Collect the Harris corner coordinates together with their corner strengths in a list.
    2. Starting from the point with the highest corner strength, compute for each point the distance to every other point whose corner strength satisfies the constraint below, and take the minimum of these distances as its suppression radius ri.
    3. Sort the coordinates again, by descending ri.
    4. Keep the top 500.

  The suppression radius referred to in the algorithm is given by the following equation:
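  Following the MOPS paper, the radius of point \(\mathbf{x}_i\) is its distance to the nearest point that is sufficiently stronger:

    \[
      r_i = \min_j \, \lVert \mathbf{x}_i - \mathbf{x}_j \rVert
      \quad \text{s.t.} \quad
      f(\mathbf{x}_i) < c_{\text{robust}} \, f(\mathbf{x}_j)
    \]

  where f is the corner strength and c_robust (typically 0.9) controls how much stronger a neighbor must be to suppress x_i. A minimal O(n^2) numpy sketch of the procedure (function and parameter names are my own):

    import numpy as np

    def anms(coords, strengths, n_keep=500, c_robust=0.9):
        """Adaptive Non-Maximal Suppression (brute-force sketch).
        coords: (n, 2) corner coordinates; strengths: (n,) corner strengths."""
        n = len(coords)
        radii = np.full(n, np.inf)
        for i in range(n):
            stronger = strengths[i] < c_robust * strengths  # points that suppress i
            if stronger.any():
                dists = np.linalg.norm(coords[stronger] - coords[i], axis=1)
                radii[i] = dists.min()
        keep = np.argsort(-radii)[:n_keep]   # largest radii first
        return coords[keep]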


  The following are the original images and the 500 ANMS points overlaid on them.

original left
original right
anms left
anms right


original left
original right
anms left
anms right


original left
original right
anms left
anms right


Part 2. Extracting Feature Descriptors

  After obtaining a selective set of corner points from ANMS for both images to be stitched, we need a way to figure out which corner in one image matches which corner in the other. This is where feature descriptors come in.

  For each ANMS point in both images, we take a 40 x 40 patch centered on the point, downscale it to 5 x 5, and normalize the values in the patch. We compute this descriptor for every ANMS point in both images and figure out which matches which by following the algorithm below (a code sketch follows the list):

 Consider images A and B:

   1. For each feature descriptor of A, compute the SSD against every feature descriptor of B and store the results in a list.
   2. Each feature descriptor of A now has a list of SSD errors against B. Sort this list in ascending order and take the ratio of the best two.
   3. If the ratio err(best)/err(second-best) is larger than 0.6, we do not consider this interest point.
   4. After getting the list of best matches for both A and B, keep only the pairs that are each other's top choices.
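  A minimal sketch of the descriptor extraction and matching described above (function names are my own; the stride-based downsampling stands in for a proper blur-and-resize, and points are assumed to lie at least 20 px from the image border):

    import numpy as np

    def descriptor(im_gray, x, y):
        """40x40 patch around (x, y), downscaled to 5x5, bias/gain normalized."""
        patch = im_gray[y - 20:y + 20, x - 20:x + 20]
        small = patch[::8, ::8]                     # crude 8x downsampling to 5x5
        return (small - small.mean()) / (small.std() + 1e-8)

    def match_features(desc_a, desc_b, ratio=0.6):
        """SSD matching with the ratio test, keeping only mutual best pairs."""
        best_a = {}
        for i, da in enumerate(desc_a):
            ssd = np.array([((da - db) ** 2).sum() for db in desc_b])
            order = np.argsort(ssd)
            # Ratio test: the best match must clearly beat the runner-up.
            if ssd[order[0]] / ssd[order[1]] < ratio:
                best_a[i] = order[0]
        # Symmetric check: j's best match in A must be i.
        best_b = [int(np.argmin([((da - db) ** 2).sum() for da in desc_a]))
                  for db in desc_b]
        return [(i, j) for i, j in best_a.items() if best_b[j] == i]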

  Tiling all the patches from each image onto a single plane looks like the following images:

descriptor left
descriptor right

descriptor left
descriptor right

descriptor left
descriptor right



 The threshold mentioned in step 3 of the algorithm was chosen based on the following graph:

descriptor left


  The following images are a comparison between the ANMS points and the points left after feature matching:

matching left
matching right
anms left
anms right


matching left
matching right
anms left
anms right


matching left
matching right
anms left
anms right

Part 3. RANdom SAmple Consensus (RANSAC)

  With feature descriptors and feature matching, the sifted points mostly point at the same regions in both images. However, some outliers still exist, and we need a robust way to sift these out once more. For this we use the method known as RANdom SAmple Consensus (RANSAC), which works as follows (a code sketch follows the list):

 Consider the set of matched points between images A and B:

  1. Randomly pick 4 pairs of feature matches.
  2. Compute a homography h with these points.
  3. Using this homography h, compute the SSD between np.matmul(h, p) and the corresponding point p'. If the SSD is below a threshold error, increment the inlier counter.
  4. Iterate this a fixed number of times (1000), keeping track of the homography with the maximum number of inliers, and use that homography to warp as we did in Project 4A.
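 A minimal sketch of this loop, reusing the hypothetical compute_H from Part 2 (the threshold value here is illustrative):

    import numpy as np

    def ransac_H(pts_a, pts_b, n_iters=1000, thresh=1.0):
        """Keep the homography with the most inliers over random 4-point samples.
        pts_a, pts_b: (n, 2) arrays of matched (x, y) points."""
        n = len(pts_a)
        pa_h = np.hstack([pts_a, np.ones((n, 1))])   # homogeneous source points
        best_H, best_count = None, -1
        for _ in range(n_iters):
            idx = np.random.choice(n, 4, replace=False)
            H = compute_H(pts_a[idx], pts_b[idx])    # from Part 2's sketch
            proj = (H @ pa_h.T).T
            proj = proj[:, :2] / proj[:, 2:3]        # dehomogenize
            err = ((proj - pts_b) ** 2).sum(axis=1)  # SSD per correspondence
            count = int((err < thresh).sum())
            if count > best_count:
                best_H, best_count = H, count
        return best_H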

 After sifting once more with RANSAC, we get the following images.

matching left
matching right
RANSAC left
RANSAC right


matching left
matching right
RANSAC left
RANSAC right


matching left
matching right
RANSAC left
RANSAC right


  Comparing the points obtained after feature matching with those left after RANSAC, you can notice that some of the outliers have been removed. Now, we compare the results of the stitched images obtained by selecting the points manually and automatically:

Manual
Auto


Manual
Auto


Manual
Auto


4B Reflection


  Out of all the knowledge I gained during this project, I think the coolest aspect is matching features through feature descriptors and sifting out the outliers with the RANSAC algorithm.