COMPSCI 194-26: Project 4

Kaijie Xu

nortrom@berkeley.edu

Overview

In this project, I create panoramic photos with image warping and mosaicing.

Part A: Image Warping and Mosaicing

Part 1.1: Shoot the Pictures

The first step is to shoot pictures such that the transformation between them is projective and there is enough overlap to find sufficient correspondences.

At first, I try to take the photos by standing still and simply rotating the camera between shots, but this does not work very well.

So I build a scene in Unreal Engine 4 with some official assets and set up a fixed camera that takes pictures at different rotation angles. This works well even though the overlap is only about half of each image.

Part 1.2: Recover Homographies

The next step is to recover the homography between two images. We need to find the homography matrix H that maps a point (x, y) in the source image to its corresponding point (x', y') in the target image, as follows:
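In homogeneous coordinates, with w as the projective scale factor:

```latex
\begin{bmatrix} w x' \\ w y' \\ w \end{bmatrix}
=
H
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
=
\begin{bmatrix} a & b & c \\ d & e & f \\ g & h & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
```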

We need at least 4 point correspondences to recover H, since it is a 3x3 matrix with 8 degrees of freedom (the lower-right entry is a scale factor, which we set to 1).

However, for a more precise result, I pick more than 4 points (generally 8) per image and use least squares to solve for H, as follows:
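Expanding x' = (ax + by + c)/(gx + hy + 1), and similarly for y', gives two linear equations per correspondence in the eight unknowns, which stack into an overdetermined system. A minimal sketch of the least-squares solve (the function name is mine, not the writeup's):

```python
import numpy as np

def compute_homography(src, dst):
    """Recover the 3x3 homography H mapping src -> dst via least squares.
    src, dst: (N, 2) arrays of corresponding points, N >= 4."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        # x' = (a x + b y + c) / (g x + h y + 1)  =>  two linear rows per point
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)  # put the fixed scale 1 back in
```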

Part 1.3: Image Rectification

I also rectify some images to a frontal viewpoint in order to verify that my function finds the correct transformation matrix.
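Rectification only needs a homography from four hand-picked corners of a planar object to an axis-aligned rectangle. A minimal sketch reusing compute_homography from above (the corner coordinates and the skimage warp are illustrative, not the writeup's exact code):

```python
import numpy as np
from skimage.transform import ProjectiveTransform, warp

# hypothetical hand-clicked corners of a rectangular object, in (x, y) order
corners = np.array([[102, 215], [480, 190], [495, 560], [90, 585]], float)
# target: the same object seen head-on as a 400x300 rectangle
target = np.array([[0, 0], [400, 0], [400, 300], [0, 300]], float)

H = compute_homography(corners, target)      # maps corners -> target
tf = ProjectiveTransform(matrix=H)
rectified = warp(image, tf.inverse, output_shape=(300, 400))  # image: the source photo
```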

Here is a sample:

Part 1.4: Blend Images into Mosaic

Now I can blend multiple images into a mosaic. To keep things simple, I use 2 images at a time and warp the left and right images into a common frame.
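A rough sketch of how the two frames can be set up, assuming H maps left-image coordinates to right-image coordinates (the canvas size and shift are a loose bound of my own, not the writeup's exact logic):

```python
import numpy as np
from skimage.transform import ProjectiveTransform, warp

def warp_to_mosaic(left, right, H):
    """Warp `left` into `right`'s frame, given H mapping left -> right coords.
    T shifts both images right by one image width so warped pixels stay
    at positive coordinates on a double-width canvas."""
    h, w = right.shape[:2]
    T = np.array([[1, 0, w], [0, 1, 0], [0, 0, 1]], float)
    out = (h, 2 * w)
    left_w = warp(left, ProjectiveTransform(matrix=T @ H).inverse, output_shape=out)
    right_w = warp(right, ProjectiveTransform(matrix=T).inverse, output_shape=out)
    return left_w, right_w
```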

Blending is done with alpha blending, using a cosine falloff evaluated from 0 to pi/2 over half the image.
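A minimal sketch of the alpha ramp as I read it from the description above (the exact placement of the falloff is my assumption):

```python
import numpy as np

def cosine_alpha(width):
    """Weight for the left image: 1.0 across the left half, then falling
    as cos(t), t in [0, pi/2], across the right half (reaches 0.0)."""
    half = width // 2
    ramp = np.cos(np.linspace(0, np.pi / 2, width - half))
    return np.concatenate([np.ones(half), ramp])

# with left_w, right_w already in the common mosaic frame:
# alpha = cosine_alpha(left_w.shape[1])[None, :, None]   # broadcast rows/channels
# mosaic = alpha * left_w + (1 - alpha) * right_w
```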

The result is as below:

The mosaic from images taken with my own camera is acceptable but far from perfect.

Although it may look weird (my source images are quite elongated and the overlap is small), the final mosaic for the UE4 images is still satisfying.

Part B: Autostitching

In Part B, we use the algorithm from the paper to automatically select correspondence points in two images, compute a robust homography, and finally stitch the images together.

Part 2.1: Corner Detection

First, I use the provided harris.py to find candidate corners and their Harris strengths (h-values).

Once we have our Harris corner points, we apply Adaptive Non-Maximal Suppression (ANMS) to keep a well-distributed subset of strong points, as shown below; a code sketch of ANMS follows the figure.

ANMS (n = 350)
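A minimal sketch of ANMS (the function name and the robustness constant of 0.9 are my assumptions):

```python
import numpy as np

def anms(coords, h, n=350, c_robust=0.9):
    """Adaptive Non-Maximal Suppression.
    coords: (N, 2) corner coordinates; h: (N,) Harris strengths.
    Each point's suppression radius is the distance to the nearest point
    that is clearly stronger; keep the n points with the largest radii."""
    radii = np.full(len(coords), np.inf)
    for i in range(len(coords)):
        stronger = h > h[i] / c_robust       # points that dominate point i
        if stronger.any():
            d = np.linalg.norm(coords[stronger] - coords[i], axis=1)
            radii[i] = d.min()
    keep = np.argsort(-radii)[:n]
    return coords[keep]
```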

Part 2.2: Feature Descriptor Extraction

Once the interest points are chosen in each image, we need to extract a descriptor for each one so that they can be matched.

This is done by taking a 40x40 window around each point and downsampling it to an 8x8 patch.
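A minimal sketch of the extraction (the bias/gain normalization at the end is my assumption; the writeup only mentions downsizing):

```python
import numpy as np

def extract_descriptor(img, y, x):
    """40x40 window around (y, x), subsampled to 8x8 (every 5th pixel).
    Assumes the point is at least 20 px from the image border."""
    patch = img[y - 20:y + 20, x - 20:x + 20].astype(float)
    desc = patch[::5, ::5]                        # 8x8
    return (desc - desc.mean()) / (desc.std() + 1e-8)
```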

Here are two samples (clearly not a corresponding pair):

Part 2.3: Feature Matching

Once descriptors are computed for all the points in each image, we can move on to matching.

To match, we find the nearest neighbor (1-NN) in the second image for each descriptor in the first image, using the norm of the difference between descriptors as the distance metric.

We then divide the distance to the nearest neighbor by the distance to the second-nearest neighbor, and keep only the matches whose ratio is below a threshold (0.3 for me).
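A minimal sketch of this ratio-test matching (descriptors assumed flattened to 64-vectors; the function name is mine):

```python
import numpy as np

def match_features(desc1, desc2, thresh=0.3):
    """For each descriptor in the first image, keep its 1-NN in the
    second image only if d(1-NN) / d(2-NN) < thresh.
    desc1, desc2: (N, 64) arrays of flattened 8x8 descriptors."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] / dists[j2] < thresh:
            matches.append((i, j1))
    return matches
```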

The matched features for the two images are shown below:

Part 2.4: Blend Images into Mosaic

Using the correspondences generated in Part B, we first compute a robust homography from the (still noisy) matches and then blend the images into a mosaic the same way we did in Part A. The result is shown below.
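The writeup does not name the robust fitting method, but the standard choice in the paper is 4-point RANSAC; here is a minimal sketch of that approach, reusing the compute_homography helper from Part 1.2:

```python
import numpy as np

def ransac_homography(pts1, pts2, iters=1000, eps=2.0):
    """4-point RANSAC: repeatedly fit H to 4 random matches, count how many
    matches reproject within eps pixels, keep the largest inlier set, and
    refit H on those inliers with least squares."""
    homog = np.hstack([pts1, np.ones((len(pts1), 1))])
    best = np.array([], dtype=int)
    for _ in range(iters):
        idx = np.random.choice(len(pts1), 4, replace=False)
        H = compute_homography(pts1[idx], pts2[idx])
        proj = homog @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        inliers = np.where(np.linalg.norm(proj - pts2, axis=1) < eps)[0]
        if len(inliers) > len(best):
            best = inliers
    return compute_homography(pts1[best], pts2[best])
```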

As you can see, this is much better than the result I got in Part A.

Here is the process on another sample, showing how we auto-stitch two images: