# Manual Panorama Stitching

## Context

## Part 1: Taking the photos

## Part 2: Warping with Homographies

## Part 3: Rectification

## Part 4: Panoramas!

*Figures: warped images*

# Automatic Panorama Stitching

## Context

## Step 1: Suppressed Harris Corners

*Figures: original vs. suppressed Harris corners*

## Steps 2-3: Feature Extraction & Matching

*Figures: suppressed corners and matched features for each image pair*

## Step 4: RANSAC and Homography Estimation

*Figures: manual vs. automatic stitching results*

By manually picking correspondence points between two images of the same planar scene, we can compute a homography that warps one image so that it lines up with the other. By stitching the warped images together, we can build panoramas!

I took a few photos using a Lumix DX-5 and my Nexus 5; these are just a few of the ones I took in my (messy) apartment and in Las Vegas.

As mentioned earlier, we can select correspondence points to find out how an image was warped, and then rectify it so that both images lie on the same 3D plane. To do this we need to solve a system of linear equations with 8 unknowns. Each correspondence gives us 2 equations, so we could technically solve the system with only four correspondences, but to account for noise and other shifts we use more points to overconstrain the system, which is then solved with least squares. Here are some of the rectified images that will later be used for panoramas.
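The least-squares setup above can be sketched as follows. This is a minimal numpy sketch, not the project's actual code; the function name and parameterization are mine. The bottom-right entry of the homography is fixed to 1, leaving the 8 unknowns mentioned above:

```python
import numpy as np

def compute_homography(src, dst):
    """Estimate a 3x3 homography mapping src points to dst points.

    src, dst: (n, 2) arrays of corresponding points, n >= 4.
    Each correspondence contributes two rows to an overdetermined
    system A h = b with 8 unknowns (h33 fixed to 1), solved by
    least squares.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = src.shape[0]
    A = np.zeros((2 * n, 8))
    b = np.zeros(2 * n)
    for i, ((x, y), (u, v)) in enumerate(zip(src, dst)):
        A[2 * i]     = [x, y, 1, 0, 0, 0, -u * x, -u * y]
        A[2 * i + 1] = [0, 0, 0, x, y, 1, -v * x, -v * y]
        b[2 * i] = u
        b[2 * i + 1] = v
    # Overconstrained when n > 4; lstsq gives the least-squares fit.
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.append(h, 1).reshape(3, 3)
```

With exactly four points this solves the system exactly; with more, noise in the clicked correspondences is averaged out.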

To verify that the warps found above were correct, I tested on some miscellaneous photos containing objects of known shape, choosing the second set of correspondence points so that the object would be rectified to, for example, a rectangle or square. The reason some areas of the following photos look blurry is that after warping, the center of the image is no longer where it appears to be; the blurred parts are what would sit in your peripheral vision if you were looking at the photo from its real center. It is also important to note that these transformations create no new information: they only reuse the pixels that already exist in the image.
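An inverse warp along these lines might look like the sketch below. It assumes numpy and uses nearest-neighbor sampling for brevity; a real implementation would typically use bilinear interpolation. The function name is mine:

```python
import numpy as np

def warp_image(im, H, out_shape):
    """Inverse-warp image `im` with homography H (maps src -> dst).

    For each output pixel we apply H^-1 to find where it came from in
    the source, then sample with nearest-neighbor interpolation.
    Pixels that map outside the source stay zero -- no new
    information is created, only existing pixels are moved.
    """
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    dst = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = np.linalg.inv(H) @ dst                 # dst pixel -> src location
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    valid = (sx >= 0) & (sx < im.shape[1]) & (sy >= 0) & (sy < im.shape[0])
    out = np.zeros((h_out, w_out) + im.shape[2:], dtype=im.dtype)
    flat = out.reshape(h_out * w_out, -1)        # view; writes land in `out`
    flat[valid] = im[sy[valid], sx[valid]].reshape(int(valid.sum()), -1)
    return out
```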

Finally, using everything found above, we can warp images to match one another and then use weighted blending to stitch them together. For blending, the Euclidean distance transform was computed for each image, and a mask was procedurally generated by checking where dist_im1 > dist_im2. Combined with the Gaussian blending from Project 3, this produced smoother transitions.
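The distance-transform mask could be built roughly like this. This is a sketch assuming scipy; the Gaussian feathering here stands in for the project's actual Project-3 blending, and the function name and sigma are mine:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def blend_pair(im1, im2, sigma=15):
    """Blend two aligned single-channel images (zeros where undefined).

    The mask picks whichever image's pixel is farther from its own
    border (larger Euclidean distance transform, i.e. dist1 > dist2),
    then is feathered with a Gaussian so the seam fades smoothly.
    """
    d1 = distance_transform_edt(im1 > 0)
    d2 = distance_transform_edt(im2 > 0)
    mask = (d1 > d2).astype(float)
    soft = gaussian_filter(mask, sigma)
    # Only blend where both images overlap; elsewhere take whichever exists.
    overlap = (im1 > 0) & (im2 > 0)
    out = np.where(im1 > 0, im1, im2)
    out[overlap] = (soft * im1 + (1 - soft) * im2)[overlap]
    return out
```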

For the last one, the colors were off because the lighting in the room changed, and it is very warped because I used the last image as the reference and warped everything else to it, rather than the center image. It still turned out well for a first attempt, I think! For comparison, here is Google's panorama of the same scene, though its lighting was more consistent.

Next, we can stitch panoramas together automatically: find the Harris corners in each image, keep those that are locally maximal and evenly spread, and then find the matches that produce the optimal homography.

Running the Harris corner detector we were given produces over 120k points, far more than we need. So we use Adaptive Non-Maximal Suppression (ANMS) to find the strongest corners while distributing them evenly across the image. It works by assigning each corner a radius: if the corner is the local maximum within that radius and significantly stronger than its neighbors, we keep it; otherwise we suppress it. The radius starts high and then shrinks until the desired number of points remains, in this case 500.
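ANMS can be sketched with an equivalent per-corner formulation: instead of shrinking a global radius, compute each corner's suppression radius directly and keep the largest ones. A numpy sketch (function name, robustness constant, and the O(n^2) loop are mine, not the project's code):

```python
import numpy as np

def anms(coords, strengths, n_keep=500, c_robust=0.9):
    """Adaptive Non-Maximal Suppression (sketch of the idea above).

    For each corner, the suppression radius is the distance to the
    nearest corner that is significantly stronger (by factor
    c_robust). Keeping the n_keep corners with the largest radii
    yields strong points spread evenly across the image.
    """
    coords = np.asarray(coords, dtype=float)      # (n, 2) corner positions
    strengths = np.asarray(strengths, dtype=float)
    n = len(coords)
    radii = np.full(n, np.inf)                    # global maximum keeps inf
    for i in range(n):
        # Corners that dominate corner i by the robustness margin.
        stronger = c_robust * strengths > strengths[i]
        if stronger.any():
            d = np.linalg.norm(coords[stronger] - coords[i], axis=1)
            radii[i] = d.min()
    keep = np.argsort(-radii)[:n_keep]
    return coords[keep]
```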

Next we match features by performing nearest-neighbor matching on 8x8 patches around the points of interest. These are taken from 40x40 windows that are gain-normalized, blurred with a Gaussian kernel, and then downsampled, so that patches from different images are directly comparable. Patches are compared using SSD. For each feature we find its first and second nearest neighbors, take the ratio of the two distances, and threshold it to decide which matches to keep. This keeps only confident matches and handles cases where several features look very similar.
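The SSD comparison plus ratio test could look like this sketch (numpy assumed; the function name and threshold value are mine):

```python
import numpy as np

def match_features(desc1, desc2, ratio_thresh=0.6):
    """Match descriptors by SSD with a nearest-neighbor ratio test.

    desc1, desc2: (n, d) arrays of flattened, normalized patches.
    A match is kept only when its best SSD is much smaller than its
    second best, rejecting ambiguous, repeated-looking features.
    """
    # Pairwise SSD via the identity |a - b|^2 = |a|^2 + |b|^2 - 2 a.b
    ssd = (np.sum(desc1**2, axis=1)[:, None]
           + np.sum(desc2**2, axis=1)[None, :]
           - 2 * desc1 @ desc2.T)
    matches = []
    for i, row in enumerate(ssd):
        order = np.argsort(row)
        best, second = row[order[0]], row[order[1]]
        if best / max(second, 1e-12) < ratio_thresh:
            matches.append((i, int(order[0])))
    return matches
```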

Finally, we perform RANSAC on the matched points to find the best homography. We take 4 random matches, compute the homography they induce, and count how many other points agree with that transformation. Once a certain number of iterations have run, or a satisfactory percentage of the points agree, the 'inliers' are used to compute the final homography, and the images are warped and stitched together as in the first part of the project.
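A sketch of that RANSAC loop, assuming numpy; the pixel threshold, iteration count, and function names are illustrative rather than the project's actual values:

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares homography (h33 = 1) from >= 4 correspondences."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return np.append(h, 1).reshape(3, 3)

def ransac_homography(src, dst, n_iters=500, thresh=2.0, seed=0):
    """RANSAC over matched points, as described above (a sketch).

    Repeatedly fit a homography to 4 random matches, count how many
    points it maps within `thresh` pixels of their match, and refit
    on the largest inlier set found.
    """
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(n, 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        # Degenerate (e.g. collinear) samples may divide by ~0; ignore them.
        with np.errstate(divide="ignore", invalid="ignore"):
            proj = np.hstack([src, np.ones((n, 1))]) @ H.T
            proj = proj[:, :2] / proj[:, 2:]
            inliers = np.linalg.norm(proj - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```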

This was an awesome project to do because it is sort of a culmination of all the other projects in one. Though it was a bit more difficult, it is crazy how much information lives inside the photos we take, and what can be done with it to give an entirely new perspective. Even more awesome was the automatic feature detection, which uses fairly simple methods to find the necessary points; in most cases it made even better panoramas than my manually picked points did. Being able to quickly discern matching features between images opens up a lot of possibilities in video processing and other areas of computer vision.