CS 194-26 Computational Photography

IMAGE WARPING and MOSAICING Part A

Victor Vong, CS194-26-acq




Section I: Overview

In this project, we warped images in order to stitch and blend them together into mosaics. We also tested the accuracy of our warping by rectifying images, i.e., changing the perspective of a planar object in an image so that it appears frontal-parallel.

Section II: Homographies

In order to warp a set of points p_a from image A onto the corresponding set p_b in image B, we solve for a homography H, the transformation that maps any point in A to its appropriate location in B. We model our homographies as projective transformations with eight degrees of freedom (eight unknowns). Selecting 4 or more correspondences (points common to the two images) allows us to use least squares to solve for H.
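As a sketch of the least-squares setup: with the bottom-right entry of H fixed to 1, each correspondence contributes two linear equations in the remaining eight unknowns. A minimal NumPy version (the function name `compute_homography` is my own, not from the project code) might look like:

```python
import numpy as np

def compute_homography(pts_a, pts_b):
    """Solve for H such that H @ [x, y, 1]^T ~ [x', y', 1]^T (up to scale).

    pts_a, pts_b: (n, 2) arrays of corresponding points, n >= 4.
    Fixes h33 = 1, leaving eight unknowns solved by least squares.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts_a, pts_b):
        # Two equations per correspondence, derived from the projective model.
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)
```

With exactly 4 correspondences the system is exactly determined; with more, least squares averages out clicking error.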


System of Equations to solve for Homography
Representation of the Homography in use

Section II, continued: Rectified Images

In order to show that our calculation of the homography between two images is correct, we take an image of a planar surface and warp it so that the plane becomes frontal-parallel.
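To apply the warp itself, we sample backwards through the homography so every output pixel gets a value. Here is a minimal inverse-warping sketch in NumPy (grayscale only, nearest-neighbor sampling; the function name and the choice of nearest-neighbor over interpolation are my own simplifications):

```python
import numpy as np

def inverse_warp(img, H, out_shape):
    """Warp img by homography H using inverse mapping: for each output
    pixel (x, y), sample img at H^{-1} @ (x, y, 1), nearest neighbor.

    img: (h, w) grayscale array; H maps img coordinates -> output coordinates.
    """
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])  # homogeneous
    src = np.linalg.inv(H) @ coords
    src /= src[2]                       # divide out w to get pixel coordinates
    sx = np.round(src[0]).astype(int)
    sy = np.round(src[1]).astype(int)
    # Only copy pixels whose preimage lands inside the source image.
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out = np.zeros(out_shape, dtype=img.dtype)
    out.ravel()[valid] = img[sy[valid], sx[valid]]
    return out
```

Inverse mapping avoids the holes that forward mapping leaves when output pixels are skipped.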


Section III: Image Blending and Mosaics

Now that we can accurately find the homography between two images and warp an image accordingly, we can create our mosaic. We first warp imA onto the left side of a larger canvas. We then warp imB to align with the warped imA using a second set of correspondences. Finally, we overlap the warped images and blend them together; in this case we used multiresolution blending.


Selected 4 corners of imA
Selected correspondences in blank canvas
Selected correspondences for imB to also be used for warped imA
Selected same correspondences for warped imA

The final step was to select alignment points (using the alignment and multiresolution code from project 3) and multiresolution blend the images.


Section IV: What I learned

I learned that stitching together mosaics is actually remarkably simple. The hardest part is picking a blending algorithm that works and taking pictures that behave well. I also learned how well inverse warping (compared to forward warping) and homographies work. I was sadly sick during project 4 and didn't get a chance to fully understand it until now.


CS 194-26 Computational Photography

IMAGE WARPING and MOSAICING Part B

Victor Vong, CS194-26-acq


Section I: Automatic Corner Detection and Adaptive Non-Maximal Suppression

The algorithm used for this is the Harris corner detector. The algorithm computes x and y derivatives for each pixel in the image, which are used to compute a corner response for each pixel. The corner responses are then thresholded to keep the most corner-like points in the image, and regions of nearby corners are further collapsed into single corner points by taking the local maxima of each region. As you can see below, without further suppression an overly abundant number of points is chosen. To resolve this, we use Adaptive Non-Maximal Suppression.
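A minimal sketch of the corner-response computation described above, assuming SciPy's Gaussian filter for smoothing the gradient products (the function name, sigma, and the Harris constant k are illustrative, not the exact parameters used in the project):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma=1.0, k=0.05):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel,
    where M is the Gaussian-smoothed outer product of the image gradients."""
    Iy, Ix = np.gradient(img.astype(float))   # y and x derivatives
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2
    tr = Sxx + Syy
    return det - k * tr ** 2
```

Corners score high (both gradient directions strong), edges score negative, and flat regions score near zero, which is what makes thresholding the response meaningful.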


Left Image Harris Corners
Right Image Harris Corners

Adaptive Non-Maximal Suppression calculates a suppression radius for each corner point: the minimum distance between the current point and any other point with a sufficiently higher corner response. The final points are then chosen as those with the highest suppression radii (in our case, the top 500 points).


Left Image Corner Points after Adaptive Non-Maximal Suppression
Right Image Corner Points after Adaptive Non-Maximal Suppression

Section II: Extracting Feature Descriptors and Matching Features

We generate features descriptors in order to match corresponding features between the two images. To do this, we extact image patches centered around each of the points of interest. We create large image patches (40x40 in this case)around the point of interest and scale them down to a smaller resolution (8x8 in this case). We normalize each patch by subtracting its mean and dividing by its standard deviation in order to have create invariance to intensity shifts between the images. We then detect the best matches by computing the sum of squared difference (SSD) between each of the patches in one image to each of the patches in the other image. The ratio between the best match and second best match is calculated for each patch and used to threshold which featrues we choose.


Left Image Matched Features
Right Image Matched Features

Section III: Using RANSAC to remove Outliers

Feature Matching still leaves room to be desired. It can over estimate the accuracy and create outliers. We can eliminate these outliers by using random sample consensus (RANSAC). We randomly sample 4 correspondences and use it to compute a homography. The distances between the predicted points and observed points (after applying thw homography) are calculated and used to determine the number of inliers for this sample of correspondences. The inliers are determined as points that have prediction errors below some tolerance level. This process is then repeated for a number iterations (in our case 500). We then use all the inliers of the sample with the highest number of inliers to compute final homography.


Left Image Final Correspondence points after RANSAC
Right Image Final Correspondence points after RANSAC

Section IV: Panorama Creation

After we found the best fitting homorgraphy we create the panorama same as before. I personally only succeeded in making one. As I kept tuning my parameters my other photos never seemed to get enough points.


Manual Panorama
Automatic Panorama

Section V: Things I Learned

I thought this project was honestly fun. If it wasn't for the bombardment of assignments I had I would've liked to do some of th e extra credit. Implementing feature matching and ransac definitely was simple and proved to be very powerful. Automatically detecting features is a great way to align images, but doesn't always work well if the detector can't match features well. For Ransac if you increase the tolerance too much, it'll use inaccurate features, which will make your panorama worse.