CS 194-26 Project 4A: Image Warping and Mosaicing

The goal of the project was to dive into the concepts we have been learning over the past few weeks, such as homographies and image warping. By combining homographies with image warping and blending techniques, I was able to create image mosaics by the end of this project.

Shoot and Digitize Images

Before beginning to code, the first task of the project was to find appropriate images to use. I needed to ensure that I took the photographs in a way that allowed the transforms between them to be projective. I accomplished this by taking pictures from the same point of view but at different angles. I used my iPhone with AE/AF lock to ensure that the exposure and focus settings would remain the same and not cause any issues later on.

One Example of the Images that I took (more can be seen below where I rectify and blend images into a mosaic):

image image

Recover Homographies

To compute the homography matrix, I first collected correspondence points using matplotlib's ginput function, as we have done in previous projects. From there I was able to set up the equation p' = Hp and solve for H. I did this by following an article that I found on Piazza: https://towardsdatascience.com/estimating-a-homography-matrix-522c70ec4b2c. This article discussed how to recover the homography matrix for co-planar points. It essentially explained that two 2D images are related by a homography H if both view the same plane from different angles. So, I was able to follow the equation shown below.

image

Once I had created the matrices I needed, I used least squares to solve for my 3x3 homography matrix. Overall, this task helped me find the homography matrix H that maps the point (x, y) in the source image to the point (x', y') in the destination image. It was amazing to see how we can relate two 2D images of the same plane even when they are taken from different angles!
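To make that setup concrete, here is a minimal sketch of how the least-squares estimation might look in numpy. The function name computeH matches the one I mention later in Part B, but the details here (fixing h33 = 1 and stacking two equations per correspondence) are just one reasonable way to implement it.

```python
import numpy as np

def computeH(im1_pts, im2_pts):
    """Estimate the 3x3 homography H mapping im1_pts -> im2_pts.

    im1_pts, im2_pts: (N, 2) arrays of (x, y) correspondences, N >= 4.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(im1_pts, im2_pts):
        # Each correspondence gives two linear equations in the
        # 8 unknown entries of H (h33 is fixed to 1).
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    A, b = np.array(A, dtype=float), np.array(b, dtype=float)

    # Least-squares solution for the 8 free parameters.
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.append(h, 1).reshape(3, 3)
```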

Warp the Images

We can now take the homography we found in the last part and use it to warp our images. I used inverse warping: I applied the inverse homography matrix to every pixel of the output and interpolated as needed, which allowed me to warp my first image onto my second. We can see the power of the homography working with the warp below in the image rectification section!
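Below is a rough sketch of that inverse-warping idea, assuming a warp_image helper that takes the source image, the homography H, and the desired output shape. For brevity it uses nearest-neighbor sampling where my actual code interpolates.

```python
import numpy as np

def warp_image(im, H, out_shape):
    """Inverse-warp `im` with homography H into a canvas of out_shape.

    For every output pixel, apply H^-1 to find where it lands in the
    source image, then sample the source there.
    """
    H_inv = np.linalg.inv(H)
    h_out, w_out = out_shape[:2]

    # Homogeneous (x, y, 1) coordinates of every pixel in the output canvas.
    xs, ys = np.meshgrid(np.arange(w_out), np.arange(h_out))
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])

    # Map output pixels back into the source image and normalize.
    src = H_inv @ coords
    src_x = src[0] / src[2]
    src_y = src[1] / src[2]

    # Nearest-neighbor sampling (bilinear interpolation would be smoother).
    src_x = np.round(src_x).astype(int)
    src_y = np.round(src_y).astype(int)
    valid = (src_x >= 0) & (src_x < im.shape[1]) & (src_y >= 0) & (src_y < im.shape[0])

    out = np.zeros((h_out, w_out) + im.shape[2:], dtype=im.dtype)
    out[ys.ravel()[valid], xs.ravel()[valid]] = im[src_y[valid], src_x[valid]]
    return out
```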

Image Rectification

This part of the project gave me an opportunity to check that my homography and warp functions were working correctly. All I had to do was select my correspondence points as my image 1 points, define another set of correspondences by hand (e.g. the corners of a rectangle) as my image 2 points, compute the homography matrix with these points, and lastly warp the image so that the plane becomes parallel to the image plane. Examples of this can be seen below.
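As a hypothetical illustration of that setup, using the computeH and warp_image sketches from above (the filename, clicked coordinates, and rectangle size here are made up, not the actual points I used):

```python
import numpy as np
import skimage.io as skio

poster = skio.imread('lakers_poster.jpg')   # hypothetical filename

# Four clicked corners of the poster in the photo, in (x, y) order -- made-up values.
poster_pts = np.array([[512, 340], [890, 310], [905, 820], [500, 850]])
# Where those corners should map to: an axis-aligned 300x400 rectangle.
rect_pts = np.array([[0, 0], [300, 0], [300, 400], [0, 400]])

H = computeH(poster_pts, rect_pts)
rectified = warp_image(poster, H, (400, 300, 3))
```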

Example 1: Lakers Poster

image image

Example 2: My Friend Julian's Room

Here the goal was to create a frontal view of the farther-away posters, which still seemed to work well.

image image

Blending the Images Into a Mosaic

To create a mosaic, I used my selected points along with the corners of each image to compute a canvas large enough for the mosaic to render and display well. One thing to note: in my sunset mosaic the two cars in the bottom right appear to clash, but that is only because the cars were moving, even though I took the photos within seconds of each other.
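My create_mosaic function itself isn't shown here, but the canvas-sizing step it relies on looks roughly like the following sketch: warp the corners of one image, combine them with the other image's extent, and derive the canvas size plus the offset needed to keep all coordinates positive.

```python
import numpy as np

def mosaic_canvas(im1, im2, H):
    """Compute the canvas shape and offset for stitching im1 (warped by H)
    together with im2, which stays in its own frame."""
    h1, w1 = im1.shape[:2]
    h2, w2 = im2.shape[:2]

    # Corners of im1 in homogeneous (x, y, 1) form, warped into im2's frame.
    corners = np.array([[0, 0, 1], [w1, 0, 1], [w1, h1, 1], [0, h1, 1]], dtype=float).T
    warped = H @ corners
    warped = warped[:2] / warped[2]

    # Combine the warped corners with im2's own extent.
    all_x = np.concatenate([warped[0], [0, w2]])
    all_y = np.concatenate([warped[1], [0, h2]])
    min_x, min_y = np.floor([all_x.min(), all_y.min()]).astype(int)
    max_x, max_y = np.ceil([all_x.max(), all_y.max()]).astype(int)

    # The offset shifts both images so no coordinate lands at a negative index.
    offset = (-min_x, -min_y)
    canvas_shape = (max_y - min_y, max_x - min_x)
    return canvas_shape, offset
```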

Mosaic 1: Sunset Mosaic

image image

image

Mosaic 2: Snappa Room

image image

image

Mosaic 3: Downstairs Pool Room

image image

image

What I Have Learned

During lecture I was not really able to connect how all these concepts would work together. But it is amazing to see that simply by solving for the homography matrix and using warping we are able to change the perspective of an image quite easily. Additionally, it was awesome to get some insight into how panoramic pictures are created, as I have used this feature many times on my iPhone and have always wondered how it was done.

CS 194-26 Project 4B: Feature Matching for Autostitching

Overview of Part B of the Project

For the second part of this project, our goal was to use the techniques of automatic feature-based alignment that we had recently learned in lecture. I was able to see those concepts come to life. For instance, I analyzed and worked with Harris corner detection, adaptive non-maximal suppression, extracting feature descriptors as patches, and much more. It was really cool to see the automation of it all come to life!

1. Detecting Corner Features with Harris Interest Point Detector

To start, I used the provided starter code to obtain the corner strengths and coordinates for each image. One thing to note is that I had to load the images in as grayscale when obtaining this data; however, I was able to overlay the points on the colored images, as you can see below. I have included multiple examples of images with various minimum distances and thresholds. In the end, I changed the minimum distance between points from 1 to 5 and set the relative threshold to 0.004 in order to produce a better visualization.

All the left images are for image 1; all the right images are for image 2.

Original Image 1 and 2:

image image

Image 1 and 2: Min-Distance = 1

image image

Image 1 and 2: Min-Distance = 3

image image

Image 1 and 2: Min-Distance = 5

image image

We can take it one step further by applying a relative threshold when finding our peak local max. For instance, below we can see the result with a minimum distance of 5 and a relative threshold of 0.004.

Image 1 and 2: Min-Distance = 5, Threshold = 0.004

image image
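Putting the pieces above together, a rough sketch of this detection step is shown below. I am assuming here that the starter code wraps skimage's Harris detector; the filename is hypothetical, and the min_distance / threshold_rel values are the ones discussed above. Note that peak_local_max returns coordinates in (row, col), i.e. (y, x), order, which comes up again in Part 4.

```python
import skimage.io as skio
from skimage.color import rgb2gray
from skimage.feature import corner_harris, peak_local_max

im = skio.imread('image1.jpg')     # hypothetical filename
gray = rgb2gray(im)                # corner strengths are computed on grayscale

h = corner_harris(gray)                                           # per-pixel Harris strength
coords = peak_local_max(h, min_distance=5, threshold_rel=0.004)   # (row, col) peaks
```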

1.2 Adaptive Non-Maximal Suppression

However, you can probably tell that the points are still very clustered. This is where we can utilize adaptive non-maximal suppression (ANMS) to get a much more even distribution of points, which can be seen below. I used the approach described in the provided paper, making use of my corner strengths and a varying suppression radius to obtain a more even spread of points and less clustering across the image, which overall improves the panoramic image we will eventually be creating.

image image
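A sketch of the ANMS step is below. The corner strengths would come from indexing the Harris response at each peak (e.g. strengths = h[coords[:, 0], coords[:, 1]]); the number of kept points and the robustness factor are typical values, not necessarily the ones I settled on.

```python
import numpy as np

def anms(coords, strengths, num_points=500, c_robust=0.9):
    """Adaptive non-maximal suppression (sketch).

    coords: (N, 2) corner coordinates; strengths: (N,) Harris strengths.
    Each point gets a suppression radius: the distance to the nearest
    point that is sufficiently stronger. Keep the num_points largest radii.
    """
    N = len(coords)
    radii = np.full(N, np.inf)
    for i in range(N):
        # Points that dominate point i once the robustness factor is applied.
        stronger = strengths * c_robust > strengths[i]
        if np.any(stronger):
            d = np.linalg.norm(coords[stronger] - coords[i], axis=1)
            radii[i] = d.min()
    keep = np.argsort(radii)[::-1][:num_points]
    return coords[keep]
```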

2. Extracting a Feature Description

Although we have suppressed the clustering of points, we still need to determine how to match them. Thus, in step 2, we implement a feature descriptor that we can use for matching. I did this by creating a 40x40 window around each interest point and examining that neighborhood to find the best feature description. I blurred the image and then downsampled the window to an 8x8 patch. I then subtracted the mean and divided by the standard deviation in order to normalize away any bias and gain. An example of features from images 1 and 2 seen above is shown below.

image image
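A minimal sketch of the descriptor extraction, assuming (row, col) corner coordinates from the previous step; the blur sigma is a guess, and boundary points whose window falls off the image are simply skipped.

```python
import numpy as np
from skimage.filters import gaussian

def extract_descriptors(gray, coords, window=40, patch=8):
    """Take a 40x40 window around each point, blur, subsample to 8x8,
    then bias/gain-normalize each patch."""
    blurred = gaussian(gray, sigma=2)       # blur before downsampling (sigma is a guess)
    half = window // 2
    step = window // patch                  # 40 // 8 = 5: take every 5th pixel
    descriptors, kept = [], []
    for r, c in coords:
        if r - half < 0 or c - half < 0 or r + half > gray.shape[0] or c + half > gray.shape[1]:
            continue                        # skip points whose window falls off the image
        win = blurred[r - half:r + half, c - half:c + half]
        small = win[::step, ::step]         # 40x40 -> 8x8 by subsampling
        small = (small - small.mean()) / (small.std() + 1e-8)
        descriptors.append(small.ravel())
        kept.append((r, c))
    return np.array(descriptors), np.array(kept)
```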

3. Feature Matching

Now that we have extracted our feature descriptors as patches, we can move on to feature matching. I accomplished this by using SSD as the metric for the distance between pairs of feature descriptors/patches. I then used a nearest-neighbor ratio test to decide whether a match was worth keeping: divide the distance to the first nearest neighbor by the distance to the second nearest neighbor, and if that ratio is small enough (i.e., the first NN is clearly better), store the match; otherwise discard it as inadequate.

image image
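A sketch of that matching step; the 0.6 ratio threshold is an assumed value, not necessarily the one I tuned to.

```python
import numpy as np

def match_features(desc1, desc2, ratio=0.6):
    """Match descriptors with SSD plus a nearest-neighbor ratio test.

    For each descriptor in desc1, find its two nearest neighbors in desc2
    by SSD; keep the match only if (1-NN distance) / (2-NN distance) < ratio."""
    matches = []
    for i, d in enumerate(desc1):
        ssd = np.sum((desc2 - d) ** 2, axis=1)      # SSD to every descriptor in desc2
        nn1, nn2 = np.argsort(ssd)[:2]              # indices of the two nearest neighbors
        if ssd[nn1] / (ssd[nn2] + 1e-12) < ratio:   # ratio test
            matches.append((i, nn1))
    return matches
```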

4. Robust Homographies with RANSAC

For the RANSAC algorithm, I essentially followed the slide that we saw in lecture about the RANSAC loop (shown below).

image

I began by selecting random feature pairs using numpy's random shuffle method and grabbing four random pairs from it. One thing to note is an issue that I faced: my points were in the form (y, x) and not (x, y). This was a problem because my computeH needs points in (x, y) order to compute the homography matrix. Thus, I swapped the columns to obtain my points in the form (x, y) before computing my homography and implementing the rest of the RANSAC algorithm. This further helped us determine which matching features are good by removing outliers.
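A sketch of the loop follows, reusing the computeH from Part A. It draws the 4 correspondences with np.random.choice rather than shuffling as I describe above (the effect is the same), and the iteration count and inlier threshold are assumed values.

```python
import numpy as np

def ransac_homography(pts1, pts2, num_iters=1000, eps=2.0):
    """RANSAC loop: repeatedly fit H to 4 random correspondences,
    count inliers, and refit on the largest inlier set.

    pts1, pts2: (N, 2) matched points, already in (x, y) order."""
    best_inliers = np.array([], dtype=int)
    N = len(pts1)
    ones = np.ones((N, 1))
    for _ in range(num_iters):
        idx = np.random.choice(N, 4, replace=False)   # 4 random correspondences
        H = computeH(pts1[idx], pts2[idx])

        # Project all pts1 through H and measure the distance to pts2.
        proj = H @ np.hstack([pts1, ones]).T
        proj = (proj[:2] / proj[2]).T
        dists = np.linalg.norm(proj - pts2, axis=1)

        inliers = np.where(dists < eps)[0]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers

    # Final least-squares fit on the best inlier set.
    return computeH(pts1[best_inliers], pts2[best_inliers])
```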

5. Produce Panoramic/Mosaic

For this part of the project, I was able to use my create_mosaic function from Part A to build the panoramic/mosaic photos shown below. I used the same images from Part A; thus the ones on the left are the manually stitched mosaics, while the ones on the right are the autostitched images created in Part B of this project.

Mosaic Roof Image Manual: image Mosaic Roof Image Auto: image

Mosaic Snappa Room Image Manual: image Mosaic Snappa Room Image Auto: image

Mosaic Pool Room Image Manual: image Mosaic Pool Room Image Auto: image

What I've Learned

I think my favorite part of this project was experimenting with the various concepts we implemented. From messing around with the suppression radius to the number of iterations of RANSAC, it was very cool to see the variety of panoramas and crazy mosaics that could be made. It was also amazing to see how well RANSAC works!