Image Warping and Mosaicing

Shoot the Pictures

I shot the pictures in my friend's room, because he has a lot of cool posters and a nice setup:

First Original Image Second Original Image

Recover Homographies

For recovering homographies, I went through the entire Piazza thread and found this article: https://towardsdatascience.com/estimating-a-homography-matrix-522c70ec4b2c. I needed to find H such that p' = Hp. I found this description of the matrix particularly helpful:

Matrix Description
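
Since the matrix figure doesn't reproduce here, this is a sketch of the standard formulation the article describes (with the bottom-right entry of H fixed to 1):

    \begin{bmatrix} w x' \\ w y' \\ w \end{bmatrix}
    =
    \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & 1 \end{bmatrix}
    \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}

Dividing out w, each correspondence (x, y) -> (x', y') gives two linear equations in the eight unknowns a through h:

    a x + b y + c - g x x' - h y x' = x'
    d x + e y + f - g x y' - h y y' = y'

so four or more point pairs are enough to solve for H with least squares.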

I got corresponding points in each image using ginput inside my get_points1 function, which I ran in the terminal. I selected the points on the image and then copied and pasted the point values into Jupyter. I wrote my computeH function and then used it to find the relation between each set of points in my two images.
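
For reference, here is a minimal sketch of what a computeH like mine could look like; my actual implementation may differ. It builds the linear system from the equations above and solves it with np.linalg.lstsq:

    import numpy as np

    def computeH(im1_pts, im2_pts):
        # im1_pts, im2_pts: (N, 2) arrays of corresponding (x, y) points, N >= 4.
        # Solves p' = H p in a least-squares sense, with H[2, 2] fixed to 1.
        A, b = [], []
        for (x, y), (xp, yp) in zip(im1_pts, im2_pts):
            A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
            b.append(xp)
            A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
            b.append(yp)
        h, *_ = np.linalg.lstsq(np.array(A, dtype=float),
                                np.array(b, dtype=float), rcond=None)
        return np.append(h, 1.0).reshape(3, 3)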

Warp the Images

I used inverse warping with the inverse of the H matrix. I initialized all the pixels in the resulting image to 0 so that the non-relevant areas would be black, essentially acting as a black alpha mask. Then I iterated through each pixel of the result and used the inverse H matrix to map it back to the source image, setting that pixel of the result to the corresponding source value.
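
A minimal sketch of that inverse-warping loop, with nearest-neighbor sampling and an output canvas the same size as the input (a simplification; the mosaic section describes sizing the canvas properly):

    def warpImage(im, H):
        # Inverse warping: for each output pixel, look up the source pixel via H^-1.
        h, w = im.shape[:2]
        out = np.zeros_like(im)   # zero-initialized, so unmapped pixels stay black
        Hinv = np.linalg.inv(H)
        for y in range(h):
            for x in range(w):
                sx, sy, sw = Hinv @ np.array([x, y, 1.0])
                sx, sy = int(round(sx / sw)), int(round(sy / sw))
                if 0 <= sx < w and 0 <= sy < h:
                    out[y, x] = im[sy, sx]
        return out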

Image Rectification

To rectify the following examples, I first selected 4 points on each of the original images, and then chose 4 new points on each image that would map them to a perfect rectangle. I used my get_points1 function to define the corresponding points on the original image (I called get_points1 in the terminal to select the points, then copied and pasted the values into Jupyter), and created the array of reference points by hand. I basically eyeballed the image axes and chose coordinate values that would make a good-sized rectangle roughly covering the original image. I then passed the image points and the reference points to my computeH function, and used the resulting H and the original image as the parameters for warpImage. For each of the following images, you can easily tell that the view has shifted from a side view to a front view.
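
As a concrete example of that flow (the coordinates and filename here are made up purely for illustration):

    import matplotlib.pyplot as plt

    im = plt.imread('composite.jpg')   # hypothetical filename
    # 4 points clicked with get_points1, and 4 hand-chosen rectangle corners.
    im_pts  = np.array([[120, 340], [560, 310], [590, 720], [100, 760]])
    ref_pts = np.array([[150, 300], [550, 300], [550, 700], [150, 700]])
    H = computeH(im_pts, ref_pts)
    rectified = warpImage(im, H)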

Original Composite Image Rectified Composite Image
Original DSS Image Warped DSS Image

Blend the images into a mosaic

For the mosaic blending, I had to slightly modify my warpImage function to suit the size of the new canvas. Instead of looping over just the dimensions of the original image, I used the warped corners to calculate min and max values for x and y, and used those to place both images on a newly defined, larger canvas. Then I created an alpha mask for the new canvas using np.zeros and iterated through the pixels, mapping the pixels from warped image 1 and regular image 2 onto the new canvas. This created the stitched effect and added my photos together nicely. Before creating the mosaic, I also had to use my get_points function here to get corresponding points on both images; I ran it in the terminal and copied and pasted the points into Jupyter.
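
Here is a rough sketch of that canvas-sizing and blending logic, assuming H maps image 1 into image 2's coordinate frame and using a simple binary mask (image 2 wins in the overlap); my actual blending details may differ:

    def mosaic(im1, im2, H):
        # Warp im1's corners with H to find the extent of the warped image,
        # take min/max together with im2's extent, and size the canvas from that.
        h1, w1 = im1.shape[:2]
        h2, w2 = im2.shape[:2]
        corners = np.array([[0, 0, 1], [w1, 0, 1], [w1, h1, 1], [0, h1, 1]], dtype=float).T
        warped = H @ corners
        warped = warped[:2] / warped[2]
        xmin = int(np.floor(min(warped[0].min(), 0)))
        ymin = int(np.floor(min(warped[1].min(), 0)))
        xmax = int(np.ceil(max(warped[0].max(), w2)))
        ymax = int(np.ceil(max(warped[1].max(), h2)))
        canvas = np.zeros((ymax - ymin, xmax - xmin) + im2.shape[2:])
        # Paste im2 directly, shifted so that negative coordinates fit on the canvas.
        canvas[-ymin:-ymin + h2, -xmin:-xmin + w2] = im2
        filled = np.zeros(canvas.shape[:2], dtype=bool)   # simple binary alpha mask
        filled[-ymin:-ymin + h2, -xmin:-xmin + w2] = True
        # Inverse-warp im1 into every canvas pixel that im2 did not fill.
        Hinv = np.linalg.inv(H)
        for y in range(canvas.shape[0]):
            for x in range(canvas.shape[1]):
                if filled[y, x]:
                    continue
                sx, sy, sw = Hinv @ np.array([x + xmin, y + ymin, 1.0])
                sx, sy = int(round(sx / sw)), int(round(sy / sw))
                if 0 <= sx < w1 and 0 <= sy < h1:
                    canvas[y, x] = im1[sy, sx]
        return canvas

Here are a few examples: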

First Room Image Second Room Image Room Mosaic
First Library Image Second Library Image Library Mosaic
First Balcony Image Second Balcony Image Balcony Mosaic

What I Learned!

The biggest thing I learned from this project was how matrix multiplication and what I learned in Math 54 can actually be used to find the relationship between sets of corresponding points in two images, and how useful that can be. The concept of warping was fascinating to me, and it was really cool to see how I could warp an image so that its objects ended up in the same orientation as in the reference image. I never knew that matrices could be so useful, even though they appear everywhere; I had never had such a hands-on experience with the usefulness of linear algebra until now.