Here, I shot photos from multiple perspectives while remaining in the same position, simply rotating the camera to face different directions. I also made sure the scene contained some linear objects to make choosing correspondence points easier, and that consecutive photos overlapped substantially.

In this section, I recovered the parameters of the transformation between each pair of images. In our case, the transformation is a homography: p' = Hp, where H is a 3x3 matrix with 8 degrees of freedom (the lower-right entry is a scale factor and is set to 1). I used least squares to recover the homographies.
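A minimal sketch of the least-squares recovery is below. The function name and the N×2 point format are my assumptions, not the project's actual interface; each correspondence contributes two rows to a linear system in the 8 unknown entries of H.

```python
import numpy as np

def compute_homography(src, dst):
    """Estimate H (with H[2,2] = 1) such that dst ~ H @ src, by least
    squares on the 8 unknown entries. src, dst: (N, 2) arrays, N >= 4."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # From u = (h00*x + h01*y + h02) / (h20*x + h21*y + 1), and same for v
        A.append([x, y, 1, 0, 0, 0, -x * u, -y * u]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -x * v, -y * v]); b.append(v)
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)
```

With more than four correspondences the system is overdetermined, which is exactly why least squares (rather than a direct solve) is used.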

Using the inverse of the homography matrix I calculated, I created a warp function that warps each image into the same perspective so the images can later be blended. Here, I used cv2.remap to resample the images at the warped locations. For the mosaic specifically, I warped the side images to match the center one and allowed extra height and width (height × 1.5, width × 3) to account for the fact that warping can change the image dimensions.
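The inverse-warping idea can be sketched in plain numpy (using nearest-neighbour sampling here; cv2.remap with the same coordinate grids does this with proper interpolation). The function name and output-canvas convention are assumptions for illustration.

```python
import numpy as np

def warp_image(img, H, out_shape):
    """Inverse-warp img by homography H into an output canvas: for each
    output pixel, apply H^{-1} to find where it came from in the source,
    then sample the source there (nearest neighbour)."""
    H_inv = np.linalg.inv(H)
    out_h, out_w = out_shape
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    ox, oy = xs.ravel(), ys.ravel()
    src = H_inv @ np.stack([ox, oy, np.ones_like(ox)])
    sx = np.round(src[0] / src[2]).astype(int)  # divide out homogeneous w
    sy = np.round(src[1] / src[2]).astype(int)
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out = np.zeros((out_h, out_w) + img.shape[2:], dtype=img.dtype)
    out[oy[valid], ox[valid]] = img[sy[valid], sx[valid]]
    return out
```

Going backwards from output pixels to source pixels (rather than forwards) is what guarantees the warped image has no holes.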

As an example, I took pictures of tilted objects from different perspectives and passed axis-aligned rectangular points into my warp function to rectify the images below. (The examples aren't perfect, since the points I passed in don't exactly match the objects' true dimensions.)

Finally, I blended the warped images together with an alpha value of 0.5 to create the final mosaic.
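A minimal sketch of this blend for single-channel images, assuming zero-valued pixels mark empty canvas (an assumption on my part): average where the two warped images overlap, and keep each image unchanged where only it contributes.

```python
import numpy as np

def alpha_blend(a, b, alpha=0.5):
    """Blend two warped images of the same size with a constant alpha
    in the overlap region; zero pixels are treated as empty canvas."""
    a, b = a.astype(float), b.astype(float)
    both = (a > 0) & (b > 0)                    # overlap region
    out = a + b                                 # only one image contributes
    out[both] = alpha * a[both] + (1 - alpha) * b[both]
    return out
```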

I think the coolest thing I learned was that computing a homography lets you map the points in one image to the corresponding points in the other. This way, we can take pictures from different perspectives and then warp them into a common perspective using the homography matrix.

Here, I used the provided Harris interest point detector to find interest points at a single scale.
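The project used a provided detector, but the idea behind the Harris response can be sketched as follows (a minimal numpy version, assuming a 3x3 box window rather than whatever window the provided code uses):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Single-scale Harris corner response R = det(M) - k * trace(M)^2,
    where M sums the outer product of image gradients over a 3x3 window."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box3(a):
        # 3x3 box sum via zero-padded shifts
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2
```

R is large and positive where the gradient varies strongly in both directions (a corner), negative along edges, and near zero in flat regions.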

In this section, I implemented adaptive non-maximal suppression (ANMS), which keeps only the strongest interest points while ensuring they are spread evenly across the image.
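ANMS can be sketched like this (the robustness constant and function signature are assumptions on my part): each point gets a suppression radius equal to the distance to the nearest sufficiently stronger point, and we keep the points with the largest radii.

```python
import numpy as np

def anms(coords, strengths, n_points, c_robust=0.9):
    """Adaptive non-maximal suppression: return indices of the n_points
    interest points with the largest suppression radii, where a point's
    radius is its distance to the nearest point that is more than
    1/c_robust times stronger."""
    coords = np.asarray(coords, float)
    strengths = np.asarray(strengths, float)
    radii = np.full(len(coords), np.inf)
    for i in range(len(coords)):
        stronger = strengths[i] < c_robust * strengths  # points that suppress i
        if stronger.any():
            d = np.linalg.norm(coords[stronger] - coords[i], axis=1)
            radii[i] = d.min()
    return np.argsort(-radii)[:n_points]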

Using the ANMS points I calculated, I implemented a feature descriptor extractor: it extracts a description of the local image structure around each point that supports reliable and efficient matching of features across images. In this case, I normalized each descriptor and used dist2 to compute pairwise distances between the descriptors of two different pictures.
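A sketch of the two steps, with assumed parameters (an 8x8 descriptor subsampled from a 40x40 window, bias/gain normalization, and a Lowe-style ratio test on dist2-style squared distances; the exact settings in my project may differ):

```python
import numpy as np

def extract_descriptors(img, coords, window=40, out=8):
    """8x8 descriptors subsampled from a 40x40 window around each (y, x)
    point, then bias/gain normalized (subtract mean, divide by std)."""
    step, half = window // out, window // 2
    descs = []
    for y, x in coords:
        patch = img[y - half:y + half:step, x - half:x + half:step].astype(float)
        descs.append(((patch - patch.mean()) / (patch.std() + 1e-8)).ravel())
    return np.array(descs)

def match_descriptors(d1, d2, ratio=0.7):
    """Match descriptors by squared Euclidean distance (dist2-style),
    keeping a match only if the best distance is well below the second
    best (ratio test)."""
    dists = ((d1[:, None, :] - d2[None, :, :]) ** 2).sum(-1)
    matches = []
    for i, row in enumerate(dists):
        order = np.argsort(row)
        best, second = order[0], order[1]
        if row[best] < ratio * row[second]:
            matches.append((i, best))
    return matches
```

The normalization makes descriptors invariant to brightness and contrast changes between photos, and the ratio test discards ambiguous matches.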

RANSAC was used to apply geometric constraints and reject the remaining outliers. RANSAC repeatedly selects a random sample of correspondences and fits a model (here, a homography) to it; all points are then tested against this fitted model, and the sample whose model has the most inliers is kept. Finally, the model is re-estimated using all the points that fit the best estimate, and those inlier points are returned.
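The loop can be sketched as follows (iteration count, inlier threshold, and function names are assumptions for illustration):

```python
import numpy as np

def fit_homography(src, dst):
    # least-squares solve for the 8 unknowns of H (H[2,2] = 1)
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * u, -y * u]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -x * v, -y * v]); b.append(v)
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def ransac_homography(src, dst, n_iters=500, thresh=2.0, seed=0):
    """Repeatedly fit H to 4 random correspondences, count inliers
    (points that reproject within `thresh` pixels), keep the largest
    inlier set, and refit H on all of its inliers."""
    rng = np.random.default_rng(seed)
    src_h = np.column_stack([src, np.ones(len(src))])
    best_inliers = np.zeros(len(src), bool)
    for _ in range(n_iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        proj = src_h @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        inliers = np.linalg.norm(proj - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

Because four correspondences determine a homography exactly, each random sample gives a candidate H, and any sample containing an outlier produces a model that few other points agree with.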

The final mosaics are created by warping the images using the homography computed from the RANSAC inliers. These warped images are then stitched together, just like in Part A! I additionally added a border to the x-range of the canvas, improving on Part A by ensuring that most of each picture is included in the final result.

In the end, my autostitch worked better than the version where I chose points manually. This is expected, since my finger is probably not super accurate at clicking the exact same spot every time! Computer beats human again :)

The coolest thing I learned in this project was probably how to use the RANSAC algorithm to narrow down the significant points and remove outliers. Without this step, my mosaic definitely did not look as clean. I found it interesting that such a simple technique, randomly sampling points and fitting the rest against the resulting model, could produce such a robust result.