In this part of the project, I captured multiple images from the same point of view. The mapping between these images can be modeled as a 2D projective transformation because all the light rays are captured at the same center of projection (or, for distant scenes, this behavior is closely approximated). Given point correspondences, we can compute a homography that defines this transformation. Using this transformation, we can adjust the perspective of images (called rectifying them):
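As a rough sketch of the homography computation (this is an illustrative implementation, not necessarily the exact one used in the project): with four or more point correspondences, the eight unknowns of the 3x3 homography (fixing the bottom-right entry to 1) can be recovered by setting up two linear equations per correspondence and solving in a least-squares sense.

```python
import numpy as np

def compute_homography(src, dst):
    """Estimate a 3x3 homography H mapping src points to dst points.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    Builds the linear system A h = b for the 8 unknowns of H
    (with H[2, 2] fixed to 1) and solves it by least squares.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        # From xp = (h1 x + h2 y + h3) / (h7 x + h8 y + 1), and
        # similarly for yp, after multiplying through by the denominator:
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)
```

For rectification, `dst` would be the corners of the desired rectangle; warping the image through the resulting `H` (e.g. by inverse-mapping each output pixel) produces the rectified view.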
In the above, we transformed the image using a homography defined by correspondences to a rectangle parallel to the camera's image plane. We can also define these correspondences in terms of matched features in another image. This, hopefully, warps the image into the plane of the other image so that the two can be stitched together into a mosaic. To stitch the images together, I used Laplacian blending, which worked better in some cases than others. Additionally, the overlap between the individual pictures could have been improved with better selection of the correspondences. I did attempt to improve the correspondences by calculating the SSD over the overlapping region and perturbing the points to reduce the error. Below are the images captured, followed by the stitched mosaic.
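The Laplacian blending step can be sketched roughly as follows (a stack-based variant using Gaussian filtering rather than a downsampled pyramid; the function name, number of levels, and `sigma` are illustrative assumptions, not the project's exact parameters). Each frequency band of the two images is combined using a progressively smoother version of the seam mask, so low frequencies blend over a wide region while high frequencies transition sharply.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_blend(im1, im2, mask, levels=4, sigma=2.0):
    """Blend im1 and im2 (same-shape float arrays) with a Laplacian stack.

    mask is 1 where im1 should dominate and 0 where im2 should.
    Illustrative sketch: levels and sigma are assumed parameters.
    """
    g1, g2, gm = im1.astype(float), im2.astype(float), mask.astype(float)
    out = np.zeros_like(g1)
    for _ in range(levels):
        low1 = gaussian_filter(g1, sigma)
        low2 = gaussian_filter(g2, sigma)
        # Laplacian band = current level minus its blurred version;
        # combine the bands with the current (smoothed) mask.
        out += gm * (g1 - low1) + (1 - gm) * (g2 - low2)
        g1, g2 = low1, low2
        gm = gaussian_filter(gm, sigma)
    # Add the final low-frequency residual, blended with the smoothest mask.
    out += gm * g1 + (1 - gm) * g2
    return out
```

With a binary mask marking which side of the seam each image occupies, this blends the overlap region of the warped images before compositing them into the mosaic.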