The primary goal of this project is to explore how image transformations can be used to rectify and warp between different perspectives of the same object or scene. Starting with separate images of the same scene taken from different viewing angles, we use homographies to stitch together a mosaic through projection.
Before we can do anything, we need pictures that are relatively consistent in lighting, content, and camera position (although the viewing directions vary). Here are some example shots of Durant Ave., near the Asian Ghetto, taken from my balcony.
This step involves solving the homography equation seen in class via least squares (the generalized case for when there are more than 4 points), using the correspondence points that were defined earlier.
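The least-squares setup can be sketched as follows. This is a minimal illustration, not the project's actual code: each correspondence contributes two rows to a linear system `A h = b` in the 8 unknown homography entries (with the bottom-right entry fixed to 1), and `np.linalg.lstsq` handles the overdetermined case when more than 4 points are supplied. The function name `compute_homography` is my own choice.

```python
import numpy as np

def compute_homography(pts_src, pts_dst):
    """Fit a 3x3 homography H mapping pts_src -> pts_dst by least squares.

    Each pair ((x, y), (x', y')) yields two linear equations in the 8
    unknowns h1..h8 (h9 is fixed to 1); with >4 pairs the system is
    overdetermined and lstsq returns the least-squares solution.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts_src, pts_dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.array(A, dtype=float),
                            np.array(b, dtype=float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)
```

With 5 hand-picked points mapped to themselves, for instance, the recovered matrix comes out as the identity, which is a quick sanity check on the setup.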
For my mosaics, here are the computed homography matrices (after hand-selecting correspondence points):
To warp, we apply an image's computed homography to its defined correspondence points (the 4 corners, in each of my examples), then recover the remaining pixel values in the new space via inverse warping with interpolation.
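The inverse-warping idea above can be sketched like this, assuming NumPy and a hypothetical `warp_image` helper: instead of pushing source pixels forward (which leaves holes), every pixel of the output canvas is mapped back through H⁻¹ and sampled from the source. Nearest-neighbor sampling is used here for brevity; bilinear interpolation would be a drop-in refinement.

```python
import numpy as np

def warp_image(img, H, out_shape):
    """Inverse warp: map each output pixel back through H^{-1} and
    sample the source image (nearest neighbor for simplicity)."""
    h_out, w_out = out_shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    coords = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1)  # homogeneous
    src = Hinv @ coords
    src = src[:2] / src[2]                                        # dehomogenize
    sx = np.round(src[0]).astype(int).reshape(h_out, w_out)
    sy = np.round(src[1]).astype(int).reshape(h_out, w_out)
    # Mask of output pixels whose preimage lands inside the source image.
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out = np.zeros((h_out, w_out) + img.shape[2:], dtype=img.dtype)
    out[valid] = img[sy[valid], sx[valid]]
    return out, valid
```

Returning the `valid` mask alongside the warped image is handy later, when deciding which canvas pixels each image should contribute to.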
Image Rectification

Now that we have a method to warp images using the homographies computed earlier, we can use the points from each image to define a new canvas, computing the shifts necessary to place the images on it. All that's left is to fill in each pixel of the new canvas from the corresponding image, with some linear blending at the seams!
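The canvas-and-shift step can be sketched as follows (a minimal version, with the function name `mosaic_canvas` being my own): project the source image's corners through H, take the union of that footprint with the reference image's extent, and derive both the canvas size and a translation that shifts all coordinates to be non-negative.

```python
import numpy as np

def mosaic_canvas(H, src_shape, ref_shape):
    """Compute the mosaic canvas size and the translation T that shifts
    the warped source and the reference image onto non-negative pixels.

    Compose T @ H when warping the source, and T alone for the reference.
    """
    h, w = src_shape[:2]
    corners = np.array([[0, 0, 1], [w, 0, 1], [w, h, 1], [0, h, 1]], dtype=float).T
    warped = H @ corners
    warped = warped[:2] / warped[2]
    # Union of the warped source footprint and the reference image extent.
    xs = np.concatenate([warped[0], [0, ref_shape[1]]])
    ys = np.concatenate([warped[1], [0, ref_shape[0]]])
    x0, y0 = np.floor([xs.min(), ys.min()]).astype(int)
    x1, y1 = np.ceil([xs.max(), ys.max()]).astype(int)
    T = np.array([[1, 0, -x0], [0, 1, -y0], [0, 0, 1]], dtype=float)
    return (y1 - y0, x1 - x0), T
```

For the blending itself, a simple option consistent with the text is a linear alpha ramp across the overlap region, so pixel values cross-fade from one image to the other instead of showing a hard seam.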
Durant Ave. Mosaic

I learned many things from this project, but the one thing I found the coolest was how we A) find a transformation between two perspectives (a homography), and B) use it to project images into the same plane and ultimately change perspective! Playing around with different warpings, blendings, and correspondences was very educational as well, giving me better intuition for how images and cameras work and how we perceive the world. Fun project!