Image Warping and Mosaicing
Kevin Lin, klinime@berkeley.edu
Introduction
This is part A of the image mosaicing project, primarily focused on stitching photos taken at the same location into a seamless mosaic. We choose a base image and warp the others onto the same plane as the base image, via appropriate homographies computed from human-labeled keypoints. Part B of the project will automate keypoint labeling via feature detectors and feature descriptors.
Shoot the Pictures
The following are photos of my room that I will be stitching, with the middle as the base image:
Taken with an iPhone 11 without adjusting any fancy settings because I have no camera 😭.
Recover Homographies
We compute each homography by least squares on a set of corresponding keypoints. Keypoints:
Some tricks for selecting good keypoints include:
- Select corners
- Accuracy matters A LOT: a single outlier can ruin the homography
- Many is good, assuming similar accuracy, so small margins of error get averaged out
- Spread out is good, to make sure the homography retrieved fits the whole image rather than just one region
From the keypoints, we solve for the homography:
We want to perform least squares on the homography matrix, so we need to rewrite the equations. Since each warped coordinate is divided by the third homogeneous coordinate w = gx + hy + 1, we can multiply both equations through by w and rearrange to derive:
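As a sketch of the derivation (the entry names a–h for the eight unknowns are my notation, chosen to match the gx + hy + 1 term above): fixing the bottom-right entry of H to 1, each correspondence (x, y) ↦ (x′, y′) expands into two equations that are linear in the unknowns.

```latex
H = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & 1 \end{pmatrix},
\qquad
x' = \frac{ax + by + c}{gx + hy + 1}, \quad
y' = \frac{dx + ey + f}{gx + hy + 1}
```

Multiplying through by the denominator and moving the unknowns to one side gives, per correspondence:

```latex
\begin{pmatrix}
x & y & 1 & 0 & 0 & 0 & -x x' & -y x' \\
0 & 0 & 0 & x & y & 1 & -x y' & -y y'
\end{pmatrix}
\begin{pmatrix} a \\ b \\ c \\ d \\ e \\ f \\ g \\ h \end{pmatrix}
=
\begin{pmatrix} x' \\ y' \end{pmatrix}
```

Stacking two such rows for every keypoint pair yields an overdetermined linear system in the eight unknowns.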
which is now in the form we can solve with least squares.
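A minimal sketch of this solve in NumPy (the function name and exact array layout are my choice, not from the original write-up):

```python
import numpy as np

def compute_homography(src, dst):
    """Solve for the 3x3 homography H mapping src -> dst by least squares.

    src, dst: (N, 2) arrays of corresponding (x, y) keypoints, N >= 4.
    The bottom-right entry of H is fixed to 1, leaving 8 unknowns.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = src.shape[0]
    A = np.zeros((2 * n, 8))
    b = np.zeros(2 * n)
    for i, ((x, y), (xp, yp)) in enumerate(zip(src, dst)):
        # Two rows per correspondence, linear in the 8 unknowns a..h.
        A[2 * i]     = [x, y, 1, 0, 0, 0, -x * xp, -y * xp]
        A[2 * i + 1] = [0, 0, 0, x, y, 1, -x * yp, -y * yp]
        b[2 * i] = xp
        b[2 * i + 1] = yp
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.append(h, 1.0).reshape(3, 3)
```

With more than four correspondences the system is overdetermined, and least squares averages out small labeling errors, which is exactly why many well-spread keypoints help.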
Image Rectification
With the homographies computed, we simply need to warp the pixels with the transformation. In general, we want to first define the pixel positions in our target image and interpolate the inverse-warped pixels. Since a homography preserves the convexity of our image, we can forward-warp the corners (i.e. a rectangle) into the target plane (now a convex polygon), then define the pixels within that polygon as our targets and inverse-warp each one. Results of rectifying:
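The inverse-warping step above can be sketched as follows (a minimal grayscale version; for simplicity it samples every pixel of a fixed output canvas rather than only the pixels inside the forward-warped polygon):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(img, H, out_shape):
    """Inverse-warp a grayscale image onto an out_shape = (height, width) canvas.

    H maps source (x, y) coordinates to target coordinates; we invert it
    and sample the source image at the preimage of every target pixel.
    """
    H_inv = np.linalg.inv(H)
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]                                # target pixel grid
    ones = np.ones_like(xs)
    coords = np.stack([xs.ravel(), ys.ravel(), ones.ravel()])  # homogeneous coords
    src = H_inv @ coords
    src_x = src[0] / src[2]                                    # divide by w
    src_y = src[1] / src[2]
    # Bilinear interpolation (order=1); out-of-bounds samples become 0.
    warped = map_coordinates(img, [src_y, src_x], order=1, cval=0.0)
    return warped.reshape(h, w)
```

For color images the same sampling is applied per channel; restricting the target pixels to the forward-warped polygon is an optimization on top of this.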
Blending
Before blending, I first padded the rectified images such that (1) they all have the same dimensions and (2) corresponding keypoints are at the same position. We also have the exact mask corresponding to each warped image, but since the masks are not mutually exclusive (indeed they cannot be, or there would be no corresponding keypoints), we cannot directly blend in the same fashion as our previous project of blending the “Oraple”.
Instead, we define a seam between two neighboring images as the pixels that are equidistant to the edges of the images and update our masks according to the seam, so that the masks become mutually exclusive. Then, we can blend according to a two-layer Laplacian stack, i.e. “two-band blending”. The stitched mosaic:
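One way to sketch this seam-plus-two-band step (my own minimal interpretation of the description above, using a Euclidean distance transform for the equidistant seam and a single Gaussian split for the two bands):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def two_band_blend(img1, img2, mask1, mask2, sigma=2.0):
    """Blend two aligned grayscale images along an equidistant seam.

    mask1/mask2 are boolean arrays marking valid (warped) pixels.
    In the overlap, each pixel goes to the image whose interior it lies
    deeper inside; that boundary is the equidistant seam. Low frequencies
    are feathered across the seam, high frequencies are hard-switched.
    """
    d1 = distance_transform_edt(mask1)         # distance to each image's edge
    d2 = distance_transform_edt(mask2)
    hard = (d1 >= d2).astype(float)            # mutually exclusive mask
    hard[mask1 & ~mask2] = 1.0                 # pixels only one image covers
    hard[mask2 & ~mask1] = 0.0
    soft = gaussian_filter(hard, sigma)        # feathered weights for low band

    low1, low2 = gaussian_filter(img1, sigma), gaussian_filter(img2, sigma)
    high1, high2 = img1 - low1, img2 - low2    # two-layer "Laplacian stack"
    low = soft * low1 + (1 - soft) * low2      # smooth blend of low frequencies
    high = hard * high1 + (1 - hard) * high2   # hard seam for high frequencies
    return low + high
```

Blending the low band smoothly hides exposure differences, while switching the high band hard at the seam avoids ghosting of fine detail, which is the point of two-band blending.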
Conclusion
I was awed by the mosaic at first, but then small hiccups caught my eye, and I could not find the right set of keypoints to eliminate them. I hope part B can take care of that for me so I don’t need to deal with my butterfingers 😣. The “aha moment” when I figured out how to rewrite the projective transformation was very satisfying though 😊.