By Bernard Zhao for CS194-26 Spring 2020
All code can be found in P5.ipynb
These photos were shot from the same spot, just a short walk from my house. I shot enough for a whole 180 degree view, but stitching them together non-cylindrically made the result too big.
I handpicked 6 correspondences in Photoshop and recorded them in the notebook. I then used the formulation below and solved with least squares to obtain the homography matrix (after appending the fixed entry 1 and reshaping to 3x3).
Check out computeH
to see the implementation.
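A minimal sketch of that least-squares setup, assuming each correspondence contributes two rows of the standard 8-unknown system (the function name and exact argument layout of my `computeH` may differ):

```python
import numpy as np

def compute_h(pts1, pts2):
    """Least-squares homography mapping pts1 -> pts2.

    pts1, pts2: (n, 2) arrays of corresponding (x, y) points, n >= 4.
    Solves A h = b for the 8 unknowns, then appends the fixed h33 = 1.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts1, pts2):
        # each correspondence gives two equations, one per output coordinate
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(h, 1).reshape(3, 3)  # add 1, then reshape to 3x3
```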
I used an inverse warp to project im1
onto im2
:
Check out warpImage
to see the implementation.
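The inverse warp can be sketched like this: for every output pixel, apply the inverse homography to find where to sample in the source. This version uses nearest-neighbor sampling for brevity; my `warpImage` may interpolate instead.

```python
import numpy as np

def warp_image(im, H, out_shape):
    """Inverse-warp im by homography H into an output of out_shape.

    For each output pixel (x, y), compute H^-1 @ (x, y, 1), divide out
    the homogeneous coordinate, and sample im there (nearest neighbor).
    """
    h_out, w_out = out_shape[:2]
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = np.linalg.inv(H) @ coords
    src = src[:2] / src[2]                      # divide out homogeneous w
    sx = np.round(src[0]).astype(int)
    sy = np.round(src[1]).astype(int)
    # only copy pixels whose source location falls inside the image
    valid = (sx >= 0) & (sx < im.shape[1]) & (sy >= 0) & (sy < im.shape[0])
    out = np.zeros(out_shape, dtype=im.dtype)
    out[ys.ravel()[valid], xs.ravel()[valid]] = im[sy[valid], sx[valid]]
    return out
```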
The warping works perfectly for rectification as well. Check out my quarantine setup where I was doing this project:
Now you can (sorta) see my screen from my original perspective!
Now using an alpha channel, we can put im2
and im1_warped
together:
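One way to sketch that compositing step, assuming each image comes with a binary validity mask used as its alpha (the real mosaic may feather the alpha near the seam instead of splitting the overlap 50/50):

```python
import numpy as np

def composite(im1_warped, im2, mask1, mask2):
    """Combine two aligned (H, W, 3) images using validity masks as alpha.

    Where both are valid, each contributes equally; elsewhere the
    single valid image wins. Regions outside both stay black.
    """
    a1 = mask1.astype(float)
    a2 = mask2.astype(float)
    w = a1 + a2
    w[w == 0] = 1  # avoid dividing by zero where neither image covers
    return (a1[..., None] * im1_warped + a2[..., None] * im2) / w[..., None]
```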
This was a surprisingly tough project because of how tedious offset calculation became. Hopefully this makes the second part easier. I've gotten very familiar with numpy
array indexing, which is honestly some magic as well. It was fun to play with my camera, manually tweaking the settings. It was also interesting to see the effects of my lens hood in the images, as the imperfect blending makes the difference in lens flare very obvious.
Since my images had such a high resolution, I had to tweak min_distance=20
in the starter code to end up with around 10,000 points:
Then, using ANMS, I brought those points down to 500, all nicely spread out. I used a c_robust
value of 0.9.
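A sketch of the ANMS step, following the MOPS-style suppression-radius idea: each point's radius is its distance to the nearest point that is sufficiently stronger (scaled by c_robust), and we keep the points with the largest radii. Names and vectorization here are my own; the O(n^2) distance matrix is fine at ~10,000 points.

```python
import numpy as np

def anms(coords, strengths, n_keep=500, c_robust=0.9):
    """Adaptive non-maximal suppression.

    coords: (n, 2) corner positions; strengths: (n,) Harris responses.
    A point j "dominates" point i when c_robust * strengths[j] exceeds
    strengths[i]; each point's radius is the distance to its nearest
    dominating point, and the n_keep largest-radius points are kept,
    which spreads the survivors across the image.
    """
    d2 = np.sum((coords[:, None, :] - coords[None, :, :]) ** 2, axis=-1)
    dominated = c_robust * strengths[None, :] > strengths[:, None]
    d2 = np.where(dominated, d2, np.inf)       # ignore non-dominating points
    radii = np.sqrt(d2.min(axis=1))            # inf for the strongest point
    keep = np.argsort(-radii)[:n_keep]
    return coords[keep]
```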
Then, for each of the 500 points, I sampled the 40 by 40 window surrounding it, then rescaled it down to an 8 by 8:
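That descriptor extraction might look like the following sketch, which subsamples every 5th pixel of the 40x40 window to get the 8x8 patch. The bias/gain normalization at the end is an assumption on my part (it is standard for MOPS-style descriptors) and the border handling is left out for brevity:

```python
import numpy as np

def describe(im, coords, patch=40, size=8):
    """Extract one flattened size*size descriptor per corner.

    im: grayscale float image; coords: (n, 2) array of (row, col) centers,
    assumed far enough from the border for a full patch x patch window.
    Subsamples every patch//size-th pixel, then normalizes each descriptor
    to zero mean and unit variance.
    """
    half, step = patch // 2, patch // size
    descs = []
    for r, c in coords:
        window = im[r - half:r + half:step, c - half:c + half:step]
        window = (window - window.mean()) / (window.std() + 1e-8)
        descs.append(window.ravel())
    return np.array(descs)
```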
Then, using those feature descriptors, we can match each one to the descriptor with the smallest distance, but only if the ratio between the smallest and second-smallest distance is less than our threshold 0.3
.
You can already see the points lining up correspondingly in this step, overlaying each other in the general area of the images. However, you can also see some points that don't have a match in the other image.
To fix that, we run RANSAC, which I performed 100 iterations of, using a threshold of 0.5.
These points all match nicely.
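A self-contained sketch of 4-point RANSAC under those settings (100 iterations, 0.5-pixel reprojection threshold). The inlined least-squares solver mirrors the homography formulation from earlier in the writeup; the random-sampling details are my own:

```python
import numpy as np

def fit_homography(pts1, pts2):
    """Least-squares homography (same A h = b formulation as computeH)."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts1, pts2):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(h, 1).reshape(3, 3)

def ransac_homography(pts1, pts2, n_iters=100, thresh=0.5, seed=0):
    """4-point RANSAC: fit H to random minimal samples, count points whose
    reprojection error is under thresh pixels, keep the largest inlier
    set, and refit H on all of its inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(pts1), dtype=bool)
    homog = np.hstack([pts1, np.ones((len(pts1), 1))]).T   # 3 x n
    for _ in range(n_iters):
        idx = rng.choice(len(pts1), 4, replace=False)
        H = fit_homography(pts1[idx], pts2[idx])
        proj = H @ homog
        with np.errstate(divide="ignore", invalid="ignore"):
            proj = (proj[:2] / proj[2]).T                  # dehomogenize
        inliers = np.linalg.norm(proj - pts2, axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return fit_homography(pts1[best], pts2[best]), best
```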
Auto vs. By Hand comparisons for each of the three mosaics:
I learned that implementing a research paper (albeit an easier version) isn't nearly as intimidating as I thought!