I have a function that does this; take a look at my code.
Here are a couple of examples of images warped to new coordinates. This means taking an image and transforming it with a homography so that selected points in it align with selected points from another image. Below are two images, and one of them warped to the other.
Look at that beauty. Doing this required recovering the homography (by choosing 4+ point correspondences in each image and running a least squares solver) and then doing some fancy looping over the images.
This is my room
This is my TV
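The least-squares solve mentioned above can be sketched in a few lines of numpy. This is a sketch under my own naming, not the exact project code: each correspondence contributes two rows to an overdetermined system in the 8 unknowns of H, with the bottom-right entry fixed to 1.

```python
import numpy as np

def compute_homography(src, dst):
    """Estimate the 3x3 homography H (with H[2,2] = 1) that maps
    src points to dst points, via least squares.

    src, dst: (N, 2) arrays of corresponding (x, y) points, N >= 4.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        # each correspondence gives two linear equations in the 8 unknowns
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1).reshape(3, 3)
```

To warp a point you then compute `H @ [x, y, 1]` and divide by the last coordinate to get back to pixel coordinates.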
Instead of just warping images, here we can specifically warp to make it such that an arbitrary quadrilateral in the source image is now a rectangle. Here are a couple examples
Look at that beauty. Rectification reuses the same machinery: the four corners of the quadrilateral serve as correspondences with the corners of a target rectangle, then you solve for the homography and warp.
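The "fancy looping" for rectification is an inverse warp: for each output pixel, ask where it came from in the source. A minimal sketch, assuming the least-squares solver above and using nearest-neighbor sampling for brevity (the real code may interpolate):

```python
import numpy as np

def compute_homography(src, dst):
    """Least-squares homography mapping src -> dst (N >= 4 points)."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1).reshape(3, 3)

def rectify(img, quad, out_w, out_h):
    """Warp img so that quad (4x2 array of (x, y) corners, listed
    clockwise from top-left) fills an out_h x out_w rectangle.

    Inverse warp with nearest-neighbor sampling: map each output
    pixel back into the source image and read off its value.
    """
    rect = np.array([[0, 0], [out_w - 1, 0],
                     [out_w - 1, out_h - 1], [0, out_h - 1]], float)
    H = compute_homography(rect, np.asarray(quad, float))  # output -> source
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    sx, sy, w = H @ pts
    sx = np.clip(np.round(sx / w).astype(int), 0, img.shape[1] - 1)
    sy = np.clip(np.round(sy / w).astype(int), 0, img.shape[0] - 1)
    return img[sy, sx].reshape(out_h, out_w, *img.shape[2:])
```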
Here are 3 examples of images stitched together into a mosaic
Look at that beauty. Each mosaic comes from the same pipeline: recover the homography from point correspondences, warp one image into the other's frame, and combine the two.
This is my room
This is my TV
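One fiddly part of mosaicing is sizing the output canvas before warping. A small helper (my own name for it, a sketch rather than the project code) projects an image's corners through H to find its warped footprint:

```python
import numpy as np

def mosaic_bounds(H, w, h):
    """Where do the corners of an (h, w) image land under homography H?

    Returns (min_x, min_y, max_x, max_y) of the warped footprint,
    which tells you how big the mosaic canvas must be and how far to
    translate so nothing lands at negative coordinates.
    """
    corners = np.array([[0, 0, 1], [w - 1, 0, 1],
                        [w - 1, h - 1, 1], [0, h - 1, 1]], float).T
    warped = H @ corners
    xy = warped[:2] / warped[2]  # divide out the homogeneous coordinate
    return xy[0].min(), xy[1].min(), xy[0].max(), xy[1].max()
```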
This project was actually pretty cool, if a little painful. Learning how to get the homography to work properly was an interesting part. In particular, I had an issue for a while that really taught me the difference between projective and affine transforms. I knew that with a projective transform, when you input a point [x, y, 1] you get [x'w, y'w, w]. With an affine transform, you could get away with swapping the order of the coordinates, doing something like [y, x, 1] -> [y', x', 1]. This is NOT the case with a projective transform :D You have to make sure you feed in points in the same (x, y) order you used when solving for H.
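A tiny numeric illustration of the ordering gotcha (the matrix values are made up): with a nonzero perspective row, the homogeneous scale w depends on which coordinate goes where, so the "swap in, swap out" trick silently gives a different point.

```python
import numpy as np

# A projective transform with a nonzero perspective row (made-up numbers).
H = np.array([[1.0,   0.2,    3.0],
              [0.1,   0.9,   -2.0],
              [0.001, 0.002,  1.0]])

def apply_h(H, x, y):
    """Apply H to (x, y) and divide by w to leave homogeneous coords."""
    xw, yw, w = H @ np.array([x, y, 1.0])
    return np.array([xw / w, yw / w])

right = apply_h(H, 30.0, 40.0)        # point fed in (x, y) order
wrong = apply_h(H, 40.0, 30.0)[::-1]  # fed as (y, x), output swapped back
# right and wrong disagree, because w = 0.001*x + 0.002*y + 1 changes
# when the coordinates trade places.
```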
I used the Harris corner detection method to find corners in the images. This involved tuning the Gaussian sigma and the neighborhood radius.
Above are the source images of my room, and then the images with points drawn on top
Here are some images of my tv and then the images with points drawn on top
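Roughly what the detector computes, as a sketch with my own parameter names (the course-provided Harris code differs in its details): smooth the structure tensor with a Gaussian of the given sigma, form the corner response, then keep local maxima within the given radius.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def harris_response(img, sigma=1.0):
    """Harris corner response for a grayscale float image.

    sigma sets the Gaussian window used to smooth the structure tensor.
    """
    Iy, Ix = np.gradient(img)  # np.gradient returns d/drow, d/dcol
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det / (trace + 1e-12)  # harmonic-mean variant of the response

def pick_corners(R, radius=2):
    """Keep local maxima of the response within a (2*radius+1) window,
    discarding weak responses (flat and edge regions)."""
    peaks = (R == maximum_filter(R, size=2 * radius + 1))
    peaks &= R > R.max() * 0.01
    ys, xs = np.nonzero(peaks)
    return np.column_stack([xs, ys])  # (x, y) coordinates
```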
In this part of the project we had to filter the points down so that the remaining corners were strong but also well separated across the image.
Above are the source images of my room, and then the images with the ANMS points overlaid. Here you can see that there are fewer points and they are better spaced out. You can also see that many points on each side don't correspond to points in the other image. This unfortunately led to issues further on in the project.
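The ANMS rule can be sketched as follows (my variable names; `c_robust` is the usual 0.9 robustness factor): each point's suppression radius is its distance to the nearest sufficiently stronger point, and we keep the n points with the largest radii, so the survivors are both strong and well spread out.

```python
import numpy as np

def anms(coords, strengths, n=100, c_robust=0.9):
    """Adaptive non-maximal suppression (ANMS).

    coords: (N, 2) corner positions; strengths: (N,) corner responses.
    """
    # squared pairwise distances between all corners
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    # stronger[i, j]: point j is sufficiently stronger than point i
    stronger = strengths[None, :] * c_robust > strengths[:, None]
    d2 = np.where(stronger, d2, np.inf)
    radii = d2.min(axis=1)  # inf for the globally strongest point
    keep = np.argsort(-radii)[:n]
    return coords[keep]
```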
I implemented this part of the project, but wasn't able to get it fully working. As such, there aren't many visuals to show here or for the rest of the project. In the code, I wrote a method that iterates over every pair of points between the two images (essentially a bipartite graph): for every point i in image 1 it ranks all points j in image 2 by similarity. I used the provided dist2 method to compute the distances between all pairs, took the two best scores, NN1 and NN2, and computed the ratio NN1/NN2; if the ratio was below a defined threshold, the match was kept.
Unfortunately the method was not robust enough: a significant number of points were matched incorrectly, which caused problems later on.
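The ratio test described above can be sketched like this (a sketch with my own names; the pairwise-distance line plays the role of the provided dist2 helper):

```python
import numpy as np

def match_features(desc1, desc2, ratio=0.6):
    """Match descriptors using the nearest-neighbor ratio test.

    desc1: (N, D), desc2: (M, D). For each descriptor in desc1, find its
    two nearest neighbors in desc2 by squared distance (NN1 and NN2) and
    accept the match only if NN1/NN2 < ratio, i.e. the best match is
    clearly better than the runner-up. Returns (i, j) index pairs.
    """
    # all pairwise squared distances, like the provided dist2 helper
    d2 = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(-1)
    matches = []
    for i, row in enumerate(d2):
        j1, j2 = np.argsort(row)[:2]
        if row[j1] / (row[j2] + 1e-12) < ratio:
            matches.append((i, int(j1)))
    return matches
```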
I implemented this part of the project. The output of the previous part had a bit too much noise, which caused RANSAC to fail to recover a valid homography. I believe the implementation of RANSAC itself is valid; the problem was that most points weren't matched with exactly the right point in the other picture.
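For reference, this is roughly the loop I mean (a sketch under my own naming, reusing the least-squares solver from earlier): sample 4 matches, fit a candidate H, count matches reprojected within a pixel threshold, keep the biggest consensus set, and refit on all of its inliers.

```python
import numpy as np

def compute_homography(src, dst):
    """Least-squares homography mapping src -> dst (N >= 4 points)."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1).reshape(3, 3)

def ransac_homography(src, dst, n_iters=1000, thresh=2.0, seed=0):
    """4-point RANSAC for a homography.

    Repeatedly fit H to a random sample of 4 matches, count the matches
    whose reprojection error is under `thresh` pixels, remember the
    largest inlier set, then refit H on all of those inliers.
    """
    rng = np.random.default_rng(seed)
    src_h = np.hstack([src, np.ones((len(src), 1))])
    best = np.zeros(len(src), bool)
    for _ in range(n_iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = compute_homography(src[idx], dst[idx])
        proj = src_h @ H.T
        proj = proj[:, :2] / proj[:, 2:]  # back from homogeneous coords
        inliers = np.linalg.norm(proj - dst, axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return compute_homography(src[best], dst[best]), best
```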
Below are a few of the failed attempts at automatic stitching images
This project was pretty hard. I learned about some cool algorithms, and that indexing is horrible. I thought ANMS was pretty cool.