Our Goal
Shooting the Pictures
First of all, to create a mosaic like the one above, we need a fundamental assumption about how the images are taken:
All images should be taken from a single position in space, with only the viewing angle changing.
We rely on this because, in terms of the plenoptic function, one view can be projected onto another with a homographic transformation only when the center of projection does not move.
Recovering Homographies
Here is how we define the homographies: 8 degrees of freedom, so 8 unknown variables. Suppose we have n points from image 1 and n corresponding points from the other image. Then the equation above has to hold, or very nearly hold, for each pair. If we write down the equations for every point, then with more than 4 points we get more equations than unknowns. The best way to handle this is to set up a least squares problem as shown below.
Solving the least squares problem gives the best estimate of the transformation matrix.
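A minimal sketch of this least-squares setup in NumPy, assuming the points are given as (x, y) arrays and fixing the bottom-right entry of H to 1:

```python
import numpy as np

def compute_homography(pts1, pts2):
    """Least-squares homography mapping pts1 -> pts2.

    pts1, pts2: (n, 2) arrays of matching (x, y) points, n >= 4.
    Fixing H[2, 2] = 1 leaves 8 unknowns; each correspondence
    contributes two linear equations.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts1, pts2):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(h, 1).reshape(3, 3)
```

With exactly 4 correspondences in general position this solves the system exactly; with more points it returns the least-squares fit.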
Warping Images - Image Rectification
Image warping is very similar to what we did in the previous project with interpolation. Using it, we can simulate the effect of turning our gaze within the image
Initial image || turning right
One good application of this is recovering small parts of the image that are hard to see in the original camera orientation but much easier to see from another view, like in the images below:
Initial image || rectified
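The inverse-warping step could look roughly like this: a sketch for a single grayscale channel, using `scipy.ndimage.map_coordinates` for the interpolation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(img, H, out_shape):
    """Inverse-warp a grayscale img by homography H into a canvas.

    H maps image coordinates (x, y, 1) to canvas coordinates.
    We invert H, send each canvas pixel back to the source image,
    and interpolate there, so the output has no holes.
    """
    rows, cols = out_shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    canvas_pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = np.linalg.inv(H) @ canvas_pts
    src_x = src[0] / src[2]
    src_y = src[1] / src[2]
    # map_coordinates takes (row, col) = (y, x); order=1 is bilinear
    out = map_coordinates(img.astype(float), [src_y, src_x],
                          order=1, cval=0.0)
    return out.reshape(rows, cols)
```

Pixels that map outside the source image are filled with `cval=0.0`; a color image would apply this per channel.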
Mosaic
The last part of this project is creating a panorama from several images taken with different rotations:
- My first step is to create a huge canvas that fits all the images
- We put the initial image in the center
- Let's take the image to the right. We can define correspondences between our central image and the current one, and use them to compute both the current_to_canvas and canvas_to_current homographies
- Using the current_to_canvas transformation, find the edges of the next projection
- Using the canvas_to_current transformation and interpolation, get the value of each pixel
- We project it onto the canvas that already holds the previous images, giving us two overlapping images
- Now we can blend them. For blending I used sigmoid blending: the same sigmoid function applied to the alpha channel in each row. I set it so the sigmoid equals 0.5 at the average x coordinate of all matching points, so the divide falls between the two images
Now we can blend them using this sigmoid mask as an alpha channel
- Now we can repeat this for the other images, but mapping correspondences between the third image and the second one that is already on the canvas. The final results are below:
||||
||||
||||
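The sigmoid mask used in the blending step above can be sketched as follows; the `sharpness` parameter is an illustrative knob of my own, not something fixed by the method:

```python
import numpy as np

def sigmoid_alpha(width, x_center, sharpness=0.05):
    """Per-row alpha mask: 1 on the left, falling to 0 on the right,
    crossing 0.5 at x_center (e.g. the mean x of the matched points).
    """
    x = np.arange(width)
    return 1.0 / (1.0 + np.exp(sharpness * (x - x_center)))

# per row: blended = alpha * left_img + (1 - alpha) * right_img
```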
Now let's make it totally automatic!!!
Basically our goal is to find correspondence points in two images. A good starting point is Harris corners, because if an object appears in two images it has the same corners in both. For Harris corners, we estimate how much a patch around a point changes when we shift it around. This can be estimated from the eigenvalues of the following matrix
If both of its eigenvalues are big, there is a big change in all directions, which is exactly what we need
We can actually estimate the corner strength by the formula
Here are two results of that
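The exact strength formula above lived in an image, so as an assumption here is the standard det/trace ("harmonic mean") Harris strength computed from the structure tensor; the writeup's formula may differ in detail:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_strength(img, sigma=1.0):
    """Corner strength det(M) / trace(M) at every pixel, where M is
    the 2x2 structure tensor of Gaussian-smoothed squared gradients.
    Large values mean both eigenvalues of M are large (a corner).
    """
    dy, dx = np.gradient(img.astype(float))
    Ixx = gaussian_filter(dx * dx, sigma)
    Iyy = gaussian_filter(dy * dy, sigma)
    Ixy = gaussian_filter(dx * dy, sigma)
    det = Ixx * Iyy - Ixy ** 2
    trace = Ixx + Iyy
    return det / (trace + 1e-8)  # epsilon avoids division by zero
```

Corner candidates are then the local maxima of this strength map.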
Adaptive Non-Maximum Suppression
But I get the feeling that this is quite crowded. What if we could get a more evenly distributed set of points? We can, with adaptive non-maximum suppression: we sort all the Harris points by their suppression radius, i.e. for each point A, the distance to the closest point whose corner strength is at least 1.111 times bigger than A's. At the end we just pick the 500 points with the largest radii as our output
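A sketch of ANMS under the description above; the 1.111 factor corresponds to a robustness constant of c = 0.9 (comparing strengths as c * f_j > f_i):

```python
import numpy as np

def anms(coords, strengths, n_keep=500, c_robust=0.9):
    """Adaptive non-maximum suppression.

    For each point, its suppression radius is the distance to the
    nearest point that is significantly stronger (c_robust * other
    > this one, i.e. the other is ~1.111x stronger). Keep the n_keep
    points with the largest radii for an even spatial spread.
    """
    order = np.argsort(-strengths)  # strongest first
    coords = coords[order]
    strengths = strengths[order]
    radii = np.full(len(coords), np.inf)
    for i in range(1, len(coords)):
        stronger = strengths[:i] * c_robust > strengths[i]
        if stronger.any():
            d2 = np.sum((coords[:i][stronger] - coords[i]) ** 2, axis=1)
            radii[i] = np.sqrt(d2.min())
    keep = np.argsort(-radii)[:n_keep]
    return coords[keep]
```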
Feature Matching
In this step we need to match features between the images. To do that, we take a 40x40 patch around each point, resample it to 8x8, blur it, and perform bias/gain normalization
|||||
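A sketch of the descriptor extraction; note that I blur before subsampling (as anti-aliasing), which may differ slightly from the order stated above, and the blur sigma is an illustrative choice:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_descriptor(img, x, y):
    """8x8 descriptor: take the 40x40 patch around (x, y), blur it,
    subsample every 5th pixel, then normalize to zero mean and unit
    standard deviation (bias/gain normalization).
    Assumes (x, y) is at least 20 pixels from the image border.
    """
    patch = img[y - 20:y + 20, x - 20:x + 20].astype(float)
    patch = gaussian_filter(patch, sigma=2.0)  # anti-alias first
    desc = patch[::5, ::5]                     # 40x40 -> 8x8
    return (desc - desc.mean()) / (desc.std() + 1e-8)
```

The normalization makes the descriptors robust to brightness (bias) and contrast (gain) differences between the two images.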
Then we somehow need to establish correspondences between the two images, right? We can use Lowe thresholding, where we keep a match based on how much better the best match is than the second best for each patch in the first set. The examples above are examples of such patches. Here are the correspondences between two images:
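The ratio test above can be sketched like this, assuming the descriptors are flattened into vectors; the 0.6 threshold is an illustrative choice:

```python
import numpy as np

def match_features(desc1, desc2, ratio=0.6):
    """Lowe ratio test: for each descriptor in desc1, find its two
    nearest neighbours in desc2 and keep the match only when the
    best distance is much smaller than the second best.
    desc1: (n, d), desc2: (m, d) flattened descriptors, m >= 2.
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        nn = np.argsort(dists)[:2]  # two nearest neighbours
        if dists[nn[0]] < ratio * dists[nn[1]]:
            matches.append((i, nn[0]))
    return matches
```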
RANSAC
As you can see, there are still some outliers. One very simple and at the same time powerful algorithm for getting rid of them, given that we know the two images are related by a homography, is RANSAC. We pick four matches at random and compute the homography between them. Then we transform all the points by the computed homography and keep the matches that agree with that transformation. This way we maintain a set of inliers, and that becomes our set of correspondences
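A sketch of this RANSAC loop, bundling a minimal least-squares homography fit so it is self-contained; the iteration count and the 2-pixel agreement threshold are illustrative choices:

```python
import numpy as np

def fit_homography(pts1, pts2):
    """Least-squares homography (H[2,2] fixed to 1) mapping pts1 -> pts2."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts1, pts2):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(h, 1).reshape(3, 3)

def ransac_homography(pts1, pts2, n_iters=500, eps=2.0):
    """Repeatedly fit H to 4 random matches, transform all of pts1
    by H, and keep the largest set of matches landing within eps
    pixels of pts2. Returns a boolean inlier mask."""
    rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(pts1), dtype=bool)
    ones = np.ones((len(pts1), 1))
    for _ in range(n_iters):
        idx = rng.choice(len(pts1), 4, replace=False)
        H = fit_homography(pts1[idx], pts2[idx])
        proj = (H @ np.hstack([pts1, ones]).T).T
        proj = proj[:, :2] / proj[:, 2:3]   # divide out w
        inliers = np.linalg.norm(proj - pts2, axis=1) < eps
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

The final homography would then be re-fit by least squares on all the surviving inliers.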
Here we go, we are done: we have the correspondences needed for the homographies
Here are some examples of automated panoramas
||||
||||
||||