The goal of this project is to stitch together many images to form larger composite images.
I took some pictures of the road between Barker Hall and Koshland Hall on campus and used those for warping. I also found some other image sets online and tested with those.
*[Figure: image 1 and image 2, side by side]*
I calculated the homography matrix H using the equations described in the textbook (Szeliski, pp. 494-496). I solved the system Ah = b, where b is populated with the chosen coordinates on image 1, the image that the others are warped into. b is a column vector of the form [x1, y1, x2, y2, ..., xn, yn]. A is made up of a stack of 2×8 blocks, one for each coordinate pair:

    [ x  y  1  0  0  0  -x·x'  -y·x' ]
    [ 0  0  0  x  y  1  -x·y'  -y·y' ]

where (x, y) are coordinates from image 2 and (x', y') are the matching coordinates from image 1. I then use np.linalg.lstsq() to solve for the eight entries of h and construct the homography matrix H from them, with the bottom-right entry fixed to 1.
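The least-squares setup above can be sketched as follows (the function name and the (n, 2) point format are my own; the block structure follows the equations described above):

```python
import numpy as np

def compute_homography(pts2, pts1):
    """Fit H mapping points in image 2 to points in image 1 via Ah = b.

    pts2: (n, 2) array of (x, y) in image 2; pts1: (n, 2) of (x', y') in image 1.
    Needs at least 4 non-collinear correspondences.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts2, pts1):
        # Two rows of A per correspondence, matching the 2x8 block above.
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    # Append the fixed bottom-right entry (1) and reshape into 3x3.
    return np.append(h, 1.0).reshape(3, 3)
```

With exactly four correspondences the system is square and lstsq recovers the exact homography; with more points it returns the least-squares fit.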
To warp image 2, I apply H to the locations of its pixels. I first take the coordinates of the four corners, warp them with H, and fill in a polygon with skimage.draw.polygon() using those warped coordinates. This polygon is the shape image 2 will take once warped into image 1's point of view. I then create a blank image of zeros and place pixels from the unwarped image 2 into it at the corresponding coordinates of the warped polygon; that is how I create the warped image.
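A minimal sketch of this warp, assuming the "corresponding coordinates" are found by mapping each destination pixel back through the inverse of H (function name, argument order, and nearest-neighbor sampling are my own choices):

```python
import numpy as np
from skimage.draw import polygon

def warp_image(im2, H, out_shape):
    """Warp im2 into image 1's frame; H maps image-2 (x, y) to image-1 (x, y)."""
    h2, w2 = im2.shape[:2]
    # Four corners of image 2 in homogeneous (x, y, 1) coordinates.
    corners = np.array([[0, 0, 1], [w2 - 1, 0, 1],
                        [w2 - 1, h2 - 1, 1], [0, h2 - 1, 1]], float).T
    wc = H @ corners
    wc = wc[:2] / wc[2]                       # back to inhomogeneous (x, y)
    # Fill the warped quadrilateral; polygon takes (rows, cols) = (y, x).
    rr, cc = polygon(wc[1], wc[0], shape=out_shape)
    # Map every destination pixel back into image 2 and sample it.
    Hinv = np.linalg.inv(H)
    src = Hinv @ np.stack([cc, rr, np.ones_like(rr)])
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    valid = (sx >= 0) & (sx < w2) & (sy >= 0) & (sy < h2)
    out = np.zeros(out_shape + im2.shape[2:], dtype=im2.dtype)
    out[rr[valid], cc[valid]] = im2[sy[valid], sx[valid]]
    return out
```

Sampling destination pixels through the inverse homography avoids the holes that forward-mapping source pixels would leave.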
Warping image 2 as described above gives a rectified image, with image 1 as the reference.
*[Figure: image 2 and warped image 2, side by side]*
To blend the images, I created a mask with a sigmoid function that falls gradually from 1 to 0 around a specified threshold. This approach produces a blurring effect wherever similar features in the two images don't align exactly, so features from both the reference image and the warped image remain visible. Additionally, the warped image has large black borders around it, which get included in the blend and unnecessarily darken some areas. The resulting blended images are still recognizable; they just carry these extra visual artifacts.
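The sigmoid blend can be sketched like this (the `sharpness` parameter, the horizontal-seam assumption, and grayscale inputs are my own simplifications):

```python
import numpy as np

def sigmoid_mask(width, threshold, sharpness=0.05):
    """Mask of shape (width,) that falls smoothly from 1 to 0 around `threshold`.

    `threshold` is a column index; `sharpness` controls how abrupt the falloff is.
    """
    cols = np.arange(width)
    return 1.0 / (1.0 + np.exp(sharpness * (cols - threshold)))

def blend(im1, im2_warped, threshold):
    """Weighted blend of two aligned grayscale images along a vertical seam."""
    mask = sigmoid_mask(im1.shape[1], threshold)[None, :]  # broadcast over rows
    return mask * im1 + (1.0 - mask) * im2_warped
```

Because the mask weights both images everywhere (rather than masking out the warped image's black borders first), the darkening described above shows up wherever a border pixel falls under nonzero weight.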