In [48]:
import numpy as np
import matplotlib.pyplot as plt
import skimage.io as io
import skimage as sk
import matplotlib.image as mpimg
In [49]:
from IPython.core.display import HTML
HTML("""
<style>

div.cell { /* Tunes the space between cells */
margin-top:1em;
margin-bottom:1em;
}

div.text_cell_render h1 { /* Main titles bigger, centered */
font-size: 2.2em;
line-height:0.9em;
}

div.text_cell_render h2 { /*  Parts names nearer from text */
margin-bottom: -0.4em;
}


div.text_cell_render { /* Customize text cells */
font-family: 'Georgia';
font-size:1.2em;
line-height:1.4em;
padding-left:3em;
padding-right:3em;
}

.output_png {
    display: table-cell;
    text-align: center;
    vertical-align: middle;
}

</style>

<script>
code_show=true; 
function code_toggle() {
 if (code_show){
 $('div.input').hide();
 } else {
 $('div.input').show();
 }
 code_show = !code_show
} 
$( document ).ready(code_toggle);
</script>
The raw code for this IPython notebook is by default hidden for easier reading.
To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.

""")

#Trebuchet MS
Out[49]:
The raw code for this IPython notebook is by default hidden for easier reading. To toggle on/off the raw code, click here.

Project 5A

The goal of this part is image mosaicing. I take some photographs and create an image mosaic by registering, projectively warping, resampling, and compositing them. Along the way, I learn how to compute homographies and how to use them to warp images.

1. Shoot the Pictures

I took three pictures of my kitchen from the same point of view but with different view directions.

In [50]:
im1=io.imread("im1.jpg")
im2=io.imread("im2.jpg")
im3=io.imread("im3.jpg")

f = plt.figure(figsize=(20,20))
ax1 = f.add_subplot(1,3,1)
ax1.imshow(im1)
plt.axis("off")
ax2 = f.add_subplot(1,3,2)
ax2.imshow(im2)
plt.axis("off")
ax3 = f.add_subplot(1,3,3)
ax3.imshow(im3)
plt.axis("off")

plt.show()

2. Recover Homographies

In this part, we write a function of the form H = computeH(im1_pts,im2_pts) to compute the transformation matrix H. H is a 3×3 matrix with 8 degrees of freedom. The computation is based on:

In [51]:
im=io.imread("diagram.png")
plt.imshow(im)
plt.axis("off")
plt.show()

We set the lower-right entry of H to 1 to remove the arbitrary scaling. We stretch the matrix H into a vector with 8 components and rewrite the mapping as a linear system. By doing so, we turn this problem into a least-squares problem.
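This least-squares setup can be sketched as follows (a minimal version; the project's actual computeH may differ in details such as the solver used):

```python
import numpy as np

def computeH(im1_pts, im2_pts):
    """Least-squares homography mapping im1_pts to im2_pts.

    With the lower-right entry of H fixed to 1, each point pair
    contributes two linear equations in the remaining 8 entries,
    which we solve with np.linalg.lstsq.
    im1_pts, im2_pts: sequences of (x, y) pairs, at least 4 of them.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(im1_pts, im2_pts):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)[0]
    return np.append(h, 1).reshape(3, 3)
```

With exactly four point pairs the system is square and the solution is exact; with more pairs, lstsq gives the least-squares fit.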

3. Warp the images and Rectification

In this part, we rectify an image. We first select four points on the image. We assume these points map to the vertices of a rectangle/square, e.g. (0,0), (0,100), (100,100), (100,0). Then we compute the matrix H from the four point pairs. We write a function of the form imwarped = warpImage(im,H) to warp an image with the transformation H computed in the last step. We do this by inverse warping.
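A minimal inverse-warping sketch (the out_shape argument is an assumption added here so the example is self-contained, and nearest-neighbor sampling keeps it short; the project's warpImage may interpolate differently):

```python
import numpy as np

def warpImage(im, H, out_shape):
    """Inverse warping: for each output pixel, apply H^-1 to find the
    source location, then sample with nearest-neighbor interpolation.
    H maps source (x, y) to output (x, y); out_shape is (rows, cols).
    """
    Hinv = np.linalg.inv(H)
    rows, cols = out_shape
    # Homogeneous coordinates of every output pixel
    ys, xs = np.mgrid[0:rows, 0:cols]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(rows * cols)])
    src = Hinv @ coords
    src_x = np.round(src[0] / src[2]).astype(int)
    src_y = np.round(src[1] / src[2]).astype(int)
    out = np.zeros((rows, cols) + im.shape[2:], dtype=im.dtype)
    # Copy only pixels whose source location falls inside the image
    valid = (src_x >= 0) & (src_x < im.shape[1]) & \
            (src_y >= 0) & (src_y < im.shape[0])
    out[ys.ravel()[valid], xs.ravel()[valid]] = im[src_y[valid], src_x[valid]]
    return out
```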

We show some examples here:

In [52]:
im1 = io.imread("blding.jpg")
im2 = io.imread("rec_blding.png")
f = plt.figure(figsize=(20,20))
ax1 = f.add_subplot(1,2,1)
ax1.imshow(im1)
plt.axis("off")

ax2 = f.add_subplot(1,2,2)
ax2.imshow(im2)
plt.axis("off")
plt.show()
In [53]:
im1 = io.imread("tile.jpg")
im2 = io.imread("rec_tile.png")
f = plt.figure(figsize=(20,20))
ax1 = f.add_subplot(1,2,1)
ax1.imshow(im1)
plt.axis("off")

ax2 = f.add_subplot(1,2,2)
ax2.imshow(im2)
plt.axis("off")
plt.show()

Part 4: Blend images into a mosaic

In this part, we warp the images so they're registered and create an image mosaic. We blend the images with a Laplacian pyramid, using the function we wrote in project 2. We show some examples here:

The original images are:

In [54]:
im1=io.imread("im1.jpg")
im2=io.imread("im2.jpg")
im3=io.imread("im3.jpg")

f = plt.figure(figsize=(20,20))
ax1 = f.add_subplot(1,3,1)
ax1.imshow(im1)
plt.axis("off")
ax2 = f.add_subplot(1,3,2)
ax2.imshow(im2)
plt.axis("off")
ax3 = f.add_subplot(1,3,3)
ax3.imshow(im3)
plt.axis("off")

plt.show()

Blend images into a mosaic (there are some other manually stitched mosaics; please find them in the gallery):

In [55]:
im1=io.imread("house_manual.jpg")
f = plt.figure(figsize=(20,20))
ax1 = f.add_subplot(1,1,1)
ax1.imshow(im1)
plt.axis("off")

plt.show()
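The Laplacian-pyramid blending used above can be sketched as follows (a stand-in for the project 2 function, using scipy's Gaussian filter; the band count and sigma are arbitrary illustrative choices):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blend(im1, im2, mask, levels=4, sigma=2.0):
    """Two-image Laplacian-pyramid blend: combine each Laplacian band
    with a progressively smoothed mask, then add the low-frequency
    residual. Inputs are float arrays of the same 2-D shape; mask in [0,1].
    """
    out = np.zeros_like(im1, dtype=float)
    g1, g2, m = im1.astype(float), im2.astype(float), mask.astype(float)
    for _ in range(levels):
        b1, b2 = gaussian_filter(g1, sigma), gaussian_filter(g2, sigma)
        out += m * (g1 - b1) + (1 - m) * (g2 - b2)  # blended Laplacian band
        g1, g2, m = b1, b2, gaussian_filter(m, sigma)
    return out + m * g1 + (1 - m) * g2              # low-frequency residual
```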

What I learned:

In this project, I learned a lot about image rectification and mosaicing. It helped me recall how to blend two images, which we learned in project 2, and what we learned about image warping in project 3. I had a lot of fun with this project!

Project 5B

In this project, we create a system for automatically stitching images into a mosaic.

Step 1: Detecting corner features in an image

We use the Harris interest point detector to find corners in an image. We directly used the function provided in the sample code. As an example, we show here the feature points extracted by the detector from two images:

In [56]:
im1=io.imread("corner.png")
f = plt.figure(figsize=(20,20))
ax1 = f.add_subplot(1,1,1)
ax1.imshow(im1)
plt.axis("off")

plt.show()
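As a stand-in for the sample code, scikit-image's Harris detector produces the same kind of corner response map and peak list (shown here on a built-in test image, since the project's inputs are not bundled):

```python
import numpy as np
from skimage import data, color
from skimage.feature import corner_harris, peak_local_max

# Harris corner detection on scikit-image's bundled test image
gray = color.rgb2gray(data.astronaut())
response = corner_harris(gray)                      # corner strength map
corners = peak_local_max(response, min_distance=3)  # (row, col) local maxima
```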

Step 2: Implement Adaptive Non-Maximal Suppression

In this part, we implemented Adaptive Non-Maximal Suppression (ANMS) to select a fixed number of interest points from each image. The Harris detector already does some local non-maximum suppression: no more than one feature will exist in any 3×3 window. But we want fine-grained control over the number of points returned, and we want a spatially diverse set of feature points. The ANMS algorithm provides this. It works by computing a suppression radius for each feature (the strongest overall has an infinite radius), which is the smallest distance to another point that is significantly stronger (based on a robustness parameter, which we choose to be 0.9). After the strongest point, every other feature has a finite, non-negative radius at most as big as its distance to the strongest point. We can then sort the features by radius and take the first n when we request a specific number. In doing this, we aren't guaranteed to get the n features with the highest corner strengths; instead, we get the n features most dominant in their regions, which ensures a spatially well-distributed set of strong points.

We used this algorithm to select 500 interest points from the above images.

In [57]:
im1=io.imread("ANMS.png")
f = plt.figure(figsize=(20,20))
ax1 = f.add_subplot(1,1,1)
ax1.imshow(im1)
plt.axis("off")

plt.show()
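The ANMS procedure described above can be sketched as follows (an O(m²) version using pairwise distances, fine for a few thousand candidates):

```python
import numpy as np

def anms(coords, strengths, n, c_robust=0.9):
    """Adaptive non-maximal suppression: each point's suppression
    radius is its distance to the nearest point whose scaled strength
    dominates it; keep the n points with the largest radii.
    coords: (m, 2) array of points, strengths: (m,) array.
    """
    coords = np.asarray(coords, float)
    strengths = np.asarray(strengths, float)
    # Pairwise squared distances between all candidate points
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    # Point j suppresses point i if strengths[i] < c_robust * strengths[j]
    dominated = strengths[:, None] < c_robust * strengths[None, :]
    d2 = np.where(dominated, d2, np.inf)
    radii = d2.min(axis=1)          # global maximum gets radius inf
    keep = np.argsort(-radii)[:n]   # n largest suppression radii
    return coords[keep]
```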

Step 3: Implement Feature Descriptor extraction

In this part, we implemented feature descriptor extraction. We sampled an 8×8 patch from a 40×40 window around each interest point. We also implemented a Bells and Whistles feature: rotation invariance. We first calculated the gradient angle at each interest point and rotated the image about the point by this angle in the opposite direction. Then we extracted an axis-aligned 8×8 patch around the point. By doing this, we roughly got the matched features:

In [58]:
im1=io.imread("feature_matching.png")
f = plt.figure(figsize=(20,20))
ax1 = f.add_subplot(1,1,1)
ax1.imshow(im1)
plt.axis("off")

plt.show()
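A sketch of the plain axis-aligned descriptor (without the rotation-invariance extra; the bias/gain normalization here follows the MOPS paper and is an assumption about the project's exact implementation):

```python
import numpy as np

def extract_descriptor(im, y, x, window=40, patch=8):
    """Sample a 40x40 window around (y, x), subsample it to 8x8, and
    bias/gain-normalize to zero mean and unit variance.
    Assumes the point is far enough from the image border.
    """
    half = window // 2
    win = im[y - half:y + half, x - half:x + half]
    step = window // patch
    desc = win[::step, ::step].astype(float)           # 8x8 subsample
    desc = (desc - desc.mean()) / (desc.std() + 1e-8)  # bias/gain normalize
    return desc.ravel()
```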

Step 4: RANSAC

In this part, we implemented RANSAC to robustly match feature pairs. The algorithm is:

  1. Select four feature pairs at random.
  2. Compute the homography H exactly from those four pairs.
  3. Compute the inliers: pairs where $SSD(p_i', H p_i) < \varepsilon$.
  4. Repeat steps 1-3 and keep the largest set of inliers.
  5. Recompute the least-squares estimate of H on all of the inliers.

By doing this, we got a more accurate set of matched points:

In [59]:
im1=io.imread("RANSAC.png")
f = plt.figure(figsize=(20,20))
ax1 = f.add_subplot(1,1,1)
ax1.imshow(im1)
plt.axis("off")

plt.show()
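The five steps above can be sketched as follows (the exact-homography solver is repeated so the example is self-contained; eps and the iteration count are illustrative values):

```python
import numpy as np

def compute_h(pts1, pts2):
    """Homography (lower-right entry fixed to 1) from point pairs:
    exact for 4 pairs, least-squares for more."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts1, pts2):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)[0]
    return np.append(h, 1).reshape(3, 3)

def ransac_h(pts1, pts2, n_iters=500, eps=1.0, seed=0):
    """RANSAC over homographies, following steps 1-5 above."""
    rng = np.random.default_rng(seed)
    pts1, pts2 = np.asarray(pts1, float), np.asarray(pts2, float)
    p1h = np.hstack([pts1, np.ones((len(pts1), 1))])
    best = np.zeros(len(pts1), bool)
    for _ in range(n_iters):
        idx = rng.choice(len(pts1), 4, replace=False)  # 1. random 4 pairs
        H = compute_h(pts1[idx], pts2[idx])            # 2. exact H
        proj = p1h @ H.T
        proj = proj[:, :2] / proj[:, 2:]
        ssd = ((proj - pts2) ** 2).sum(1)              # 3. inliers by SSD
        inliers = ssd < eps
        if inliers.sum() > best.sum():                 # 4. keep largest set
            best = inliers
    return compute_h(pts1[best], pts2[best]), best     # 5. refit on inliers
```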

Step 5: Mosaic

In this part, we stitch two images together into a mosaic. We first use steps 1-4 to automatically find the matched point pairs in the two images. Then we compute the least-squares estimate of H on all of the matched points and transform one image with H (by inverse warping). Lastly, we stitch the two images together as in project 2 and project 5A. Here we show the result:

In [60]:
im1=io.imread("mosaic.png")
f = plt.figure(figsize=(20,20))
ax1 = f.add_subplot(1,1,1)
ax1.imshow(im1)
plt.axis("off")

plt.show()

Some manually and automatically stitched results are shown here:

In [61]:
im1=io.imread("house_manual.jpg")
im2=io.imread("house.jpg")

f = plt.figure(figsize=(25,25))
ax1 = f.add_subplot(1,2,1)
ax1.imshow(im1)
ax1.set_title("Manually stitched result", fontsize=25)
plt.axis("off")
ax2 = f.add_subplot(1,2,2)
ax2.imshow(im2)
ax2.set_title("Auto stitched result", fontsize=25)
plt.axis("off")

plt.show()
In [62]:
im1=io.imread("clake_manual.jpg")
im2=io.imread("clake.jpg")

f = plt.figure(figsize=(25,25))
ax1 = f.add_subplot(1,2,1)
ax1.imshow(im1)
ax1.set_title("Manually stitched result", fontsize=25)
plt.axis("off")
ax2 = f.add_subplot(1,2,2)
ax2.imshow(im2)
ax2.set_title("Auto stitched result", fontsize=25)
plt.axis("off")

plt.show()
In [63]:
im1=io.imread("city_manual.jpg")
im2=io.imread("city.jpg")

f = plt.figure(figsize=(25,25))
ax1 = f.add_subplot(1,2,1)
ax1.imshow(im1)
ax1.set_title("Manually stitched result", fontsize=25)
plt.axis("off")
ax2 = f.add_subplot(1,2,2)
ax2.imshow(im2)
ax2.set_title("Auto stitched result", fontsize=25)
plt.axis("off")

plt.show()

What I learned:

In this project, I learned how to stitch images into a mosaic. I learned how to automatically detect and match features between images taken from different perspectives. It's really fun to implement an algorithm by following a paper!

In [ ]: