
Project 5
Saurav Shroff

Part A

1. Shoot and digitize pictures

Here is a sample of some photos I took:

wall1

wall-SAVETEMP

2. Recover homographies

Here I used np.linalg.lstsq to approximate a solution to the overdetermined system of equations built from manually marked point correspondences, recovering the eight free parameters of the homography.
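In code, the setup is roughly the following (a sketch of the approach, not my exact implementation; the function name is my own):

```python
import numpy as np

def compute_homography(pts1, pts2):
    """Estimate the homography H mapping pts1 -> pts2 by least squares.

    pts1, pts2: (n, 2) arrays of corresponding (x, y) points, n >= 4.
    The bottom-right entry of H is fixed to 1, leaving 8 unknowns.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts1, pts2):
        # Two linear equations per correspondence (after clearing the
        # projective denominator and fixing h33 = 1).
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(h, 1).reshape(3, 3)  # 8 solved entries + fixed scale
```

With more than four correspondences the system is overdetermined and lstsq gives the least-squares fit, which averages out small clicking errors.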

3. Warp/Rectify the images

This was basically a matter of inverse warping a "bad" image to the known shape of a "good" image: for each pixel of the output, the inverse homography tells us where to sample in the source. Here are some examples:
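An inverse warp along these lines can be sketched as follows (illustrative only, with my own names; nearest-neighbor sampling for brevity, where a real implementation would interpolate):

```python
import numpy as np

def inverse_warp(img, H, out_shape):
    """Warp img by homography H (source -> output) using inverse mapping.

    For every output pixel, apply H^-1 to find where it came from in the
    source, then sample there (nearest neighbor). out_shape: (rows, cols).
    """
    Hinv = np.linalg.inv(H)
    out_h, out_w = out_shape
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(out_h * out_w)])
    src = Hinv @ coords                      # 3 x N homogeneous source coords
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    # Mask out samples that fall outside the source image.
    valid = (0 <= sx) & (sx < img.shape[1]) & (0 <= sy) & (sy < img.shape[0])
    out = np.zeros(out_shape + img.shape[2:], dtype=img.dtype)
    flat = out.reshape(out_h * out_w, -1)
    flat[valid] = img[sy[valid], sx[valid]].reshape(np.count_nonzero(valid), -1)
    return out
```

Pulling pixels from the source (rather than pushing source pixels forward) guarantees every output pixel gets exactly one value, with no holes.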

Original1:

wall1

Rectified1:

wall-blend

Original2:

Screen-Shot-2020-11-25-at-6-21-20-PM

Rectified2:

Screen-Shot-2020-11-25-at-6-19-57-PM

Blend Images into a Mosaic
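For the mosaics below, the overlapping warped images have to be combined; one common way to do this, sketched here with made-up names (not necessarily my exact scheme), is a weighted average with feathered alpha masks:

```python
import numpy as np

def feather_blend(im1, im2, w1, w2):
    """Blend two same-sized images with per-pixel weights (feathering).

    w1, w2: float masks, typically ramping from 1 well inside each
    image's region down to 0 at its border, and 0 where the image has
    no data. Written for 2-D (grayscale) arrays; for color, give the
    weights a trailing axis (w1[..., None]).
    """
    total = w1 + w2
    total = np.where(total == 0, 1.0, total)  # avoid divide-by-zero outside both
    return (im1 * w1 + im2 * w2) / total
```

Because the weights taper toward each image's edge, the seam fades gradually instead of showing a hard line where one image ends.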

Originals:

Screen-Shot-2020-11-25-at-6-21-41-PM

Screen-Shot-2020-11-25-at-6-21-20-PM

Mosaic:

Screen-Shot-2020-11-25-at-6-05-10-PM

Originals:

Screen-Shot-2020-11-25-at-6-55-15-PM

Screen-Shot-2020-11-25-at-6-55-06-PM

Mosaic:

Screen-Shot-2020-11-25-at-6-35-32-PM

Originals:

Screen-Shot-2020-11-25-at-6-57-41-PM

Screen-Shot-2020-11-25-at-6-57-46-PM

Mosaic:

Screen-Shot-2020-11-25-at-6-41-18-PM

I included this last one to show the limits of the homography: our eight parameters can only model a planar scene, so there is only so much information about non-planar surfaces that they can capture. Even then, however, the apparent location of different items is surprisingly accurate (though some of the items further in the background appear SUPER large).

What I learned

The biggest thing I learned was that there is a lot of information inside an image that can be recovered without any overly complex neural networks or deep learning mechanisms.

Part B

1. Detecting corner features in an image

To reduce the total number of corners, I made a slight modification to harris.py: instead of peak_local_max(), I used corner_peaks(); both are skimage.feature functions.

This made it possible for my dual-core MacBook to run ANMS in a few seconds instead of minutes, since far fewer candidate corners survive to the suppression step.
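The substitution amounts to roughly this (a sketch, not the literal harris.py diff; the function name is my own):

```python
import numpy as np
from skimage.feature import corner_harris, corner_peaks

def get_harris_corners(im, min_distance=10):
    """Harris response map plus a pruned set of corner coordinates.

    corner_peaks() enforces a minimum spacing between reported corners,
    which keeps the candidate count (and hence ANMS runtime) manageable
    compared with peak_local_max() on the raw response.
    """
    h = corner_harris(im)
    coords = corner_peaks(h, min_distance=min_distance)  # (n, 2) (row, col)
    return h, coords
```

The min_distance knob is the main lever: a larger value means fewer, more spread-out candidates before ANMS ever runs.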

Here's an image with Harris corners overlaid:

Harris-small

Here is the same image with ANMS corners overlaid (top 50 instead of top 500):

anms-small

Note that this mechanism could easily be applied to more points in order to recover the top 500 ANMS points instead of the top 50 (my function retrieves the top N points, where N is a parameter).
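My ANMS step works along these general lines (a sketch with my own names; the 0.9 robustness constant is the conventional choice and may differ from my actual code): each corner's suppression radius is its distance to the nearest significantly stronger corner, and the N corners with the largest radii are kept.

```python
import numpy as np

def anms(coords, strengths, n, c_robust=0.9):
    """Adaptive non-maximal suppression.

    coords: (m, 2) corner coordinates; strengths: (m,) Harris responses.
    Keeps the n corners whose nearest significantly stronger neighbor is
    farthest away, giving a spatially well-distributed set.
    """
    m = len(coords)
    radii = np.full(m, np.inf)  # the global maximum keeps radius = inf
    for i in range(m):
        # Corners j that dominate corner i: c_robust * f_j > f_i.
        stronger = c_robust * strengths > strengths[i]
        if np.any(stronger):
            d = np.linalg.norm(coords[stronger] - coords[i], axis=1)
            radii[i] = d.min()
    keep = np.argsort(-radii)[:n]  # largest suppression radius first
    return coords[keep]
```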

2. Extracting a Feature Descriptor for each feature point

This was pretty simple given that none of the points were near the edge of the image (the Harris corner detector ignores the edges, so every descriptor window fits inside the image).

Look at my code README for a description of the implementation.
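For reference, the standard descriptor for this kind of pipeline can be sketched as follows: subsample a window around each corner and bias/gain-normalize it. The 40x40 window and 8x8 descriptor here are the conventional parameters, which may differ from mine; see the README for the real details.

```python
import numpy as np

def extract_descriptor(im, row, col, window=40, size=8):
    """Axis-aligned feature descriptor around (row, col).

    Subsamples a window x window patch down to size x size, then
    normalizes to zero mean / unit variance so matching is invariant to
    brightness and contrast. Assumes the corner is at least window // 2
    pixels from every image edge (true here, per the text above).
    """
    half = window // 2
    patch = im[row - half:row + half, col - half:col + half]
    step = window // size
    small = patch[::step, ::step][:size, :size]  # crude downsample (no blur)
    return (small - small.mean()) / (small.std() + 1e-8)
```

A real implementation would blur before subsampling to avoid aliasing; the normalization is the important part for the matching step below.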

3. Matching these feature descriptors between two images

Here is an example of matched points across two images. This was done with a very low (strict) threshold (0.15) to show that the points are indeed matching with corresponding parts of the image. In practice this is done with a higher threshold to get more matching points and therefore a more accurate homography.
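The matching rule can be sketched as a ratio test on nearest-neighbor distances (names are mine, and the exact distance metric may differ from my code): a descriptor matches only when its best match is much closer than its second-best, so a low threshold keeps only unambiguous pairs.

```python
import numpy as np

def match_descriptors(desc1, desc2, threshold=0.15):
    """Match flattened descriptors by the ratio test.

    desc1: (m, d), desc2: (n, d) with n >= 2. A pair (i, j) is kept
    only when dist(best) / dist(second best) < threshold, so a lower
    threshold yields fewer but more reliable matches.
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.sum((desc2 - d) ** 2, axis=1)  # squared distances to all
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] / (dists[second] + 1e-12) < threshold:
            matches.append((i, best))
    return matches
```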

Screen-Shot-2020-11-25-at-9-29-31-PM

Screen-Shot-2020-11-25-at-9-30-13-PM

RANSAC

Sad times :( I didn't manage to get RANSAC working.
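For reference, the standard 4-point RANSAC loop I was aiming for looks roughly like this (a textbook sketch, not working project code; the inner fit is the same least-squares solve via np.linalg.lstsq used in Part A):

```python
import numpy as np

def ransac_homography(pts1, pts2, n_iters=1000, eps=2.0, seed=0):
    """Robust homography fit: repeatedly fit to 4 random correspondences,
    count inliers within eps pixels, keep the largest inlier set, and
    refit to all of its inliers."""
    rng = np.random.default_rng(seed)

    def fit(p1, p2):
        A, b = [], []
        for (x, y), (xp, yp) in zip(p1, p2):
            A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
            A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
            b.extend([xp, yp])
        h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                                rcond=None)
        return np.append(h, 1).reshape(3, 3)

    def project(H, pts):
        p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
        return p[:, :2] / p[:, 2:3]

    best_inliers = np.zeros(len(pts1), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(pts1), 4, replace=False)
        H = fit(pts1[idx], pts2[idx])
        err = np.linalg.norm(project(H, pts1) - pts2, axis=1)
        inliers = err < eps
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit(pts1[best_inliers], pts2[best_inliers]), best_inliers
```

The point is exactly the one in "What I learned" below: a handful of bad automatic matches can wreck a least-squares fit, and RANSAC filters them out before fitting.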

What I learned

Despite not getting the RANSAC algorithm down, the coolest thing I learned was that a small number of human-marked points can sometimes outperform algorithms like our feature matching algorithm (hence the need for RANSAC), which speaks to how fault-tolerant our brains are when it comes to image processing.
