This project asked us to create colored images from three glass-plate black-and-white photos taken through red, green, and blue filters. The goal was to produce as few visual artifacts as possible, which boiled down to aligning the three images as closely as possible before layering them on top of one another.
I used the template code provided, which meant that most of my work went into aligning the images. For this, I did an exhaustive alignment of my green photo onto my blue, and then my red onto my blue: I shifted each photo in the x and y directions by some number of pixels and used an L2 norm (sum of squared differences) to score how well the two images matched.
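The exhaustive search described above could look roughly like the following sketch (NumPy assumed; the function and variable names are my own, and `np.roll` wraps pixels around the edges, whereas a real implementation might crop borders instead):

```python
import numpy as np

def l2_score(a, b):
    # Sum of squared differences; lower means a better match.
    return np.sum((a - b) ** 2)

def exhaustive_align(moving, reference, radius=15):
    # Try every (dy, dx) shift within `radius` pixels in each direction
    # and keep the one whose L2 score against the reference is smallest.
    best_shift, best_score = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            score = l2_score(shifted, reference)
            if score < best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift
```

Applying the returned shift to the moving channel (e.g. with `np.roll`) lines it up with the reference channel before the channels are stacked.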
For larger images, I used an image pyramid of height 4 for all but one of my images. At each level, I downsampled the images by a factor of 2, recursively ran my image pyramid alignment procedure, shifted the images according to the recursive call, and then refined with my naive (non-pyramid) procedure. For performance reasons, I also reduced the search radius at each level: at the bottom of the pyramid I searched 20 pixels in each direction, at the top 8 pixels, linearly interpolating the search radius for the levels in between.
I ran into an issue with the photo of Emir: I believe that at lower sampling resolutions, the wide range of brightness across the three channels confused the L2 heuristic I was using (I did not normalize across the channels). I fixed this by reducing the depth of the image pyramid to preserve more detail, at the cost of performance.