The goal of this project was to take the photographs of Prokudin-Gorskii, captured as three separate exposures through blue, green, and red filters, and combine them into a single colorized image. Simply stacking the three channels on top of each other would not work, since they are not perfectly aligned.

For a simple naive approach, I first cropped a quarter off each side of the pictures to remove the noisy edges. I then iterated over all possible (x, y) displacements within an absolute range equal to a fifth of the image's width/height, and after looping through all of them, chose the displacement with the smallest sum of squared differences (SSD). I did this for the red and green channels while keeping the blue channel fixed as the base. An initial problem was producing any output that differed from the input images at all; the issue turned out to be a bug in my SSD function.
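The exhaustive search described above can be sketched in NumPy roughly as follows. This is a minimal sketch, assuming grayscale float arrays; the fixed `search` radius stands in for the fifth-of-dimension range described above, and the function names are illustrative rather than taken from my actual code:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equally sized images."""
    return np.sum((a.astype(float) - b.astype(float)) ** 2)

def align_naive(channel, base, search=15):
    """Exhaustively try every (dx, dy) shift within `search` pixels and
    return the shift with the lowest SSD against the base channel.
    Only the middle portion of each image is compared, to keep the
    noisy plate borders out of the score."""
    h, w = base.shape
    crop = (slice(h // 4, 3 * h // 4), slice(w // 4, 3 * w // 4))
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            candidate = np.roll(np.roll(channel, dy, axis=0), dx, axis=1)
            score = ssd(candidate[crop], base[crop])
            if score < best:
                best, best_shift = score, (dx, dy)
    return best_shift
```

The winning (dx, dy) is then applied to the red or green channel before stacking it onto the blue base.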

For the pyramid approach, I essentially reused the naive approach in a recursive manner. I cropped the images first and set the initial displacement values to 0. Depending on the number of levels I want (currently 5), I downscale the images exponentially by powers of 2 (2^level). At each recursive level, I search a displacement range within a distance of 2^level of the current displacement values, keeping the values with the lowest SSD, and with each step back up the recursion I scale the displacement values up by 2. This continues until the level count reaches 0, at which point the displacement values are returned. My biggest problem was similar to the one in my naive implementation: getting any noticeable difference in the output. I ended up putting a bunch of print statements where I was calculating offsets and noticed that I was accidentally resetting them to zero in every recursive step. After overcoming this struggle, I noticed that not every image was perfectly aligned (the same was true of the naive implementation). To address this, I decided to only compare the middle portions of the images, as Professor Efros had recommended.
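The recursive coarse-to-fine scheme can be sketched like this. It is a sketch under stated assumptions, not my exact implementation: the level count, the local search radius, and the pixel-skipping downscale are illustrative choices, and the helper names are my own:

```python
import numpy as np

def _ssd(a, b):
    """Sum of squared differences between two equally sized images."""
    return np.sum((a - b) ** 2)

def _search(channel, base, center, radius):
    """Brute-force SSD search in a (2*radius+1)^2 window around `center`,
    comparing only the middle portion of each image."""
    h, w = base.shape
    crop = (slice(h // 4, 3 * h // 4), slice(w // 4, 3 * w // 4))
    (cx, cy), best, best_shift = center, np.inf, center
    for dy in range(cy - radius, cy + radius + 1):
        for dx in range(cx - radius, cx + radius + 1):
            candidate = np.roll(np.roll(channel, dy, axis=0), dx, axis=1)
            score = _ssd(candidate[crop], base[crop])
            if score < best:
                best, best_shift = score, (dx, dy)
    return best_shift

def align_pyramid(channel, base, level=4):
    """Recursive coarse-to-fine alignment: at the coarsest level, search
    around (0, 0); at each finer level, double the coarse estimate and
    refine it with a small local search."""
    if level == 0:
        return _search(channel, base, (0, 0), 2)
    # Recurse on half-resolution images, then scale the estimate up by 2.
    coarse = align_pyramid(channel[::2, ::2], base[::2, ::2], level - 1)
    return _search(channel, base, (2 * coarse[0], 2 * coarse[1]), 2)
```

Because each level only refines the previous estimate within a small window, the total work stays far below the naive full-range search even on large .tiff scans.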

Results on example images

Offsets are listed in the order: green x offset, green y offset, red x offset, red y offset

offset: 24 48 -472 352
offset: 16 60 12 124
offset: 2 34 4 98
offset: 2 5 3 12
offset: 16 42 22 90
offset: 6 54 10 116
offset: 8 84 12 180
offset: 2 -3 2 3
offset: 24 52 36 108
offset: 28 78 36 174
offset: 12 52 10 110
offset: 2 3 3 6
offset: 4 42 32 86
offset: -2 52 -12 104

Examples of my own choosing

offset: 0 30 -22 64
offset: 18 10 36 36
offset: 18 18 28 46


The only image my algorithm could not align was emir.tiff. This is likely because of the brightness differences between the red, green, and blue plates: when the channels record very different intensities for the same scene, the SSD metric is no longer an effective measure of alignment.
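A tiny numeric example of why SSD struggles in this situation: even a perfectly aligned pair of channels with different exposure produces a large SSD, so the metric's minimum is no longer guaranteed to sit at the true alignment. This uses synthetic data, not the actual emir plates:

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.random((64, 64))
dimmer = 0.5 * scene  # same scene through a filter passing half the light

# Perfect alignment, yet the SSD is far from zero: the metric mixes
# brightness mismatch in with geometric mismatch.
ssd_aligned = float(np.sum((scene - dimmer) ** 2))
print(ssd_aligned > 0.0)
```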