In this project, I took the digitized versions of Prokudin-Gorskii's glass plate negatives capturing the Russian Empire of 1907, each plate containing three exposures for the red, green, and blue color channels, and used image processing techniques to align and overlay the channels, producing the corresponding color image.
After separating each glass plate negative heightwise into its three color channels, the task was to find the pixel offsets that most closely aligned the three images with one another. My code first aligns the green channel to the blue channel, and then aligns the red channel to the blue channel. For images narrower than 400px, I exhaustively search offsets of up to 15 pixels in each of the four cardinal directions, shifting the green (or red) channel by each candidate offset and measuring its similarity to the blue channel. As the similarity metric I use the sum of squared differences (SSD) between the images, represented as 2-dimensional arrays of doubles.

For images wider than 400px, I recursively shrink the image by half until it is narrower than 400px, compute the offset at that coarse scale, multiply it by 2 on the way back up to the full-resolution image, and refine it with a smaller search (a 2-pixel offset).

I also noticed that the black borders on many of the separated channel images interfered with the efficacy of the SSD metric, so I added a step before alignment that crops the black borders surrounding each image. This drastically improved the algorithm's ability to determine the most appropriate (x, y) shift for each image. Once these offsets were calculated, it was simply a matter of stacking the two shifted images with the blue channel image and saving the resulting output to a unique filename for each image.
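The pipeline described above can be sketched roughly as follows in NumPy. This is a minimal illustration, not the original implementation: the helper names (`crop_borders`, `ssd`, `align`, `pyramid_align`) and the exact cropping fraction are my own assumptions.

```python
import numpy as np


def crop_borders(channel, frac=0.05):
    """Trim a fixed fraction from every side to discard the dark plate
    borders that would otherwise skew the SSD score. The 5% default is
    an illustrative assumption, not the value used for every image."""
    h, w = channel.shape
    dh, dw = int(h * frac), int(w * frac)
    return channel[dh:h - dh, dw:w - dw]


def ssd(a, b):
    """Sum of squared differences between two equally sized images."""
    return np.sum((a - b) ** 2)


def align(moving, fixed, radius=15):
    """Exhaustively try every shift in [-radius, radius] along both axes,
    returning the (dx, dy) shift that minimizes SSD against `fixed`."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            score = ssd(shifted, fixed)
            if score < best:
                best, best_shift = score, (dx, dy)
    return best_shift


def pyramid_align(moving, fixed, max_width=400):
    """Recursively halve the image until it is narrower than max_width,
    align at that coarse scale, then double the offset at each level and
    refine with a small (+/-2 px) search at the finer resolution."""
    if moving.shape[1] <= max_width:
        return align(moving, fixed, radius=15)
    dx, dy = pyramid_align(moving[::2, ::2], fixed[::2, ::2], max_width)
    dx, dy = 2 * dx, 2 * dy
    rolled = np.roll(moving, (dy, dx), axis=(0, 1))
    fdx, fdy = align(rolled, fixed, radius=2)
    return dx + fdx, dy + fdy
```

In practice, each channel would be passed through `crop_borders` first, then the green and red channels aligned to blue with `pyramid_align`, and the three shifted channels stacked with `np.dstack` to form the final color image.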
Positive offsets are to the right or up; negative offsets are to the left or down.
G[a,b] = green channel offset where a = horizontal offset, b = vertical offset
R[a,b] = red channel offset where a = horizontal offset, b = vertical offset
X[a,b] = x-direction crop where a = percentage cropped from left side, b = percentage cropped from right side
Y[a,b] = y-direction crop where a = percentage cropped from top, b = percentage cropped from bottom
Image 1:
G[2, 5]
R[3, 12]
X[5, 5]
Y[3, 3]

Image 2:
G[16, 64]
R[16, 128]
X[5, 5]
Y[2, 2]

Image 3:
G[16, 48]
R[16, 96]
X[5, 5]
Y[2, 2]

Image 4:
G[16, 48]
R[16, 112]
X[5, 10]
Y[2, 2]

Image 5:
G[16, 80]
R[16, 176]
X[5, 10]
Y[3, 3]

Image 6:
G[2, -3]
R[2, 3]
X[5, 5]
Y[3, 3]

Image 7:
G[32, 48]
R[32, 112]
X[6, 6]
Y[2, 2]

Image 8:
G[32, 80]
R[32, 176]
X[8, 4]
Y[3, 3]

Image 9:
G[16, 48]
R[16, 112]
X[6, 6]
Y[2, 2]

Image 10:
G[2, 3]
R[3, 6]
X[4, 4]
Y[3, 3]

Image 11:
G[0, 48]
R[32, 96]
X[5, 5]
Y[3, 3]

Image 12:
G[0, 64]
R[16, 128]
X[5, 5]
Y[3, 3]

Image 13:
G[0, 48]
R[-16, 96]
X[5, 7]
Y[2, 2]

Image 14:
G[32, 48]
R[-224, 80]
X[5, 5]
Y[2, 4]

Image 15:
G[-1, 1]
R[-1, 13]
X[5, 5]
Y[2.5, 2.5]

Image 16:
G[0, 4]
R[0, 8]
X[4, 8]
Y[2, 2]

Image 17:
G[1, 7]
R[3, 15]
X[4, 4]
Y[3, 3]

Image 18:
G[1, 2]
R[0, 4]
X[5, 5]
Y[2, 2]