The goal of this project is to restore images from the Prokudin-Gorskii collection to their original colored versions. For each scene we are given three exposures of the same image taken through red, green, and blue filters, stacked into a single plate.
Each plate is first split into three equal parts along its height to obtain the separate blue, green, and red channel images. The green and red channels are then independently aligned to the blue channel, which serves as the reference. Alignment exhaustively searches a range of displacements along both the x and y axes (I use [-15, 15]) and picks the displacement minimizing the SSD between the channels. For the smaller images, the borders were throwing off the SSD metric, so I crop the interior of each channel before computing the score. For the high-resolution images, a pyramid rescale algorithm with a scale factor of 2 and 5 layers gives consistent and fast (under 1 minute) results. Caching and cropping the image were the main optimizations that sped it up.
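The pipeline above can be sketched as follows. This is a minimal illustration assuming grayscale NumPy arrays; the function names (`align_exhaustive`, `align_pyramid`) and the 10% crop fraction are my own choices for the sketch, not necessarily those of the project's actual code:

```python
import numpy as np

def crop_interior(img, frac=0.1):
    # Ignore a border fraction so plate edges and scan artifacts
    # do not dominate the SSD metric.
    h, w = img.shape
    dh, dw = int(h * frac), int(w * frac)
    return img[dh:h - dh, dw:w - dw]

def ssd(a, b):
    # Sum of squared differences between two equally sized channels.
    return np.sum((a - b) ** 2)

def align_exhaustive(channel, ref, radius=15):
    # Search displacements in [-radius, radius] on both axes and keep
    # the one minimizing SSD over the cropped interiors.
    best, best_score = (0, 0), np.inf
    ref_c = crop_interior(ref)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(channel, (dy, dx), axis=(0, 1))
            score = ssd(crop_interior(shifted), ref_c)
            if score < best_score:
                best, best_score = (dy, dx), score
    return best

def align_pyramid(channel, ref, levels=5, radius=15):
    # Coarse-to-fine search: downscale by 2 per level, align at the
    # coarsest level, then double the estimate and refine it with a
    # small local search at each finer level.
    if levels == 1 or min(channel.shape) < 2 * radius:
        return align_exhaustive(channel, ref, radius)
    coarse = align_pyramid(channel[::2, ::2], ref[::2, ::2], levels - 1, radius)
    dy, dx = 2 * coarse[0], 2 * coarse[1]
    shifted = np.roll(channel, (dy, dx), axis=(0, 1))
    ddy, ddx = align_exhaustive(shifted, ref, radius=2)
    return (dy + ddy, dx + ddx)
```

The pyramid keeps the search cheap: the full [-15, 15] window is only scanned at the coarsest level, and every finer level refines the doubled estimate within a small radius.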
Below are the results of the algorithm on all the given images.
GtoB[5, 2] RtoB[12, 3]
GtoB[-3, 2] RtoB[3, 2]
GtoB[3, 3] RtoB[6, 3]
GtoB[33, 2] RtoB[98, 5]
GtoB[48, 24] RtoB[-193, -33]
GtoB[59, 17] RtoB[124, 15]
GtoB[41, 17] RtoB[90, 23]
GtoB[51, 8] RtoB[113, 11]
GtoB[81, 9] RtoB[182, 13]
GtoB[50, 27] RtoB[108, 37]
GtoB[78, 29] RtoB[175, 37]
GtoB[50, 15] RtoB[110, 13]
GtoB[41, 6] RtoB[85, 33]
GtoB[53, -1] RtoB[105, -12]
GtoB[-15, 10] RtoB[11, 19]
GtoB[62, 15] RtoB[137, 16]
For this image, the red channel is badly misaligned; in my result it is shifted toward the top left. The channels differ considerably in brightness, which causes the alignment algorithm to pick the wrong optimal displacement when the three channels are not consistent. Adjusting the contrast of the channels before alignment might make this image work.
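One simple way to reduce the brightness mismatch, as suggested above, is to normalize each channel to zero mean and unit variance before computing the SSD. This is a hedged sketch of that idea, not part of the project's implemented pipeline:

```python
import numpy as np

def normalize(channel):
    # Zero-mean, unit-variance normalization so a global brightness or
    # contrast offset between channels does not dominate the SSD score.
    # The small epsilon guards against division by zero on flat regions.
    c = channel.astype(np.float64)
    return (c - c.mean()) / (c.std() + 1e-8)
```

After this step, SSD compares the channels' spatial structure rather than their absolute intensities, which is what actually matters for alignment.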
For the Emir image, I use the Roberts edge detection algorithm: instead of aligning on raw channel intensity values, I align on edge maps, which produces a much better image. Other edge detectors such as Sobel do not work as well as Roberts here.
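The Roberts cross operator convolves the image with two 2x2 diagonal difference kernels and takes the gradient magnitude. A minimal sketch, assuming a grayscale NumPy array (the slicing implementation below is my own; it is equivalent to valid-mode convolution with the two kernels):

```python
import numpy as np

def roberts_edges(channel):
    # Roberts cross operator: responses of the two 2x2 diagonal kernels
    #   [[1, 0], [0, -1]]  and  [[0, 1], [-1, 0]]
    # combined into a gradient magnitude. Aligning on edge magnitude
    # instead of raw intensity is robust to per-channel brightness
    # differences, which is what defeats plain SSD on the Emir image.
    c = channel.astype(np.float64)
    gx = c[:-1, :-1] - c[1:, 1:]   # main-diagonal difference
    gy = c[:-1, 1:] - c[1:, :-1]   # anti-diagonal difference
    return np.sqrt(gx ** 2 + gy ** 2)
```

The same displacement search can then be run on `roberts_edges(channel)` and `roberts_edges(ref)` in place of the raw channels.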