In this project, we first defined correspondence points for two images and used these points to construct a Delaunay triangulation of their mean. Using this mean triangulation to bring both images into a common shape, we created a midway face morph by warping each image into that shape and cross-dissolving. Building on this idea, we then generated a series of equally spaced morphs between the two images and combined them into an animated .gif showing one face transforming into the other. Next, we downloaded a set of faces with pre-annotated correspondence points and used them to compute a mean face for the male faces in the set. Using this mean face, we warped a chosen face image into the mean face's shape and vice versa. We then took the difference between our chosen face's shape and the mean shape and used it to create a caricatured version of our chosen face by extrapolating its shape along this difference. Lastly, using two different male population averages, we altered the apparent ethnicity of our chosen face by applying a series of vector transformations to its shape and appearance data.
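The caricature step above amounts to extrapolating past the population mean: writing the new shape as mean + alpha * (face - mean), alpha = 1 recovers the original face and alpha > 1 exaggerates its deviation from the mean. A minimal sketch of this idea (the landmark coordinates below are made up for illustration; a real run would use the full set of annotated keypoints):

```python
import numpy as np

def extrapolate_shape(face_pts, mean_pts, alpha):
    """Extrapolate a face's landmarks away from the population mean.

    alpha = 0 gives the mean shape, alpha = 1 recovers the original
    face, and alpha > 1 exaggerates the deviation, producing a
    caricatured shape.
    """
    return mean_pts + alpha * (face_pts - mean_pts)

# Hypothetical landmark coordinates (x, y); real use has dozens of points.
face = np.array([[48.0, 52.0], [60.0, 80.0]])
mean = np.array([[50.0, 50.0], [55.0, 75.0]])

caricature = extrapolate_shape(face, mean, alpha=1.5)
```

The caricatured shape is then used exactly like any other target shape: triangulate it and warp the original face image into it.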
In this section we define correspondence points on our two images, one of Daniel Craig and one of Ryan Reynolds, making certain that the corresponding points are labeled in the same order in both images. We also label the corners of both images to help reduce ghosting when the images are morphed. We then compute a midway morph, yielding an image that looks like a hybrid of Daniel Craig and Ryan Reynolds. To do this, we define a Delaunay triangulation of the mean of the two images' correspondence points, along with a function that computes the affine transformation matrix between corresponding triangles. Using this single triangulation as a universal shape for both images, we loop over all pairs of triangles in the midway image and the source images, use the affine transformation to warp each pixel in the midway image's triangles back to its source image's pixels, and combine the sampled colors with a cross-dissolve.
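The core of this pipeline can be sketched as follows. The correspondence points below are stand-ins (a real run would use dozens of hand-clicked landmarks plus the image corners), and `affine_matrix` is a hypothetical name for the per-triangle solver described above; the affine matrix is solved for by requiring that it map the three homogeneous vertices of one triangle onto the other's:

```python
import numpy as np
from scipy.spatial import Delaunay

def affine_matrix(tri_src, tri_dst):
    """Return the 3x3 matrix M with M @ [x, y, 1] mapping each vertex
    of tri_src (3x2 array) onto the matching vertex of tri_dst."""
    src = np.vstack([tri_src.T, np.ones(3)])  # columns are homogeneous vertices
    dst = np.vstack([tri_dst.T, np.ones(3)])
    return dst @ np.linalg.inv(src)

# Hypothetical correspondence points for the two faces, labeled in the
# same order in both images (corners included to reduce ghosting).
pts_a = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 40]], float)
pts_b = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [55, 60]], float)

mean_pts = (pts_a + pts_b) / 2.0   # the midway shape
tri = Delaunay(mean_pts)           # one triangulation shared by both images

# For each triangle, compute the inverse warp from the midway shape back
# to image A (and likewise to image B).
warps_to_a = [affine_matrix(mean_pts[s], pts_a[s]) for s in tri.simplices]
```

Inside the per-triangle loop, the remaining work is to rasterize each midway triangle (e.g. with `skimage.draw.polygon`), push its pixel coordinates through the inverse-warp matrix to sample both source images (bilinear interpolation avoids aliasing), and average the two sampled colors 50/50 for the midway cross-dissolve.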