This portion is straightforward. The hardest part was not implementing the code, but rather the considerations about whether to include the corners, which points to select to define a person's face, and some general image-alignment and image-selection details. However, none of these particularly warrants an incredibly in-depth explanation (as they are mostly just sheer trial and error). I did ultimately decide that the corners were necessary: they made working with the images much easier and allowed me to morph parts of the background for a more realistic morph. I've included the original photos (one of me, one of George from the spec), as well as the correspondence points, labeled in order, on George's face.
In conclusion, I felt that certain characteristics dominated across the various datasets included, as well as in what we as humans fixate on. These are reasonably self-explanatory. The main trouble I had was getting different hair styles to really work together, but I think that simply isn't easily fixed through the methods employed in this project.
The midway face is realistically just a morph with 0.5 passed in for both the warp and dissolve fractions. The computation of the average shape and the corresponding triangulation structure is straightforward and provided through the Delaunay function. I've included below the warped version of my face, the warped version of George's face, and the midway face.
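A minimal sketch of the midway-shape computation described above, assuming `pts_a` and `pts_b` are (N, 2) correspondence-point arrays over images of identical size; the `affine_matrix` helper name is my own (not from the original code) and shows how the per-triangle affine map for the warp could be derived.

```python
import numpy as np
from scipy.spatial import Delaunay

def midway_shape(pts_a, pts_b, warp_frac=0.5):
    """Weighted average of the two point sets; 0.5 gives the midway shape."""
    return (1 - warp_frac) * pts_a + warp_frac * pts_b

def affine_matrix(src_tri, dst_tri):
    """2x3 affine map A such that A @ [x, y, 1] sends src_tri vertices to dst_tri."""
    src = np.hstack([src_tri, np.ones((3, 1))])  # homogeneous coordinates
    X, *_ = np.linalg.lstsq(src, dst_tri, rcond=None)
    return X.T

# Toy correspondence points (four corners plus one interior point).
pts_a = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5]], float)
pts_b = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [6, 4]], float)

# Triangulate the midway shape once, and reuse the same simplices for
# both source images so the triangles stay in correspondence.
mid = midway_shape(pts_a, pts_b)
tri = Delaunay(mid)
```

Triangulating the midway shape (rather than either endpoint) keeps the triangle topology consistent for both warps, which is why the same `tri.simplices` can index into `pts_a` and `pts_b`.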
As you can see, there is some reasonably horrifying distortion on both of our faces, but they combine reasonably well. I believe this is due to the inherently different physical structure of our faces: some features that are more prominent on George's face are less noticeable on mine, and vice versa.
The morph sequence becomes a simple task after implementing all the code for the midway face. The only new decision is how to implement the transition. This ended up being easily achievable by iterating through a series of timesteps, chosen so that I have enough frames to create a gif. I ultimately chose 15 frames and mashed them into the gif you see below.
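The timestep loop above can be sketched roughly as follows, assuming a `morph` function that takes both images, both point sets, and the warp/dissolve fractions (the parameter names here are illustrative, not taken from the original code):

```python
import numpy as np

def morph_sequence(im_a, im_b, pts_a, pts_b, morph, n_frames=15):
    """Generate n_frames morphed images, stepping t evenly from 0 to 1."""
    frames = []
    for t in np.linspace(0, 1, n_frames):
        # The same fraction t drives both the shape warp and the cross-dissolve.
        frames.append(morph(im_a, im_b, pts_a, pts_b,
                            warp_frac=t, dissolve_frac=t))
    return frames
```

At `t = 0` the output is just the first image and at `t = 1` the second, so the endpoints of the gif match the originals exactly.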
There was an interesting question posed as to whether I should have a different warp and dissolve fraction, but after experimentation I couldn't find any systematic way to calculate them separately, and keeping them the same ended up working just fine.
I chose the FEI dataset, and specifically retrieved the front-facing, smiling images and their annotated shapes. Computing the average shape merely amounted to averaging each of the 46 annotated points across the dataset. To enhance the visual quality of my next steps, however, I added four additional corner points that made the resulting images much more interesting to look at (and not appear super cropped or edged).
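A minimal sketch of the average-shape computation with the added corner points, assuming `shapes` is a `(num_faces, 46, 2)` array of annotated (x, y) points and `h`, `w` are the common image dimensions (the function name is mine, for illustration):

```python
import numpy as np

def mean_shape_with_corners(shapes, h, w):
    """Average each annotated point across faces, then append the 4 image corners."""
    avg = shapes.mean(axis=0)                       # (46, 2) mean shape
    corners = np.array([[0, 0], [w - 1, 0],
                        [0, h - 1], [w - 1, h - 1]], float)
    return np.vstack([avg, corners])                # (50, 2) shape
```

Pinning the corners means the triangulation covers the whole frame, so the background warps along with the face instead of leaving untextured gaps at the edges.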
Here are some examples of faces from the dataset morphed into the mean face shape:
Honestly, these images kind of scare me.
Included below is the mean face. It's quite blurry, as I expected, but as one can see, the averaging has also removed deviations from the norm and generated a pretty symmetrical face with no particular blemishes.
I've also included my face morphed to the shape of the average, and the average face morphed to the shape of my face. This is also horrifying, presumably for similar reasons as why my morph with George was horrifying.
It would appear to me that the facial characteristics don't even align between these two images, and they're honestly more caricature-like than my actual generated caricatures for the next part.
Some pretty horrifying images can be created. Here is one (admittedly, less terrifying than the images in the prior part!).
I've created an image here that stretches my face out vertically. The method comes from extrapolating along the difference between my face shape and the average shape found earlier, and then emphasizing or de-emphasizing that difference.
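The extrapolation described above can be sketched in one line, assuming `my_pts` and `mean_pts` are matching (N, 2) point arrays; `alpha > 1` exaggerates my deviations from the mean, while `0 < alpha < 1` softens them (the names are illustrative):

```python
import numpy as np

def caricature_shape(my_pts, mean_pts, alpha=1.5):
    """Extrapolate past my shape along the direction away from the mean shape."""
    return mean_pts + alpha * (my_pts - mean_pts)
```

With `alpha = 1` this returns my own shape unchanged, and `alpha = 0` collapses to the mean shape, so the parameter interpolates smoothly between "average face" and "caricature of me."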