In creating the face morph, I hand-picked 50 corresponding points on each source image. The points were placed on the boundaries of important features: 4 around each eye, 7 around the nose, 4 more around each eyebrow, 5 around each lip, some on each side of the hair, and the rest around the face. By encapsulating the important features, the morphing process appears more natural. For example, if I did not have points all around the eye, my eye would fade out and Steven's eye would fade in a few pixels lower. But with the warping, the eyes match up and move down smoothly.
Also, to help produce a smooth morph, a "Mid-way Face" is computed for each frame and used as the target image. Pixel intensities are then sampled from both source images to color in the Mid-way Face.
For each frame, the weighted average of the two sources' corresponding control points is calculated, based on the frame's position along $t \in [0, 1]$. Then, a Delaunay triangulation is run on these averaged control points to produce a triangle mesh. From those triangles, the affine transforms going from each frame triangle to its corresponding triangle in each source can be calculated. Note that the transform goes from the frame to each source, not the other way around, as this prevents holes from appearing in the frame: there is no guarantee that a transform from the source to the frame would cover every frame pixel, i.e., that all pixels in the frame would end up being colored. Using the transforms, the pixels in each frame triangle are mapped to their corresponding locations in each source image. The pixel intensities are sampled from each source image and weighted-averaged to become the pixel in the frame. Also note that a frame pixel may not map to an integer location, i.e., an exact pixel, in the source images, so each sample performs bilinear interpolation on the four pixels nearest the sample location.
Source 1 | Source 2 | Mid-way Face |
---|---|---|
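The per-triangle math described above can be sketched as follows. This is a minimal illustration assuming NumPy; the function names (`affine_from_triangles`, `bilinear_sample`) and the H×W×3 color-image layout are my own assumptions, not the write-up's actual code.

```python
import numpy as np

def affine_from_triangles(tri_src, tri_dst):
    """Solve for the 3x3 affine matrix A with A @ [x, y, 1]^T mapping the
    vertices of tri_src (3x2 array) onto the vertices of tri_dst (3x2)."""
    src = np.hstack([tri_src, np.ones((3, 1))])  # rows are [x, y, 1]
    dst = np.hstack([tri_dst, np.ones((3, 1))])
    # src @ A.T = dst  ->  A.T = src^{-1} @ dst
    return np.linalg.solve(src, dst).T

def bilinear_sample(img, xs, ys):
    """Sample an HxWx3 image at fractional (xs, ys) coordinates by
    bilinearly interpolating the four nearest pixels."""
    x0 = np.clip(np.floor(xs).astype(int), 0, img.shape[1] - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, img.shape[0] - 2)
    fx = (xs - x0)[..., None]  # fractional offsets, broadcast over channels
    fy = (ys - y0)[..., None]
    return ((1 - fx) * (1 - fy) * img[y0, x0] +
            fx * (1 - fy) * img[y0, x0 + 1] +
            (1 - fx) * fy * img[y0 + 1, x0] +
            fx * fy * img[y0 + 1, x0 + 1])
```

In use, each mid-way triangle (e.g., from `scipy.spatial.Delaunay` on the averaged points) gets one affine matrix per source; applying that matrix to the frame pixels inside the triangle gives the fractional source coordinates fed to `bilinear_sample`.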
This section relies on the images and point annotations provided by the FEI Face Database. I chose this dataset because it has a good mix of sexes, ages, face shapes, and ethnicities.
First, I separated the population into a male set and a female set. Next, I computed the average location of all control points for each set. Then I warped all source images to the average geometry, using a similar process to the one used in part 1. After warping, I averaged all images together to create the "Mean Face" of each population.
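Those steps can be sketched compactly. This assumes the part-1 triangulation warp is available as a function `warp_fn(img, src_pts, dst_pts)` (a placeholder name of mine, not the write-up's code):

```python
import numpy as np

def mean_face(images, point_sets, warp_fn):
    """Warp every image in a population to the average geometry, then
    average the warped pixel intensities to produce the Mean Face."""
    mean_pts = np.mean(point_sets, axis=0)  # average control-point locations
    warped = [warp_fn(im, pts, mean_pts) for im, pts in zip(images, point_sets)]
    return np.mean(warped, axis=0), mean_pts
```

Running this once on the male set and once on the female set yields the two mean faces shown below.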
Below are also some source images that have been warped to the mean geometry. Notice the somewhat unnatural changes to the eyes, lips, and hair.
 | Male | Female |
---|---|---|
Source | | |
Warped to Mean | | |
Here, I cropped my face to align with the male dataset. Below are the result of warping my face to the average male geometry, and the result of warping the average male face to my geometry.
By extrapolating from the mean, I can create a "more male" caricature of myself. Mostly my eyebrows are just thicker.
My face cropped | My face warped into average geometry | Average face warped into my geometry | Male average face |
---|---|---|---|
My face | Caricature |
---|---|
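The caricature is a one-line extrapolation past the mean. In this sketch, `alpha` is a knob I'm naming for illustration; the write-up does not state the exact value used:

```python
import numpy as np

def caricature_points(my_pts, mean_pts, alpha):
    """alpha = 0 gives the mean geometry, alpha = 1 gives my own geometry,
    and alpha > 1 exaggerates my differences from the mean (a caricature)."""
    my_pts, mean_pts = np.asarray(my_pts), np.asarray(mean_pts)
    return mean_pts + alpha * (my_pts - mean_pts)
```

Warping my face to `caricature_points(my_pts, mean_pts, alpha)` with `alpha > 1` produces the exaggerated result, e.g., the thicker eyebrows.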
Using the female dataset and its average, some features of my face can be made more feminine. This process is similar to extrapolating from the mean as before, but now using a different category, i.e., female. My eyebrows, lips, and nose are noticeably thinner.
My face | Appearance Morph | Shape Morph | Output Image |
---|---|---|---|
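The shape and appearance morphs in the table can be combined in one sketch. As before, `warp_fn` is my placeholder for the part-1 triangulation warp, and the blend fractions are values I'm choosing for illustration:

```python
import numpy as np

def feminize(my_img, my_pts, mean_img, mean_pts, warp_fn,
             shape_frac=0.5, appearance_frac=0.5):
    """Shape morph: warp my face partway toward the female mean geometry.
    Appearance morph: blend the mean face's colors in at that same geometry."""
    target_pts = (1 - shape_frac) * np.asarray(my_pts) + shape_frac * np.asarray(mean_pts)
    shape_morph = warp_fn(my_img, my_pts, target_pts)      # geometry change only
    mean_warped = warp_fn(mean_img, mean_pts, target_pts)  # mean face, same geometry
    return (1 - appearance_frac) * shape_morph + appearance_frac * mean_warped
```

Setting `appearance_frac = 0` gives the pure shape morph column; the output image uses both blends.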