Face Morphing

James Fong (cs194-abd)

CS194-26: Image Manipulation and Computational Photography Spring 2020

Morphing between Person A and Person B

For Person A, I chose to use my own face. Here is a visualization of what my face looks like:

For Person B, I used the “George” image provided by the class:

Here is the resulting 45-frame animation at 30 fps:

Here is the midway face (\(t=0.5\)):
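A per-frame morph of this kind can be sketched as follows. `morph_frame` is a hypothetical helper name; the full pipeline would additionally warp both images to the blended shape and cross-dissolve their colors with the same weight `t`:

```python
import numpy as np

def morph_frame(pts_a, pts_b, t):
    """Intermediate control-point geometry for morph parameter t in [0, 1].

    pts_a, pts_b: (N, 2) arrays of corresponding control points.
    Returns the blended shape; each frame then warps both images to this
    shape and cross-dissolves them with weights (1 - t) and t.
    """
    return (1.0 - t) * pts_a + t * pts_b

# 45 frames sweeping t from 0 to 1 (played back at 30 fps)
ts = np.linspace(0.0, 1.0, 45)
```

At `t = 0.5` this yields the midway geometry shown above, with the two images blended equally.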

(Note: I tried to get my ears visible in the front-profile image, but my ears are just very flat, it seems. I think I should wear better headphones.)

Population-specific means

The following results use the FEI Face Database, which is linked on the official project page.

I manually went through the population and divided the faces into “male” and “female” categories based on my best judgement. This resulted in \(97\) male and \(103\) female faces. I also made use of the provided “smiling” and “neutral” groups in the database.

We can view the result of warping the male or female faces onto their group's average geometry:
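Warps like these are typically implemented by triangulating the control points (e.g. with a Delaunay triangulation) and applying a per-triangle affine map. A minimal sketch of solving for one triangle's affine matrix, with a function name of my choosing:

```python
import numpy as np

def affine_from_triangles(tri_src, tri_dst):
    """Solve for the 3x3 affine matrix A such that A @ [x, y, 1]^T maps
    each source triangle vertex to the corresponding destination vertex.

    tri_src, tri_dst: (3, 2) arrays of triangle vertices.
    """
    # Homogeneous coordinates: each row is [x, y, 1]
    src = np.hstack([tri_src, np.ones((3, 1))])  # (3, 3)
    dst = np.hstack([tri_dst, np.ones((3, 1))])  # (3, 3)
    # For each row r of src: r @ A.T = corresponding row of dst,
    # so src @ A.T = dst and A.T = solve(src, dst).
    return np.linalg.solve(src, dst).T
```

Applying the inverse of each triangle's matrix to destination pixel coordinates (inverse warping) avoids holes in the output image.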

Averaged Male Faces: Originals on top, Averaged on bottom

Averaged Female Faces: Originals on top, Averaged on bottom

Also, here are the resulting sub-population averages:

Smiling / Neutral Males

Smiling / Neutral Females

For comparison, here are the averages for all faces, male and female combined:

Smiling/Neutral Human Face

We can also warp my face onto the average male face:

Student face warped onto male-average face. Before: Left, After: Right

Male-average face warped onto student face. Before: Left, After: Right

Caricature: Extrapolating from the mean

To make a caricature, we find my face's deviation from the male-face mean and multiply that deviation by a fixed amount. Here, I amplify my deviation from the male-face mean by a factor of \(2\).
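The extrapolation itself is a one-liner; `caricature_shape` is a hypothetical name, assuming control points are stored as (N, 2) arrays:

```python
import numpy as np

def caricature_shape(pts, mean_pts, alpha=2.0):
    """Extrapolate control points away from the population mean.

    alpha = 1 reproduces the original shape; alpha > 1 exaggerates
    the deviation (alpha = 2 doubles the distance from the mean).
    """
    return mean_pts + alpha * (pts - mean_pts)
```

The face image is then warped onto this exaggerated geometry, just as in the morphs above.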

Bells and Whistles: Principal Component Analysis

For the following experiment, I use all of the faces in the same FEI Face Database. We treat every face's control points as a vector in \(\mathbb{R}^{100}\), the result of flattening the \(50\) 2D control points. Then, we use standard PCA techniques to find the top \(11\) principal components. Each principal component is a direction in "face geometry" space. These are visualized as animations, where we loop between \(-3\) and \(3\) standard deviations away from the mean face. I also tried to come up with a human-interpretable description for each PCA "face geometry" direction. Ordered from highest variance to lowest, here are the top \(11\) principal components, visualized on the average face and on my own:

Face size, or, face distance

Head pitch, or, weight gain

Head yaw

Cheek asymmetry

Face stature

Face length

Lip fullness

Chin size

Eye distance

Mouth size

Smile / Frown
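The PCA fit described above can be sketched with a plain SVD; the function name and array shapes here are my own assumptions:

```python
import numpy as np

def face_pca(shapes, k=11):
    """PCA on face geometry.

    shapes: (M, 100) array, one flattened 50-point shape per face.
    Returns (mean, components, stds): the mean shape, the top-k
    principal directions as rows of a (k, 100) array, and the data's
    standard deviation along each of those directions.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data; rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    stds = S[:k] / np.sqrt(len(shapes) - 1)
    return mean, Vt[:k], stds
```

Looping a component's coefficient between \(-3\) and \(3\) times its standard deviation, and warping the image to each resulting shape, produces the animations above.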

We can use these controls to sculpt faces to our liking. For example, here we move \(3\) standard deviations along "face size" (component 1) and \(8\) standard deviations along "smile" (component 11):
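Sculpting then amounts to adding scaled principal directions to the mean. A sketch with hypothetical names, reusing the mean, components, and per-component standard deviations from a PCA fit:

```python
import numpy as np

def sculpt(mean, components, stds, coeffs):
    """Build a shape from the mean plus a few principal directions.

    coeffs: {component_index: number_of_standard_deviations}.
    E.g. {0: 3, 10: 8} moves 3 sigma along component 1 ("face size")
    and 8 sigma along component 11 ("smile").
    """
    shape = mean.copy()
    for i, n_sigma in coeffs.items():
        shape += n_sigma * stds[i] * components[i]
    return shape
```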

As in the previous section, we can find my face's deviation from the mean and extrapolate to create a caricature. Here, however, we work in the "PCA space" defined by the above \(11\) vectors; that is, my deviation from the mean is measured only in terms of the above \(11\) controls. In my opinion, this makes a much better-looking caricature.
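Projecting the deviation onto the 11-component subspace before amplifying can be sketched as follows (the function name is mine; `components` is assumed orthonormal, with one direction per row, as an SVD produces):

```python
import numpy as np

def pca_caricature(pts_flat, mean, components, alpha=2.0):
    """Caricature in PCA space: keep only the part of the deviation
    that lies in the span of the top components, then amplify it."""
    deviation = pts_flat - mean
    coeffs = components @ deviation      # (k,) projections onto each direction
    low_dim = components.T @ coeffs      # deviation restricted to the PCA subspace
    return mean + alpha * low_dim
```

Any deviation orthogonal to the top components is discarded entirely, which is why noise is not amplified along with the distinctive features.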

Caricature generated using PCA (left) vs. original basis (right)

The PCA caricature has a gentler distortion. Since it only has \(11\) degrees of freedom to work with, more emphasis falls on important features and less on amplifying noise. In the original basis, by comparison, we see strange artifacts around the forehead.

Misc

HTML theme for pandoc found here: https://gist.github.com/killercup/5917178