# Face Morphing

## CS 194-26: Computational Photography, Fall 2018, Project 4

### Zheng Shi, cs194-26-aad

Let the image of myself be im1 and the image of George Clooney be im2. I generate a series of images α\*im1 + (1-α)\*im2, with α running from 0 to 1, which gives a smooth morph between George Clooney (α = 0) and me (α = 1). However, we can't just cross-dissolve the pixel values directly, or we will get ghosting artifacts in the midway images. Instead, for each intermediate frame we first warp both images into a common intermediate shape, and then cross-dissolve the warped results.
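The cross-dissolve step itself is just a per-pixel weighted average. A minimal sketch (the function name `cross_dissolve` is my own, and the inputs are assumed to already be warped into the same shape):

```python
import numpy as np

def cross_dissolve(warped1, warped2, alpha):
    # Both inputs are already warped into the same intermediate shape,
    # so a per-pixel weighted average blends color without ghosting.
    return alpha * warped1 + (1 - alpha) * warped2
```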
# Defining Correspondence

To annotate the images for correspondence, I used 42 points, including 4 for each eye, 3 for the nose, 2 for each eyebrow, 5 for the lips, and 4 for the image corners. These points are used for Delaunay triangulation.
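One convenient way to get a triangulation that works for both images is to triangulate the average of the two point sets, so the same triangle indices apply to either annotation. A sketch using `scipy.spatial.Delaunay` (the point arrays here are random stand-ins for the real 42-point annotations):

```python
import numpy as np
from scipy.spatial import Delaunay

# Stand-ins for the two 42-point annotation arrays; each row is (x, y).
rng = np.random.default_rng(0)
pts1 = rng.random((42, 2))
pts2 = rng.random((42, 2))

# Triangulate the average shape so one set of triangles (rows of three
# point indices) can be applied to both images.
mean_pts = (pts1 + pts2) / 2.0
tri = Delaunay(mean_pts)
```

`tri.simplices` is then an array with one row of three point indices per triangle, which indexes `pts1` and `pts2` equally well.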

# Computing the "Mid-way Face"

For the image in the middle of the morph, we first average the annotated points from the two images to get the "average shape". We then use the same triangulation for im1, im2, and the midway image. To warp im1 into the midway shape, inverse warping is preferable to forward warping: I scan through every pixel in the generated image, find which triangle it lies in, and apply that triangle's affine transformation to find the corresponding location in im1. Since the computed coordinates are generally not integers, we interpolate to get the new pixel value.

We do the same for each color channel.
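The two core pieces of the inverse warp can be sketched as follows: solving for the affine map that sends a destination triangle back to its source triangle, and bilinear interpolation at the resulting non-integer coordinates. The function names are my own, not from the project starter code:

```python
import numpy as np

def affine_from_triangles(src_tri, dst_tri):
    """Solve for the 3x3 matrix A with A @ [x, y, 1]^T mapping dst_tri -> src_tri.

    For inverse warping: given a pixel in the destination (average) shape,
    A tells us where to sample in the source image.
    """
    # Homogeneous coordinates, one column per triangle vertex.
    src = np.vstack([np.asarray(src_tri, dtype=float).T, np.ones(3)])
    dst = np.vstack([np.asarray(dst_tri, dtype=float).T, np.ones(3)])
    return src @ np.linalg.inv(dst)

def bilinear_sample(img, x, y):
    """Sample a single-channel img at real-valued (x, y) with bilinear interpolation."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot
```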

# The Morph Sequence

Here I generate 46 frames for the morphing sequence; a sample of them is shown below. The sequence goes from George Clooney (first row, first column), through a fair mixture near the middle, to me (second row, last column).

Combine all 46 frames to generate an animation.
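The frame loop amounts to sweeping α over `np.linspace(0, 1, 46)` and calling the morph routine once per value. A sketch with a stub in place of the full warp-and-dissolve (the `morph` signature and the stub body are assumptions for illustration):

```python
import numpy as np

def morph(im1, im2, pts1, pts2, warp_frac, dissolve_frac):
    # Stub: a real implementation warps both images toward the
    # intermediate shape before blending; here we only dissolve.
    return dissolve_frac * im1 + (1 - dissolve_frac) * im2

# Tiny stand-in images and points, just to run the loop end to end.
im_me = np.full((4, 4, 3), 0.8)
im_clooney = np.full((4, 4, 3), 0.2)
pts = np.zeros((42, 2))

frames = []
for alpha in np.linspace(0.0, 1.0, 46):
    # alpha = 0 is pure Clooney, alpha = 1 is pure me; using the same
    # fraction for warp and dissolve gives a uniform morph.
    frames.append(morph(im_me, im_clooney, pts, pts, alpha, alpha))
```

In practice, using the same fraction for shape warping and color dissolving keeps the geometry and appearance in sync across the sequence.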

### George to me!

# The "Mean face" of a population

For this section, I selected the IMM Face Database, in which there are 240 images from 40 people (6 images per person). The dataset has been annotated.

I compute the average of the corresponding points across all 240 images, which gives the average face shape of the population. Using the same warping technique as before, we can warp every image in the dataset to this mean shape. After that, we cross-dissolve the warped images to get the mean face of the population.
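The mean-face computation reduces to two averages: one over landmark coordinates, one over the warped pixel values. A sketch with random stand-in data (the 58-landmark count is my assumption about the IMM annotations, and `warp_to_shape` is a stub for the triangle-wise inverse warp described above):

```python
import numpy as np

# Stand-ins: 240 annotated faces, each with 58 (x, y) landmarks,
# and tiny 8x8 grayscale images in place of the real photographs.
rng = np.random.default_rng(0)
all_points = rng.random((240, 58, 2))
images = rng.random((240, 8, 8))

# Average the corresponding landmarks to get the population's mean shape.
mean_shape = all_points.mean(axis=0)

def warp_to_shape(img, src_pts, dst_pts):
    return img  # stub: a real version does triangle-wise inverse warping

# Warp every face into the mean shape, then average pixel-wise.
warped = [warp_to_shape(im, p, mean_shape) for im, p in zip(images, all_points)]
mean_face = np.mean(warped, axis=0)
```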

### Example of faces into the average shape

#### Original images

#### Warped

#### Original images

#### Warped

Doing the same thing for a subpopulation also leads to good results.
### Mean of all, male, female, normal expression, smile

### My face warped into average geometry

### Average face warped into my geometry

# Caricatures: Extrapolating from the mean

Now we know that interpolating images gives us a smooth morph. What about extrapolating? In other words, we can use α > 1 in α \* (image of me) + (1-α) \* (image of population mean). As expected, the result exaggerates the characteristics of my face.

However, the tone of my face becomes darker. My guess is that, since a large portion of my image is black and extrapolation pushes some pixel values out of range, rescaling the intensities back to (0, 1) before saving compresses the in-range content and makes it darker.

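One way to sidestep that darkening, as a sketch, is to clip out-of-range values instead of rescaling the whole image (the function name is my own):

```python
import numpy as np

def caricature(img_me_warped, img_mean, alpha):
    """Extrapolate past the mean: alpha > 1 exaggerates my features.

    Both inputs are assumed to already be warped into the same shape.
    Clipping to [0, 1] avoids the global-rescale darkening: out-of-range
    pixels saturate instead of dragging the in-range content down.
    """
    out = alpha * img_me_warped + (1 - alpha) * img_mean
    return np.clip(out, 0.0, 1.0)
```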
# Bells and Whistles

## Morph to smile

### Just the shape

### Just the color

### Both

If we would like an image of me smiling instead, we can compute (image of me) - (average non-smiling face) + (average smiling face) in vector space. That way, the characteristics of my face are kept, and the constructed face is smiling as well.
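This vector arithmetic applies equally to landmark coordinates (shape) and to pixel values after warping to a common shape (appearance). A sketch with random stand-in shape vectors (the 58-landmark size is an assumption for illustration):

```python
import numpy as np

# Stand-ins, all in the same vector space: landmark coordinates here,
# but the same arithmetic works on warped pixel values.
rng = np.random.default_rng(1)
me = rng.random((58, 2))
mean_neutral = rng.random((58, 2))
mean_smiling = rng.random((58, 2))

# Keep my identity; swap the neutral offset for the smiling one.
me_smiling = me - mean_neutral + mean_smiling
```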