This project produces a morph animation of my face into someone else's face. In addition, I compute the mean of a population of faces and extrapolate from that mean to create a caricature of my own face.
A morph is a simultaneous warp of the image shape and a cross-dissolve of the image colors. The cross-dissolve is the easy part; controlling and doing the warp is the hard part. The warp is controlled by defining a correspondence between the two pictures. The correspondence should map eyes to eyes, mouth to mouth, chin to chin, ears to ears, etc., to get the smoothest transformations possible.
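The two ingredients of a morph at fraction t can be sketched as follows; the pointsets and images here are tiny placeholders, not the project's actual data:

```python
import numpy as np

def intermediate_shape(pts_a, pts_b, t):
    """Linear blend of corresponding keypoints: the target shape of the warp."""
    return (1.0 - t) * pts_a + t * pts_b

def cross_dissolve(img_a, img_b, t):
    """Pixel-wise blend of two images that have already been warped to the same shape."""
    return (1.0 - t) * img_a + t * img_b

# At t = 0.5 the shape is the exact midpoint of the two pointsets.
a = np.array([[0.0, 0.0], [10.0, 0.0]])
b = np.array([[2.0, 2.0], [12.0, 2.0]])
mid = intermediate_shape(a, b, 0.5)
```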
Part 1: Defining Correspondences
The first step is to define pairs of corresponding points. A total of 43 points are manually selected and consistently labeled across the two photos. The most important facial features, such as the eyes, lips, and nose, are captured with more points to enhance accuracy. The manually selected points are shown below.
Then I averaged the two pointsets and ran Delaunay triangulation on the mid-way points to generate a triangulated pointset. The Delaunay triangulation is run only once; I then use the same topology for both Adrien's photo and my photo. Because the two photos share the same triangulation, the deformation in the later stages stays mild. The result of the Delaunay triangulation is shown below.
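The shared-topology idea can be sketched with `scipy.spatial.Delaunay`: triangulate the averaged points once, then index either photo's points with the same simplex array. The five-point sets below are illustrative placeholders, not the 43 real landmarks.

```python
import numpy as np
from scipy.spatial import Delaunay

# Placeholder pointsets standing in for the two photos' landmarks.
pts_me = np.array([[0, 0], [4, 0], [0, 4], [4, 4], [2, 2]], dtype=float)
pts_adrien = np.array([[0, 0], [5, 0], [0, 5], [5, 5], [3, 2]], dtype=float)

mid_pts = (pts_me + pts_adrien) / 2.0
tri = Delaunay(mid_pts)          # run once, on the mid-way shape

# The same simplex index array indexes either photo's points, so both
# photos share one triangulation topology.
triangles_me = pts_me[tri.simplices]
triangles_adrien = pts_adrien[tri.simplices]
```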
Part 2: Mid-face computing
Mid-face computation includes the following three steps:
1. computing the average shape (i.e., the average of each keypoint location in the two faces)
2. warping both faces into that shape
3. averaging the colors together.
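The core of step 2 is the affine map that carries one triangle onto another; every pixel inside the triangle is then moved by that map. A minimal sketch, with placeholder triangles:

```python
import numpy as np

def affine_from_triangles(src, dst):
    """src, dst: (3, 2) arrays of triangle vertices.
    Returns the 2x3 matrix T with dst ~= T @ [src_x, src_y, 1]."""
    src_h = np.hstack([src, np.ones((3, 1))])          # homogeneous coordinates
    # Solve src_h @ T.T = dst for the three vertex pairs.
    T_t, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return T_t.T

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = np.array([[1.0, 1.0], [3.0, 1.0], [1.0, 3.0]])
T = affine_from_triangles(src, dst)   # maps each src vertex onto its dst vertex
```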
The warped mid-faces for the two images, as well as the colour-dissolved image, are shown below. Note that I didn't use the interp2d function, since it is about 3× slower; instead, I used flooring to find the corresponding pixels, which avoids out-of-boundary errors and is faster than interp2d.
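The flooring lookup amounts to nearest-pixel sampling with clipped indices; a minimal sketch, with a toy image:

```python
import numpy as np

def sample_floor(img, xs, ys):
    """img: (H, W, C) array; xs, ys: float source coordinates per output pixel.
    Floor to integer pixel indices and clip to the image bounds, so
    out-of-range coordinates never raise an error."""
    h, w = img.shape[:2]
    xi = np.clip(np.floor(xs).astype(int), 0, w - 1)
    yi = np.clip(np.floor(ys).astype(int), 0, h - 1)
    return img[yi, xi]

img = np.arange(12, dtype=float).reshape(3, 4, 1)
# Coordinates past the border are clipped rather than erroring.
out = sample_floor(img, np.array([1.7, 5.0]), np.array([2.2, -1.0]))
```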
Part 3: The Morph Sequence
Notice that since Adrien and I are in different 'subsets of the population', i.e., I have long hair and he has short hair, the intermediate morph doesn't look good. This could be solved by adding more points around the hair so that it looks like it 'grows' from one photo to the other; the same applies to the eyebrows. I'm not sure I have time to redo it, though...
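Each frame of the sequence uses the same fraction t both for the intermediate shape and for the cross-dissolve. A sketch with a placeholder `warp` (identity in the demo; in the real pipeline it would be the triangle-by-triangle affine warp):

```python
import numpy as np

def morph_frame(img_a, img_b, pts_a, pts_b, t, warp):
    """One morph frame: blend shapes with t, warp both images to the
    blended shape, then cross-dissolve with the same t."""
    shape_t = (1.0 - t) * pts_a + t * pts_b
    warped_a = warp(img_a, pts_a, shape_t)
    warped_b = warp(img_b, pts_b, shape_t)
    return (1.0 - t) * warped_a + t * warped_b

# Demo with an identity "warp" and constant images.
identity = lambda img, src, dst: img
frame = morph_frame(np.zeros((2, 2)), np.ones((2, 2)),
                    np.zeros((3, 2)), np.ones((3, 2)), 0.5, identity)
```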
Part 4: The "Mean face" of a population
The mean face is calculated using the database from Stegmann et al. [1], which provides annotated points specifying facial features. A total of 33 male faces with neutral expressions are used for averaging. The first step is to find the average face shape of the whole population. The averaged points plotted on the first image are shown below.
The next step is to warp each face into the mean face's feature points. Some results are shown below.
Then the colour is dissolved among all shape-morphed images. The result of the mean face is shown below.
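The mean-face pipeline can be sketched as: average the pointsets into the mean shape, warp every face to it, then average the warped pixels. `warp` is again a placeholder (identity in the demo), and the images and pointsets below are toy data:

```python
import numpy as np

def mean_face(images, pointsets, warp):
    """Warp every face to the population-mean shape, then average pixels."""
    mean_shape = np.mean(pointsets, axis=0)
    warped = [warp(im, pts, mean_shape) for im, pts in zip(images, pointsets)]
    return np.mean(warped, axis=0), mean_shape

imgs = [np.zeros((2, 2)), np.full((2, 2), 2.0)]
ptss = [np.zeros((4, 2)), np.ones((4, 2))]
avg_img, avg_shape = mean_face(imgs, ptss, lambda im, s, d: im)
```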
Then I morphed my face into the average face, and vice versa.
Part 5: Caricatures: Extrapolating from the mean
Rather than interpolating between the average face and my face, I extrapolated my face further away from the mean face, so that my characteristic features are exaggerated. The results from 1.0 (completely my face) to 1.8 (extrapolation, with -0.8 × mean face) are shown below. The head rotation is amplified, and my asymmetric smile is also exaggerated (and it's a little weird, horror-movie like).
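The extrapolation is the same linear blend as before, just with a weight above 1: alpha = 1.0 reproduces my shape exactly, while alpha = 1.8 is 1.8 × my points minus 0.8 × the mean points. The pointsets below are placeholders:

```python
import numpy as np

def caricature_shape(my_pts, mean_pts, alpha):
    """alpha = 1.0 gives my shape; alpha > 1.0 extrapolates past it,
    exaggerating whatever distinguishes my shape from the mean."""
    return mean_pts + alpha * (my_pts - mean_pts)

mine = np.array([[10.0, 4.0]])
mean = np.array([[8.0, 4.0]])
```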
Part 6: Bells and Whistles
1. Male me and aging me
In this step, I use Dlib's 68 face landmarks [2] to automatically collect facial feature points. The point selection is similar to my own approach, but I had missed the eyebrows, which are a non-negligible feature of the human face. I used my face's point positions minus (male average points minus female average points) to emphasize the male features. The 68 face landmarks are shown below.
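The shift can be sketched as moving my landmarks along the difference between the male and female population means. The pointsets below are invented placeholders, and the sign of `strength` picks the direction of the shift (toward the male or the female mean):

```python
import numpy as np

def shift_shape(my_pts, female_mean, male_mean, strength=1.0):
    """Shift landmarks along the male-female mean difference.
    strength > 0 moves toward the male mean, strength < 0 toward the female."""
    return my_pts + strength * (male_mean - female_mean)

# Toy pointsets standing in for the real population means.
me = np.array([[5.0, 5.0]])
female_mean = np.array([[4.0, 5.0]])
male_mean = np.array([[6.0, 5.0]])
```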
I also tried to morph my face into a senior's shape. The male me and the aging me are shown below. Although the results look artificial and unreal, the trend of the muscle movement gives a hint of how my face may change in 30 years.
2. Aging me gif
The last thing I did was to morph photos of myself from different ages. I chose 5 photos, ranging from elementary school to graduation from graduate school. Between each pair of photos I create 45 frames to show the morph sequence. The photos were first manually aligned in Photoshop, and then dlib was used to find feature points. The morph sequence is similar to what I did in Part 2. The Delaunay triangulation is done for each pair of images to avoid large deformation. The gif is as follows:
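The frame schedule for the gif can be sketched as: 5 photos form 4 consecutive pairs, each rendered at 45 in-between fractions, so the gif has 4 × 45 frames. Function and variable names here are illustrative:

```python
def frame_schedule(n_photos, frames_per_pair):
    """List of (source photo, target photo, t) triples for the whole gif."""
    schedule = []
    for pair in range(n_photos - 1):
        for k in range(frames_per_pair):
            t = k / (frames_per_pair - 1)
            schedule.append((pair, pair + 1, t))
    return schedule

sched = frame_schedule(5, 45)   # 4 pairs x 45 frames
```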
What I found interesting is that even though the photos come from different 'sub-populations' of me (some with a closed smile and some smiling with teeth, i.e., a Duchenne smile), the morphed results are satisfying. This is perhaps because there is no sudden change between photos: the teeth appear small at first, then larger, and so on, with the intermediate smiles acting as a coordination.
Reference
[1] M. B. Stegmann, B. K. Ersbøll, and R. Larsen. FAME - a flexible appearance modelling environment. IEEE Trans. on Medical Imaging, 22(10):1319-1331, 2003. Retrieved from https://web.archive.org/web/20210305094647/http://www2.imm.dtu.dk/~aam/datasets/datasets.html
[2] N. Boyko, O. Basystiuk and N. Shakhovska, "Performance Evaluation and Comparison of Software for Face Recognition, Based on Dlib and Opencv Library," 2018 IEEE Second International Conference on Data Stream Mining & Processing (DSMP), 2018, pp. 478-482, doi: 10.1109/DSMP.2018.8478556.