Voice2Mesh: Cross-Modal 3D Face Model Generation from Voices
arXiv 2021

Abstract

(Overview figure)

This work analyzes whether 3D face models can be learned from only the speech of speakers. Previous works on cross-modal face synthesis study image generation from voices. However, image synthesis involves variations such as hairstyles, backgrounds, and facial textures that are arguably irrelevant to voice, or at least lack direct studies showing a correlation. We instead investigate the ability to reconstruct 3D faces, concentrating only on geometry, which is more physiologically grounded. We propose both supervised and unsupervised learning frameworks. In particular, we demonstrate how unsupervised learning is possible in the absence of a direct voice-to-3D-face dataset and under limited availability of 3D face scans, when the model is equipped with knowledge distillation. To evaluate performance, we also propose several metrics that measure the geometric fitness of two 3D faces based on points, lines, and regions. Experimental results suggest that 3D face shapes can indeed be reconstructed from voices, and that our method improves over the baseline. The largest performance gains (15% - 20%) are on the ear-to-ear distance ratio metric (ER), which coincides with the intuition that one can roughly envision from a person's voice alone whether the speaker's face is overall wider or thinner. Code and data will be released publicly.
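As a concrete illustration of the point-based metrics, the following is a minimal sketch of how an ear-to-ear distance ratio (ER) style measurement could be computed between a predicted and a reference mesh. It is an assumption-laden illustration, not the exact definition used in the paper; the landmark vertex indices are hypothetical placeholders.

# Hypothetical sketch of an ER-style metric; LEFT_EAR_IDX and RIGHT_EAR_IDX
# are placeholder landmark vertex indices on the mesh topology.
import numpy as np

LEFT_EAR_IDX, RIGHT_EAR_IDX = 0, 1  # placeholder landmark indices

def ear_to_ear_width(vertices):
    # vertices: (V, 3) array of mesh vertex coordinates
    return np.linalg.norm(vertices[LEFT_EAR_IDX] - vertices[RIGHT_EAR_IDX])

def ear_ratio_error(pred_vertices, gt_vertices):
    # relative error between predicted and reference ear-to-ear widths
    pred_w = ear_to_ear_width(pred_vertices)
    gt_w = ear_to_ear_width(gt_vertices)
    return abs(pred_w - gt_w) / gt_w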

Voice2Mesh: supervised setting

The supervised framework is shown as follows. This setting serves as an ideal case in which paired voice and 3D face data exist. The supervised framework directly learns the 3D face reconstruction pipeline from paired voices and 3D faces.
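The sketch below shows one plausible way such a supervised pipeline could be wired up: a convolutional voice encoder over spectrograms feeding an MLP decoder that regresses mesh vertices, trained with a per-vertex L1 loss on paired data. It is a simplified illustration rather than the exact architecture in the paper; layer sizes and the vertex count are placeholders.

# Minimal illustrative sketch of a supervised voice-to-mesh pipeline (PyTorch).
# Layer sizes and the vertex count are placeholders, not the paper's settings.
import torch
import torch.nn as nn

class VoiceEncoder(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, spec):  # spec: (B, 1, n_mels, n_frames) spectrogram
        return self.fc(self.conv(spec).flatten(1))  # (B, embed_dim) voice embedding

class MeshDecoder(nn.Module):
    def __init__(self, embed_dim=128, num_vertices=5023):  # placeholder vertex count
        super().__init__()
        self.num_vertices = num_vertices
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 512), nn.ReLU(),
            nn.Linear(512, num_vertices * 3),
        )

    def forward(self, z):  # z: (B, embed_dim)
        return self.mlp(z).view(-1, self.num_vertices, 3)  # (B, V, 3) vertices

# One supervised step on a paired batch (spectrogram, ground-truth vertices):
#   pred = MeshDecoder()(VoiceEncoder()(spec))
#   loss = nn.L1Loss()(pred, gt_vertices); loss.backward()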






Voice2Mesh: unsupervised setting

The unsupervised framework is shown as follows. This setting serves a more realistic purpose, since large-scale paired voice and 3D face data are very hard to obtain. The unsupervised framework uses knowledge distillation (KD) to distill knowledge from an image-to-3D-face expert and thereby facilitate unsupervised end-to-end training.
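A minimal sketch of the distillation idea follows, under the assumption that a frozen image-to-3D-face expert supplies pseudo target meshes from the speaker's photo while the voice branch learns to match them; the module names follow the supervised sketch above and are illustrative only.

# Hypothetical knowledge-distillation step: the frozen image-based expert supplies
# pseudo ground-truth meshes, so no paired voice / 3D-scan data is needed.
import torch
import torch.nn.functional as F

def distillation_step(voice_encoder, mesh_decoder, image_expert, spec, face_image, optimizer):
    with torch.no_grad():
        teacher_vertices = image_expert(face_image)        # pseudo targets (B, V, 3)
    student_vertices = mesh_decoder(voice_encoder(spec))   # prediction from voice alone
    loss = F.l1_loss(student_vertices, teacher_vertices)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()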

Result 1: face meshes of different shapes inferred from voices. The upper images are for reference.




Result 2: comparison of generated 3D face meshes with the baseline.




Result 3: generation consistency for the same identity using different utterances.





Broader Impact

This work serves a purely academic research purpose for cross-modal learning. We aim to recover the potential statistical correlation between voices and 3D faces based on supporting physiological and psychological evidence. As is always the case with machine learning and deep learning methods, which exhibit inference bias and variance, when an uncommon correspondence between a voice and 3D facial geometry occurs, or when a speaker does not use their natural voice, our model would not perform as well as on more representative and typical cases. The same concern is also raised by prior work on 2D representations in this line of research.

As mentioned in the ethical considerations of that prior work, hairstyle and color variations are among its concerns; in contrast, our 3D approach leaves out these issues and focuses only on 3D facial geometry analysis. This better isolates the statistical correlations between voices and 3D face shapes. From this perspective, our work is a concrete step toward fewer ethical concerns about inference variations in this line of research.

The website template was borrowed from Michaël Gharbi.