SingingHead: A Large-scale 4D Dataset for Singing Head Animation

Sijing Wu
Yunhao Li
Weitian Zhang
Jun Jia
Yucheng Zhu
         
Paper
Code

We present a new facial animation dataset, SingingHead, which contains more than 27 hours of synchronized singing video, 3D facial motion, singing audio, and background music (BGM) collected from 76 subjects. Along with the SingingHead dataset, we propose a unified framework, UniSinger, to generate both 3D facial motion and 2D singing portrait video from input singing audio.

Abstract


Singing, a common facial movement second in frequency only to talking, can be regarded as a universal language across ethnicities and cultures and plays an important role in emotional communication, art, and entertainment. However, it is often overlooked in audio-driven facial animation due to the lack of singing head datasets and the domain gap between singing and talking in rhythm and amplitude. To this end, we collect a high-quality, large-scale singing head dataset, SingingHead, which consists of more than 27 hours of synchronized singing video, 3D facial motion, singing audio, and background music from 76 individuals and 8 types of music. Along with the SingingHead dataset, we argue that the 3D and 2D facial animation tasks can be solved together, and we propose a unified singing facial animation framework named UniSinger that achieves both singing audio-driven 3D singing head animation and 2D singing portrait video synthesis. Extensive comparisons with state-of-the-art 3D facial animation and 2D portrait animation methods demonstrate the necessity of singing-specific datasets for singing head animation and the promising performance of our unified facial animation framework.

Demo Video


The demo video shows samples of the 3D facial motion and 2D singing portrait videos generated from input singing audio.