Improving Generative Adversarial Network based Vocoding Through Multi-Scale Convolution
Wanting Li, Yiting Chen, Buzhou Tang
Vocoding is a sub-task of text-to-speech (TTS) that aims to generate audio from intermediate representations between text and audio. Several recent works have shown that generative adversarial network (GAN) based vocoders can generate audio of high quality. While GAN-based neural vocoders synthesize speech far faster than autoregressive vocoders, their audio fidelity still cannot match that of ground-truth samples. One major cause of the degraded audio quality and blurred spectrograms is the average pooling layers in the discriminator. As the multi-scale discriminator (MSD) commonly used by recent GAN-based vocoders applies several average pooling layers to capture different frequency bands, we believe it is crucial to prevent high-frequency information from leaking away during average pooling. This paper proposes MSCGAN, which solves the above problem and achieves higher-fidelity speech synthesis. We demonstrate that substituting the average pooling process with a multi-scale convolution architecture effectively retains high-frequency features and thus forces the generator to recover audio details in both the time and frequency domains. Compared with other state-of-the-art GAN-based vocoders, MSCGAN produces competitive audio with clearer spectrograms and a higher mean opinion score (MOS) in subjective human evaluation.
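The core claim above — that average pooling in the discriminator discards high-frequency information while a convolution with a learned kernel can retain it — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the 1-D signal, the helper functions, and the hand-picked kernel are all illustrative assumptions. A signal alternating at the Nyquist rate is annihilated by 2x average pooling, whereas a strided convolution with a difference kernel preserves its energy.

```python
import numpy as np

def avg_pool1d(x, k=2):
    # Non-overlapping average pooling with kernel size = stride = k.
    return x[: len(x) // k * k].reshape(-1, k).mean(axis=1)

def strided_conv1d(x, w, stride=2):
    # Valid-mode cross-correlation with kernel w, then subsample by `stride`
    # (np.convolve with the reversed kernel implements cross-correlation).
    return np.convolve(x, w[::-1], mode="valid")[::stride]

# Illustrative signal oscillating at the Nyquist rate — the highest
# frequency representable at this sampling rate.
x = np.array([1.0, -1.0] * 8)

# Average pooling: each adjacent (+1, -1) pair averages to 0, so the
# high-frequency content is destroyed before downsampling.
pooled = avg_pool1d(x, k=2)          # -> all zeros

# A difference kernel (one possible learned filter) responds to exactly
# that oscillation, so its energy survives the downsampling step.
conv = strided_conv1d(x, np.array([0.5, -0.5]), stride=2)  # -> all ones
```

In an MSD, each average pooling stage acts as the low-pass filter above, so the discriminator branches operating on pooled signals never see the high band; replacing pooling with learnable strided convolutions, as MSCGAN proposes, lets the network choose filters that keep those components.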