In this paper, we present AISHELL-3, a large-scale, high-fidelity multi-speaker Mandarin speech corpus that can be used to train multi-speaker Text-to-Speech (TTS) systems.
The corpus contains roughly 85 hours of emotion-neutral recordings spoken by 218 native speakers of Mandarin Chinese.
Auxiliary speaker attributes such as gender, age group, and native accent are explicitly annotated and provided in the corpus.
Transcripts at both the Chinese character level and the pinyin level are provided along with the recordings.
We present a baseline system that uses AISHELL-3 for multi-speaker Mandarin speech synthesis.
The multi-speaker synthesis system is an extension of Tacotron-2 in which a speaker verification model and a corresponding voice-similarity loss are incorporated as a feedback constraint, sketched below.
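As a rough illustration of this feedback constraint, the sketch below adds a cosine-similarity term between the speaker embeddings of the reference and synthesized spectrograms to the usual reconstruction loss. The module names (`tacotron2`, `speaker_encoder`) and the weight `alpha` are illustrative assumptions, not the exact interfaces from our code release.

```python
import torch
import torch.nn.functional as F

def feedback_constraint_loss(tacotron2, speaker_encoder, text, ref_mel, alpha=0.1):
    """Reconstruction loss plus a voice-similarity penalty.

    `speaker_encoder` is a frozen, pre-trained speaker verification model
    mapping a mel spectrogram to a fixed-size embedding. All names and the
    weight `alpha` are illustrative, not the paper's exact settings.
    """
    # Embed the reference utterance; the verification model stays frozen.
    with torch.no_grad():
        ref_embed = speaker_encoder(ref_mel)

    # Condition the synthesizer on the reference speaker embedding.
    pred_mel = tacotron2(text, speaker_embedding=ref_embed)

    # Standard spectrogram reconstruction term.
    recon = F.mse_loss(pred_mel, ref_mel)

    # Feedback term: the synthesized spectrogram should embed close to the
    # reference in speaker-verification space.
    syn_embed = speaker_encoder(pred_mel)
    similarity = F.cosine_similarity(syn_embed, ref_embed, dim=-1).mean()

    return recon + alpha * (1.0 - similarity)
```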
We aim to use the presented corpus to build a robust synthesis model capable of zero-shot voice cloning.
The system trained on this dataset also generalizes well to speakers never seen during training.
Objective evaluation results from our experiments show that the proposed multi-speaker synthesis system achieves high voice similarity, measured by both speaker-embedding cosine similarity and equal error rate (EER).
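For reference, the equal error rate is the operating point where the false acceptance and false rejection rates of the verification system coincide. A minimal sketch using scikit-learn, assuming `scores` holds a similarity score per trial pair and `labels` marks same-speaker (1) versus different-speaker (0) trials:

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(scores, labels):
    """EER from per-trial similarity scores (inputs are assumed arrays)."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    # EER sits where the false positive and false negative rates cross.
    idx = np.nanargmin(np.abs(fnr - fpr))
    return (fpr[idx] + fnr[idx]) / 2.0
```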
The dataset, baseline system code and generated samples are available online.
Sample audio and labels from the AISHELL-3 dataset (in the original 44.1 kHz format)
The following sections exhibit audio samples (in down-sampled 16 kHz format) generated by the baseline TTS system described in detail in our paper.
We use a 16 kHz MelGAN trained on the presented dataset as our neural vocoder module.
Listed below are pairs of original audio samples and their mel-reconstructed counterparts,
which demonstrate the performance of the vocoder used to produce the synthesized voices
presented on this page.
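These pairs follow the usual copy-synthesis recipe: a mel spectrogram is extracted from the original recording and the waveform is resynthesized by the vocoder alone, so any degradation heard is attributable to the vocoder. A minimal sketch of the procedure; the `vocoder` callable is assumed to be a pre-trained MelGAN generator, and the analysis settings shown are typical values that must match the vocoder's training configuration:

```python
import torch
import torchaudio

def copy_synthesis(wav_path, vocoder, sample_rate=16000, n_mels=80):
    """Original audio -> mel spectrogram -> vocoder -> reconstructed audio."""
    wav, sr = torchaudio.load(wav_path)
    if sr != sample_rate:  # the demo audio is down-sampled to 16 kHz
        wav = torchaudio.functional.resample(wav, sr, sample_rate)

    # Mel analysis; frame/hop settings here are assumed and must match
    # the configuration the vocoder was trained with.
    mel_fn = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate, n_fft=1024, hop_length=256, n_mels=n_mels
    )
    mel = mel_fn(wav)

    # Resynthesize the waveform from the mel spectrogram alone.
    with torch.no_grad():
        reconstructed = vocoder(mel)
    return reconstructed
```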
Below are pairs of original recordings and synthesized samples
conditioned on the same text and speaker embeddings (not ground-truth aligned samples);
all of the reference audio is held out from the training data:
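Producing such a sample for an unseen speaker reduces to three steps: embed a reference utterance with the speaker verification encoder, synthesize a mel spectrogram for the target text conditioned on that embedding, and vocode the result. A hedged sketch with illustrative module names (none of these interfaces are taken from our code release):

```python
import torch

def clone_voice(text, ref_mel, speaker_encoder, tacotron2, vocoder):
    """Zero-shot voice cloning sketch; all module interfaces are assumptions."""
    with torch.no_grad():
        # 1. Derive a speaker embedding from a held-out reference recording.
        spk_embed = speaker_encoder(ref_mel)
        # 2. Synthesize a mel spectrogram for the new text, conditioned on
        #    the unseen speaker's embedding.
        mel = tacotron2(text, speaker_embedding=spk_embed)
        # 3. Convert the mel spectrogram to a waveform with the vocoder.
        wav = vocoder(mel)
    return wav
```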
Below are synthesized samples whose textual content is excerpted from a random customer-service phone call.