Several shell scripts provided in `./examples/tiny/local` will help you quickly try out most major modules, including data preparation, model training, case inference, and model evaluation, with a few public datasets (e.g. LibriSpeech, Aishell). Reading these examples will also help you understand how to make them work with your own data.
Some of the scripts in `./examples` are not configured with GPUs. If you want to train with 8 GPUs, please modify `CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7`. If you don't have any GPU available, please set `CUDA_VISIBLE_DEVICES=` (empty) to use CPUs instead. Besides, if an out-of-memory problem occurs, just reduce `batch_size` to fit.
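As an illustration, you can also control the visible device set from a Python launcher by preparing the environment before starting a training script. This is only a sketch: the script path `local/train.sh` is an assumption, so the launch line is left commented out.

```python
import os
import subprocess

# CUDA_VISIBLE_DEVICES controls which GPUs the child process can see.
# An empty value hides all GPUs, so training falls back to CPU.
env_gpu = dict(os.environ, CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7")  # 8 GPUs
env_cpu = dict(os.environ, CUDA_VISIBLE_DEVICES="")                 # CPU only

# Hypothetical launch (script name assumed; uncomment to actually run):
# subprocess.run(["bash", "local/train.sh"], env=env_gpu, check=True)
```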
Let's take a tiny sampled subset of the LibriSpeech dataset as an example.
- Go to the directory

  ```bash
  cd examples/tiny
  ```

  Notice that this is only a toy example with a tiny sampled subset of LibriSpeech. If you would like to try the complete dataset (training would take several days), please go to `examples/librispeech` instead.
- Source env

  ```bash
  source path.sh
  ```

  You must do this before anything else. It sets `MAIN_ROOT` to the project directory and uses the default `deepspeech2` model as `MODEL`; you can change these in the script.
- Main entry point

  ```bash
  bash run.sh
  ```

  This is just a demo; please make sure each step works well before moving on to the next.
More detailed information is provided in the following sections. Wish you a happy journey with the DeepSpeech on PaddlePaddle ASR engine!
The key steps of training for Mandarin are the same as for English, and we also provide an example for Mandarin training with Aishell in `examples/aishell/local`. As mentioned above, please execute `sh data.sh`, `sh train.sh` and `sh test.sh` to do data preparation, training, and testing, respectively.
To evaluate a model's performance quantitatively, please run:

```bash
CUDA_VISIBLE_DEVICES=0 bash local/test.sh
```

The error rate (default: word error rate; can be set with `error_rate_type`) will be printed.
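For reference, the word error rate reported here is the edit distance between the reference and hypothesis word sequences, normalized by the reference length. A minimal sketch of the computation (not the project's actual implementation):

```python
def edit_distance(ref, hyp):
    # Classic dynamic-programming Levenshtein distance over token lists:
    # minimum number of substitutions, insertions, and deletions.
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def word_error_rate(reference, hypothesis):
    # WER = word-level edit distance / number of reference words.
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)
```

Setting `error_rate_type` to character error rate applies the same distance at the character level instead, which is the usual choice for Mandarin.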
We provide two types of CTC decoders: a CTC greedy decoder and a CTC beam search decoder. The CTC greedy decoder is an implementation of the simple best-path decoding algorithm: at each timestep it selects the most likely token, so it is greedy and only locally optimal. The CTC beam search decoder, in contrast, uses a heuristic breadth-first graph search to approach a globally optimal result; it also requires a pre-trained KenLM language model for better scoring and ranking. The decoder type can be set with the argument `decoding_method`.
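To make the greedy decoder concrete, here is a minimal best-path decoding sketch (an illustration, not the project's implementation): take the argmax token at each timestep, collapse consecutive repeats, then drop blanks.

```python
def ctc_greedy_decode(frame_probs, vocab, blank=0):
    # frame_probs: per-timestep token probability rows (T x V);
    # vocab: token list indexed by position; blank: index of the CTC blank.
    best_path = [max(range(len(frame)), key=frame.__getitem__)
                 for frame in frame_probs]
    out, prev = [], None
    for idx in best_path:
        # Emit a token only when it differs from the previous frame's choice
        # (collapses repeats) and is not the blank symbol.
        if idx != prev and idx != blank:
            out.append(vocab[idx])
        prev = idx
    return "".join(out)
```

The beam search decoder instead keeps the top-scoring prefix hypotheses at each frame and rescores them with the language model, which is why it needs KenLM while the greedy decoder does not.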