The pre-training README mentions that the dataset, vocab, and roberta_zh have to be prepared before training.
Could you give an example of the files expected in the dataset and vocab folders? For reference, below is a sketch of what I am assuming.
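My guess is a BERT-style `vocab.txt` with one token per line, and the corpus as plain-text files with one document (or sentence) per line. The file names and layout below are only my assumption, along with the small sanity check I would run on them; please correct me if the expected format is different.

```python
# Assumed layout (hypothetical, please correct):
#
# vocab/
#   vocab.txt        # one token per line, BERT/RoBERTa style
# dataset/
#   corpus_000.txt   # raw text, one document (or sentence) per line
from pathlib import Path

vocab = Path("vocab/vocab.txt").read_text(encoding="utf-8").splitlines()
print(f"vocab size: {len(vocab)}")      # e.g. 21128 for Chinese RoBERTa vocabs
print("first tokens:", vocab[:5])       # usually [PAD], [UNK], [CLS], [SEP], [MASK]

for fp in sorted(Path("dataset").glob("*.txt")):
    with fp.open(encoding="utf-8") as f:
        first_line = f.readline().strip()
    print(fp.name, "->", first_line[:50])  # peek at the first document
```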
Also, what do you mean by "Place the checkpoint of Chinese RoBERTa"? I would like to train Chinese BART, so I am not sure why a RoBERTa checkpoint is needed.
Last, if I wish to replace the Jieba tokenizer with a custom tokenizer of my own, how can I do so? A sketch of what I have in mind is below. Thanks.
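To make that last question concrete: I am picturing a drop-in replacement with the same interface as `jieba.lcut` (text in, list of word strings out). The custom tokenizer shown here (a SentencePiece model and its file name) is purely hypothetical; I mainly want to know where in the pre-training code such a swap would go.

```python
import jieba
import sentencepiece as spm

# Current behaviour, as I understand it: Jieba word segmentation.
def segment_with_jieba(text: str) -> list[str]:
    return jieba.lcut(text)

# The kind of replacement I have in mind: same interface, but backed by my
# own tokenizer. "my_tokenizer.model" is a hypothetical SentencePiece model.
sp = spm.SentencePieceProcessor(model_file="my_tokenizer.model")

def segment_with_custom(text: str) -> list[str]:
    return sp.encode(text, out_type=str)

print(segment_with_jieba("今天天气很好"))
print(segment_with_custom("今天天气很好"))
```

Is it enough to replace the segmentation function wherever the data pipeline calls Jieba, or are there other places (e.g. vocab construction) that also depend on it?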