Abstract:
1) Augmenter WordNet (https://github.com/QData/TextAttack) 2) Augmenter Contextual (https://github.com/makcedward/nlpaug) 3) Paraphrase via back translation … Read more
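The augmenters listed above all follow the same basic idea: swap some words in a sentence for near-synonyms (from WordNet, a contextual language model, or a round-trip translation) to create extra training examples. A minimal, dependency-free sketch of that idea, using a tiny hypothetical synonym lexicon in place of a real WordNet lookup:

```python
import random

# Hypothetical mini-lexicon standing in for WordNet; the real
# libraries (TextAttack, nlpaug) query WordNet or a masked LM.
SYNONYMS = {
    "quick": ["fast", "rapid"],
    "happy": ["glad", "joyful"],
    "big": ["large", "huge"],
}

def augment(sentence, p=1.0, seed=0):
    """Replace each word that has a known synonym with probability p."""
    rng = random.Random(seed)
    out = []
    for word in sentence.split():
        if word in SYNONYMS and rng.random() < p:
            out.append(rng.choice(SYNONYMS[word]))
        else:
            out.append(word)
    return " ".join(out)

print(augment("the quick fox is happy"))
```

The output keeps the sentence length and structure but varies the surface form, which is exactly the property these augmentation libraries exploit to enlarge a labeled dataset without changing the labels.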
Abstract:
Google's pretrained BERT models: BERT-Large, Uncased (Whole Word Masking): 24-layer, 1024-hidden, 16-heads, 340M parameters; BERT-Large, Cased (Whole Word Masking): 2… Read more
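The "340M parameters" figure for BERT-Large follows directly from the 24-layer / 1024-hidden configuration. A rough back-of-the-envelope sketch, assuming the standard BERT architecture (30522-token vocabulary, 512 positions, FFN size 4×hidden, learned embeddings, a pooler layer):

```python
def bert_param_count(layers=24, hidden=1024, vocab=30522,
                     max_pos=512, type_vocab=2):
    """Approximate parameter count for a BERT-style encoder.

    A sketch under standard-architecture assumptions; it ignores
    the pretraining heads, so it slightly undercounts.
    """
    h = hidden
    # Token + position + segment embeddings, plus their LayerNorm.
    embeddings = (vocab + max_pos + type_vocab) * h + 2 * h
    # Self-attention: Q, K, V, and output projections (weights + biases).
    attention = 4 * (h * h + h)
    # Feed-forward block: h -> 4h -> h, with biases.
    ffn = (h * 4 * h + 4 * h) + (4 * h * h + h)
    # Two LayerNorms per layer (gain + bias each).
    layer_norms = 2 * 2 * h
    per_layer = attention + ffn + layer_norms
    pooler = h * h + h
    return embeddings + layers * per_layer + pooler

print(f"{bert_param_count() / 1e6:.0f}M parameters")
```

This lands within a few percent of the quoted 340M; plugging in 12 layers and 768 hidden units gives roughly the 110M quoted for BERT-Base.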