Abstract:
1) Augmenter: WordNet (https://github.com/QData/TextAttack) 2) Augmenter: Contextual (https://github.com/makcedward/nlpaug) 3) Paraphrase via back translati… Read more
posted @ 2022-04-26 23:09 zxcayumi
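Below is a minimal usage sketch of the three augmentation routes named in this abstract, assuming the textattack and nlpaug packages are installed; the specific model names (bert-base-uncased, facebook/wmt19-en-de, facebook/wmt19-de-en) are illustrative choices, not taken from the post.

# Sketch of the three augmentation routes (assumes textattack and nlpaug are installed).
from textattack.augmentation import WordNetAugmenter   # 1) WordNet synonym replacement
import nlpaug.augmenter.word as naw                     # 2) contextual, 3) back translation

text = "The quick brown fox jumps over the lazy dog."

# 1) WordNet augmenter from TextAttack: replaces words with WordNet synonyms.
wordnet_aug = WordNetAugmenter()
print(wordnet_aug.augment(text))

# 2) Contextual augmenter from nlpaug: substitutes words using a masked language model
#    (bert-base-uncased here is an illustrative choice).
contextual_aug = naw.ContextualWordEmbsAug(
    model_path="bert-base-uncased", action="substitute"
)
print(contextual_aug.augment(text))

# 3) Paraphrase via back translation: EN -> DE -> EN with pretrained MT models
#    (the wmt19 model names are illustrative).
back_translation_aug = naw.BackTranslationAug(
    from_model_name="facebook/wmt19-en-de",
    to_model_name="facebook/wmt19-de-en",
)
print(back_translation_aug.augment(text))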
Abstract:
Google's BERT pre-trained models: BERT-Large, Uncased (Whole Word Masking): 24-layer, 1024-hidden, 16-heads, 340M parameters; BERT-Large, Cased (Whole Word Masking): 2… Read more
posted @ 2022-04-26 18:20 zxcayumi
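As a hedged sketch of how a checkpoint like the one described in this abstract could be loaded and inspected with the Hugging Face transformers library; the model id bert-large-uncased-whole-word-masking is an assumed Hub name for that checkpoint, not something stated in the post.

# Sketch: load a BERT-Large (Whole Word Masking) checkpoint with Hugging Face transformers.
# The model id below is an assumed Hub name for the checkpoint described above.
from transformers import AutoConfig, AutoModel, AutoTokenizer

model_name = "bert-large-uncased-whole-word-masking"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Inspect the architecture: expect 24 hidden layers, 1024 hidden size,
# 16 attention heads (~340M parameters) for BERT-Large.
config = AutoConfig.from_pretrained(model_name)
print(config.num_hidden_layers, config.hidden_size, config.num_attention_heads)

# Run a single sentence through the encoder.
inputs = tokenizer("Whole word masking pre-trains BERT on full words.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 1024)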
