April 2022 Archive
Summary: RSS feed: https://blog.csdn.net/marmove/rss/list?spm=1001.2014.3001.5494 Link: https://pan.baidu.com/s/1H2mxeRMui1k1VySI80EZaQ Extraction code: yaij
Read full article
Summary: 1) WordNet augmenter (https://github.com/QData/TextAttack) 2) Contextual augmenter (https://github.com/makcedward/nlpaug) 3) Paraphrase via back translation
Read full article
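The three augmentation strategies listed in this post (WordNet synonym swap, contextual substitution, back translation) share one pattern: replace some tokens while keeping the sentence's meaning. A minimal pure-Python sketch of the WordNet-style synonym swap, using a small hand-written thesaurus as a hypothetical stand-in for the real WordNet/TextAttack dependency:

```python
import random

# Hypothetical mini-thesaurus standing in for WordNet synsets.
SYNONYMS = {
    "quick": ["fast", "rapid"],
    "happy": ["glad", "joyful"],
}

def synonym_augment(sentence, rng=None):
    """Replace every word that has known synonyms with a randomly
    chosen synonym; leave all other words untouched."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    out = []
    for word in sentence.split():
        if word in SYNONYMS:
            out.append(rng.choice(SYNONYMS[word]))
        else:
            out.append(word)
    return " ".join(out)

print(synonym_augment("the quick dog is happy"))
```

The real augmenters differ mainly in where the candidates come from: WordNet synsets, a masked language model's contextual predictions, or a round trip through another language.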
Summary: Google's BERT pretrained models: BERT-Large, Uncased (Whole Word Masking): 24-layer, 1024-hidden, 16-heads, 340M parameters BERT-Large, Cased (Whole Word Masking): 2
Read full article
Summary: import torch; from transformers import BertTokenizer, BertModel, BertForMaskedLM; from transformers import logging; logging.set_verbosity_error()
Read full article
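The snippet in this summary loads `BertForMaskedLM`, whose job is to score candidate tokens for a `[MASK]` position given the surrounding context. The fill-mask idea can be sketched without downloading a model, using a hypothetical unigram-frequency scorer in place of BERT's contextual one:

```python
from collections import Counter

# Hypothetical corpus statistics standing in for a trained masked LM.
CORPUS = "the cat sat on the mat the cat ate the fish".split()
UNIGRAM = Counter(CORPUS)

def fill_mask(tokens, mask="[MASK]"):
    """Replace each mask token with the highest-scoring candidate.

    A real masked LM (e.g. BertForMaskedLM) scores candidates from the
    full sentence context; this toy scorer uses only corpus frequency.
    """
    candidates = list(UNIGRAM)
    return [max(candidates, key=UNIGRAM.get) if t == mask else t
            for t in tokens]

print(fill_mask(["the", "[MASK]", "sat"]))
```

Swapping the frequency table for a transformer that conditions on the unmasked tokens is exactly what the `BertForMaskedLM` forward pass provides.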
Summary: https://github.com/MLNLP-World/Paper_Writing_Tips
Read full article
Summary: #!/usr/bin/env python # coding=utf-8 from transformers import GPT2LMHeadModel, GPT2Tokenizer; import torch; # Initialize the GPT-2 tokenizer class. toke
Read full article
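This summary sets up `GPT2LMHeadModel` and its tokenizer for autoregressive generation: repeatedly predict the most likely next token and append it to the sequence. The generation loop can be sketched with a hypothetical bigram table standing in for the transformer's next-token distribution:

```python
# Hypothetical bigram table standing in for GPT-2's learned
# next-token distribution.
BIGRAM = {
    "once": "upon",
    "upon": "a",
    "a": "time",
}

def generate(prompt, max_new_tokens=5):
    """Greedy autoregressive generation: keep appending the most
    likely next token until no continuation is known or the
    new-token budget runs out."""
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        nxt = BIGRAM.get(tokens[-1])
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("once"))  # -> "once upon a time"
```

A real GPT-2 conditions on the entire prefix rather than just the last token, and typically samples from the distribution instead of always taking the greedy choice, but the append-and-repeat loop is the same.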
