April 2022 Archive

Abstract: https://blog.csdn.net/marmove/rss/list?spm=1001.2014.3001.5494 Link: https://pan.baidu.com/s/1H2mxeRMui1k1VySI80EZaQ Extraction code: yaij Read more
posted @ 2022-04-27 15:06 zxcayumi Views(101) Comments(0) Recommended(0)
Abstract: 1) Augmenter WordNet (https://github.com/QData/TextAttack) 2) Augmenter Contextual (https://github.com/makcedward/nlpaug) 3) Paraphrase via back translati Read more
posted @ 2022-04-26 23:09 zxcayumi Views(83) Comments(0) Recommended(0)
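The augmentation approaches listed above can be illustrated with a minimal WordNet-style synonym-replacement sketch. The `SYNONYMS` table and `augment` function below are hypothetical stand-ins for illustration only; they are not the TextAttack or nlpaug API, which consult real lexical resources or pretrained language models:

```python
import random

# Hypothetical synonym table standing in for a WordNet lookup;
# real augmenters draw candidates from WordNet or a contextual model.
SYNONYMS = {
    "quick": ["fast", "rapid"],
    "happy": ["glad", "joyful"],
}

def augment(sentence, rng=None):
    """Replace each word that has an entry in SYNONYMS with a random alternative."""
    rng = rng or random.Random(0)
    out = []
    for word in sentence.split():
        if word in SYNONYMS:
            out.append(rng.choice(SYNONYMS[word]))
        else:
            out.append(word)
    return " ".join(out)

print(augment("the quick fox is happy"))
```

Back-translation (approach 3) follows the same substitution idea at sentence level: translate to a pivot language and back, keeping the round-trip as a paraphrase.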
Abstract: Google's BERT pre-trained models: BERT-Large, Uncased (Whole Word Masking): 24-layer, 1024-hidden, 16-heads, 340M parameters BERT-Large, Cased (Whole Word Masking): 2 Read more
posted @ 2022-04-26 18:20 zxcayumi Views(1816) Comments(0) Recommended(0)
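The ~340M figure quoted for BERT-Large follows from the listed configuration (24 layers, 1024 hidden, 16 heads). A rough sketch of the count, assuming the standard BERT vocabulary size of 30522 and the usual 4x feed-forward expansion:

```python
def bert_param_count(layers=24, hidden=1024, vocab=30522,
                     max_pos=512, type_vocab=2, ffn_mult=4):
    """Approximate parameter count for a BERT-style encoder."""
    # Token, position, and segment embeddings, plus the embedding LayerNorm.
    emb = (vocab + max_pos + type_vocab) * hidden + 2 * hidden
    # Per layer: Q, K, V, and attention output projections (weight + bias each).
    attn = 4 * (hidden * hidden + hidden)
    # Feed-forward: hidden -> ffn_mult*hidden -> hidden (weights + biases).
    ffn = (hidden * ffn_mult * hidden + ffn_mult * hidden
           + ffn_mult * hidden * hidden + hidden)
    # Two LayerNorms per layer (gain + bias each).
    norms = 2 * 2 * hidden
    # Pooler: one dense layer applied to the [CLS] position.
    pooler = hidden * hidden + hidden
    return emb + layers * (attn + ffn + norms) + pooler

print(bert_param_count() / 1e6)  # about 335M, in line with the quoted "340M"
```

The head count does not appear in the formula: the 16 heads partition the same 1024-dimensional projections, so they do not add parameters.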
Abstract: import torch from transformers import BertTokenizer, BertModel, BertForMaskedLM from transformers import logging logging.set_verbosity_error() Read more
posted @ 2022-04-24 20:25 zxcayumi Views(254) Comments(0) Recommended(0)
Abstract: https://github.com/MLNLP-World/Paper_Writing_Tips Read more
posted @ 2022-04-22 20:00 zxcayumi Views(30) Comments(0) Recommended(1)
Abstract: #!/usr/bin/env Python # coding=utf-8 from transformers import GPT2LMHeadModel, GPT2Tokenizer import torch # Initialize the Tokenizer class for the GPT-2 model. toke Read more
posted @ 2022-04-13 18:37 zxcayumi Views(144) Comments(0) Recommended(0)
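The GPT-2 snippet above is truncated, but the autoregressive generation loop it builds toward can be sketched without downloading any weights. `BIGRAMS` and `toy_next_token` below are made-up stand-ins for illustration; real code would call `GPT2LMHeadModel` to score the next token:

```python
# Toy stand-in for a language model: a hard-coded bigram table that
# maps the last token of the context to the "most likely" next token.
BIGRAMS = {
    "the": "cat",
    "cat": "sat",
    "sat": "down",
}

def toy_next_token(context):
    return BIGRAMS.get(context[-1], "<eos>")

def greedy_generate(prompt, max_new_tokens=5):
    """Greedy decoding: repeatedly append the single most likely next token."""
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        nxt = toy_next_token(tokens)
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(greedy_generate("the"))  # the cat sat down
```

With a real GPT-2, the loop is the same shape: encode the context, take the argmax (or a sample) over the model's logits for the final position, append, and repeat.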