HuggingFace

Pipeline  

The pipeline module wraps everything up: you just pass in the raw input and it returns the final output.

Example: masked-word filling. You can see that the pipeline function returns 5 candidate answers:

from transformers import pipeline

# "fill-mask" loads the default masked-language-model checkpoint for this task
# (a RoBERTa-style model, hence the <mask> token in the prompt)
unmasker = pipeline("fill-mask")
y_pred = unmasker("I love <mask> very much.")

print(y_pred)
"""
[
    {'score': 0.09382506459951401, 'token': 123, 'token_str': ' him', 'sequence': 'I love him very much.'}, 
    {'score': 0.06408175826072693, 'token': 47, 'token_str': ' you', 'sequence': 'I love you very much.'}, 
    {'score': 0.056255027651786804, 'token': 69, 'token_str': ' her', 'sequence': 'I love her very much.'}, 
    {'score': 0.017606642097234726, 'token': 106, 'token_str': ' them', 'sequence': 'I love them very much.'}, 
    {'score': 0.016162296757102013, 'token': 24, 'token_str': ' it', 'sequence': 'I love it very much.'}
] 
"""

  

 

Tokenizer  

The tokenizer is the preprocessor: it splits the input text into tokens, and may break a single word into pieces (e.g., dogs split into dog + s).

In general, the tokenizer's output has to match the model downstream (naturally, different models use different splitting schemes), so you load the tokenizer from the same checkpoint as the model.

The result usually has two fields, input_ids and attention_mask: input_ids are the vocabulary IDs of the split tokens, and a 0 in attention_mask marks a position with no real content (a padded position).

from transformers import AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

raw_inputs = [
    "I've been waiting this for a lifetime!",
    "I love Tom Brady."
]

# padding=True pads the shorter sentence up to the length of the longest one
tokenized_inputs = tokenizer(raw_inputs, padding=True)
print(tokenized_inputs)


"""
{
    'input_ids': 
        [
            [101, 1045, 1005, 2310, 2042, 3403, 2023, 2005, 1037, 6480, 999, 102], 
            [101, 1045, 2293, 3419, 10184, 1012, 102, 0, 0, 0, 0, 0]
        ], 
    'attention_mask': 
        [
            [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 
            [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
        ]
    }

"""

  

Model   

Once a sentence has been preprocessed by the tokenizer, it can be fed to the model (the actual large model).

The model's output is raw vectors that have not been normalized or passed through an activation function, so to get the final result you still have to post-process them yourself.

from transformers import AutoTokenizer
from transformers import AutoModel

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

raw_inputs = [
    "I've been waiting this for a lifetime!",
    "I love Tom Brady."
]

# return_tensors="pt" returns PyTorch tensors instead of plain Python lists
tokenized_inputs = tokenizer(raw_inputs, padding=True, return_tensors="pt")

# AutoModel loads the bare transformer body, with no task-specific head
model = AutoModel.from_pretrained(checkpoint)
outputs = model(**tokenized_inputs)

print(outputs.last_hidden_state.shape)
# torch.Size([2, 12, 768])  ->  (batch_size, sequence_length, hidden_size)
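
The last_hidden_state is not a prediction by itself. One common way to use it, shown here purely as an illustrative sketch (not part of the original workflow), is to mean-pool the token vectors into one fixed-size embedding per sentence, ignoring padded positions:

# average the token vectors, masking out padding, to get one vector per sentence
mask = tokenized_inputs["attention_mask"].unsqueeze(-1)   # (2, 12, 1)
summed = (outputs.last_hidden_state * mask).sum(dim=1)    # (2, 768)
sentence_embeddings = summed / mask.sum(dim=1)            # (2, 768)
print(sentence_embeddings.shape)  # torch.Size([2, 768])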

For example, this checkpoint was fine-tuned for sentence classification (SST-2 sentiment), so you can load it with a sequence-classification head and write the softmax yourself:

from transformers import AutoTokenizer
from transformers import AutoModelForSequenceClassification
import torch

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

raw_inputs = [
    "I've been waiting this for a lifetime!",
    "I love Tom Brady."
]

tokenized_inputs = tokenizer(raw_inputs, padding=True, return_tensors="pt")

# *ForSequenceClassification adds a classification head on top of the body,
# so outputs.logits holds one raw score per class for each sentence
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
outputs = model(**tokenized_inputs)
print(outputs.logits)

# softmax over the class dimension turns the raw logits into probabilities
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
print(predictions)
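
To turn those probabilities into readable labels, the model config carries the index-to-label mapping. A short follow-up, assuming the model and predictions from the block above:

# model.config.id2label maps class indices to human-readable label names
print(model.config.id2label)   # e.g. {0: 'NEGATIVE', 1: 'POSITIVE'}
for probs in predictions:
    pred_id = int(probs.argmax())
    print(model.config.id2label[pred_id], float(probs[pred_id]))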

  

 
