langchain serialization

https://zhuanlan.zhihu.com/p/640693519

Serialization

This notebook covers how to serialize chains to and from disk. The serialization format we use is JSON or YAML. Currently, only some chains support this kind of serialization; support for more chains will be added over time.

Saving a chain to disk

First, let's go over how to save a chain to disk. This can be done with the .save method, specifying a file path with a json or yaml extension.

from langchain import PromptTemplate, OpenAI, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), verbose=True)

llm_chain.save("llm_chain.json")

cat llm_chain.json

    {
        "memory": null,
        "verbose": true,
        "prompt": {
            "input_variables": [
                "question"
            ],
            "output_parser": null,
            "template": "Question: {question}\n\nAnswer: Let's think step by step.",
            "template_format": "f-string"
        },
        "llm": {
            "model_name": "text-davinci-003",
            "temperature": 0.0,
            "max_tokens": 256,
            "top_p": 1,
            "frequency_penalty": 0,
            "presence_penalty": 0,
            "n": 1,
            "best_of": 1,
            "request_timeout": null,
            "logit_bias": {},
            "_type": "openai"
        },
        "output_key": "text",
        "_type": "llm_chain"
    }
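
The introduction mentions that YAML is supported as well. Saving to YAML should just be a matter of using a .yaml extension; a minimal sketch, assuming .save picks the output format from the file extension:

# reuses the llm_chain defined above; the format is assumed to follow the extension
llm_chain.save("llm_chain.yaml")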

Loading a chain from disk

A chain can be loaded back from disk with the load_chain function.

from langchain.chains import load_chain

chain = load_chain("llm_chain.json")

chain.run("whats 2 + 2")

    
    
    > Entering new LLMChain chain...
    Prompt after formatting:
    Question: whats 2 + 2
    
    Answer: Let's think step by step.
    
    > Finished chain.





    ' 2 + 2 = 4'
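
If the chain was saved as YAML instead (see the sketch above), load_chain should be able to read it back the same way; this assumes loading, like saving, is keyed off the file extension:

# assumes load_chain also dispatches on the .yaml extension
chain = load_chain("llm_chain.yaml")

chain.run("whats 2 + 2")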

Saving components separately

In the example above, the prompt and the llm configuration are saved in the same json as the overall chain. Alternatively, we can split them up and save them separately, which is often useful for making the saved components more modular. To do so, we simply specify llm_path instead of the llm component, and prompt_path instead of the prompt component.

llm_chain.prompt.save("prompt.json")

cat prompt.json

    {
        "input_variables": [
            "question"
        ],
        "output_parser": null,
        "template": "Question: {question}\n\nAnswer: Let's think step by step.",
        "template_format": "f-string"
    }

llm_chain.llm.save("llm.json")

cat llm.json

    {
        "model_name": "text-davinci-003",
        "temperature": 0.0,
        "max_tokens": 256,
        "top_p": 1,
        "frequency_penalty": 0,
        "presence_penalty": 0,
        "n": 1,
        "best_of": 1,
        "request_timeout": null,
        "logit_bias": {},
        "_type": "openai"
    }

import json

# Reference the separately saved prompt and llm by path
# instead of embedding their configuration in the chain config
config = {
    "memory": None,
    "verbose": True,
    "prompt_path": "prompt.json",
    "llm_path": "llm.json",
    "output_key": "text",
    "_type": "llm_chain",
}

with open("llm_chain_separate.json", "w") as f:
    json.dump(config, f, indent=2)

cat llm_chain_separate.json

    {
      "memory": null,
      "verbose": true,
      "prompt_path": "prompt.json",
      "llm_path": "llm.json",
      "output_key": "text",
      "_type": "llm_chain"
    }
    
chain = load_chain("llm_chain_separate.json")

chain.run("whats 2 + 2")

    
    
    > Entering new LLMChain chain...
    Prompt after formatting:
    Question: whats 2 + 2
    
    Answer: Let's think step by step.
    
    > Finished chain.





    ' 2 + 2 = 4'
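
Since prompt.json and llm.json are standalone files, they can presumably also be loaded back individually rather than only through load_chain; a minimal sketch, assuming the load_prompt and load_llm helpers available in this version of langchain:

# hypothetical stand-alone round-trip of the two components saved above
from langchain.prompts import load_prompt
from langchain.llms.loading import load_llm

prompt = load_prompt("prompt.json")
llm = load_llm("llm.json")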
