PydanticAI --- Agent Framework / shim to use Pydantic with LLMs
PydanticAI
https://ai.pydantic.dev/
PydanticAI is a Python agent framework designed to make it less painful to build production-grade applications with Generative AI.
FastAPI revolutionized web development by offering an innovative and ergonomic design, built on the foundation of Pydantic.
Similarly, virtually every agent framework and LLM library in Python uses Pydantic, yet when we began to use LLMs in Pydantic Logfire, we couldn't find anything that gave us the same feeling.
We built PydanticAI with one simple aim: to bring that FastAPI feeling to GenAI app development.
https://github.com/pydantic/pydantic-ai?tab=readme-ov-file
from dataclasses import dataclass

from pydantic import BaseModel, Field
from pydantic_ai import Agent, RunContext

from bank_database import DatabaseConn


# SupportDependencies is used to pass data, connections, and logic into the model that will be needed when running
# system prompt and tool functions. Dependency injection provides a type-safe way to customise the behaviour of your agents.
@dataclass
class SupportDependencies:
    customer_id: int
    db: DatabaseConn


# This Pydantic model defines the structure of the result returned by the agent.
class SupportResult(BaseModel):
    support_advice: str = Field(description='Advice returned to the customer')
    block_card: bool = Field(description="Whether to block the customer's card")
    risk: int = Field(description='Risk level of query', ge=0, le=10)


# This agent will act as first-tier support in a bank.
# Agents are generic in the type of dependencies they accept and the type of result they return.
# In this case, the support agent has type `Agent[SupportDependencies, SupportResult]`.
support_agent = Agent(
    'openai:gpt-4o',
    deps_type=SupportDependencies,
    # The response from the agent is guaranteed to be a SupportResult;
    # if validation fails, the agent is prompted to try again.
    result_type=SupportResult,
    system_prompt=(
        'You are a support agent in our bank, give the '
        'customer support and judge the risk level of their query.'
    ),
)


# Dynamic system prompts can make use of dependency injection.
# Dependencies are carried via the `RunContext` argument, which is parameterized with the `deps_type` from above.
# If the type annotation here is wrong, static type checkers will catch it.
@support_agent.system_prompt
async def add_customer_name(ctx: RunContext[SupportDependencies]) -> str:
    customer_name = await ctx.deps.db.customer_name(id=ctx.deps.customer_id)
    return f"The customer's name is {customer_name!r}"


# `tool` lets you register functions which the LLM may call while responding to a user.
# Again, dependencies are carried via `RunContext`; any other arguments become the tool schema passed to the LLM.
# Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
@support_agent.tool
async def customer_balance(
    ctx: RunContext[SupportDependencies], include_pending: bool
) -> float:
    """Returns the customer's current account balance."""
    # The docstring of a tool is also passed to the LLM as the description of the tool.
    # Parameter descriptions are extracted from the docstring and added to the parameter schema sent to the LLM.
    balance = await ctx.deps.db.customer_balance(
        id=ctx.deps.customer_id,
        include_pending=include_pending,
    )
    return balance


...  # In a real use case, you'd add more tools and a longer system prompt


async def main():
    deps = SupportDependencies(customer_id=123, db=DatabaseConn())
    # Run the agent asynchronously, conducting a conversation with the LLM until a final response is reached.
    # Even in this fairly simple case, the agent will exchange multiple messages with the LLM as tools are called to retrieve a result.
    result = await support_agent.run('What is my balance?', deps=deps)
    # The result will be validated with Pydantic to guarantee it is a `SupportResult`; since the agent is generic,
    # it'll also be typed as a `SupportResult` to aid with static type checking.
    print(result.data)
    """
    support_advice='Hello John, your current account balance, including pending transactions, is $123.45.' block_card=False risk=1
    """

    result = await support_agent.run('I just lost my card!', deps=deps)
    print(result.data)
    """
    support_advice="I'm sorry to hear that, John. We are temporarily blocking your card to prevent unauthorized transactions." block_card=True risk=8
    """
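The example above only defines an async main(); it still needs an event loop to run. A minimal sketch of an entry point, assuming the module is saved as written (including the hypothetical bank_database dependency):

import asyncio

if __name__ == '__main__':
    # Drive the async conversation defined in main() above.
    asyncio.run(main())

For synchronous callers, agents also expose run_sync(), which manages the event loop internally, e.g. support_agent.run_sync('What is my balance?', deps=deps).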
https://www.infoq.com/news/2024/12/pydanticai-framework-gen-ai/
The team behind Pydantic, widely used for data validation in Python, has announced the release of PydanticAI, a Python-based agent framework designed to ease the development of production-ready Generative AI applications. Positioned as a potential competitor to LangChain, PydanticAI introduces a type-safe, model-agnostic approach inspired by the design principles of FastAPI.
PydanticAI stands out with a range of features designed to simplify and enhance Generative AI application development:
- Model-Agnostic: Supports multiple model providers such as OpenAI, Anthropic, Gemini, Ollama, Groq, and Mistral, with an extensible interface for adding new models (a minimal sketch illustrating this appears after the list).
- Type-Safe Framework: Ensures robustness through structured response validation using Pydantic, even for streamed responses.
- Pythonic Design: Offers control flow and agent composition in pure Python, aligning with established development practices.
- Dependency Injection System: Provides a novel, type-safe dependency injection mechanism to support testing and iterative development.
- Logfire Integration: Enables real-time debugging and monitoring of LLM application behavior and performance.
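To make the model-agnostic and type-safe points concrete, here is a minimal sketch using the same Agent, result_type, and run_sync APIs shown in the banking example. The CityInfo model, the prompt, and the commented-out Anthropic model string are illustrative assumptions, not part of the announcement:

from pydantic import BaseModel
from pydantic_ai import Agent

# Illustrative result model; any Pydantic model can constrain the agent's output.
class CityInfo(BaseModel):
    city: str
    country: str

# The model is selected by a provider-prefixed string; switching providers is a one-line change.
agent = Agent('openai:gpt-4o', result_type=CityInfo)
# agent = Agent('anthropic:claude-3-5-sonnet-latest', result_type=CityInfo)  # same code, different provider

result = agent.run_sync('Which city hosted the 2012 Summer Olympics?')
print(result.data)        # a validated CityInfo instance, e.g. city='London' country='United Kingdom'
print(type(result.data))  # <class '__main__.CityInfo'>

Because the output is validated against CityInfo, a malformed response triggers a retry prompt rather than silently propagating bad data, and static type checkers see result.data as CityInfo rather than Any.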
