LangChain.js v1.0 Streaming Guide
Overview
In LangChain.js v1.0, streaming has changed significantly from v0.x. This guide shows you how to properly implement streaming in the latest version.
Key Changes from v0.x to v1.0
❌ What NO LONGER works in v1.0:
// This won't work in v1.0: stream() returns a promise of an async
// iterable, not an EventEmitter, so there is no .on("data") method
const res = await chain.stream({ question: "What is AI?" });
res.on("data", (chunk) => {
  process.stdout.write(chunk.content);
});
// This also won't work: the stream promise is never awaited, and each
// chunk is a message chunk object, not a plain string
for await (const chunk of chain.stream({ question: "What is AI?" })) {
  process.stdout.write(chunk);
}
✅ What works in v1.0:
Method 1: Direct Model Streaming (Recommended)
import { ChatOllama } from "@langchain/ollama";

const model = new ChatOllama({
  model: "llama3",
  temperature: 0.7,
});

// Stream directly from the model; stream() resolves to an async iterable
for await (const chunk of await model.stream([
  { role: "user", content: "Please explain: what is AI?" },
])) {
  process.stdout.write(chunk.content);
}
Method 2: Using streamEvents for Chains
import { ChatOllama } from "@langchain/ollama";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const pt = PromptTemplate.fromTemplate("Please explain: {question}");
const model = new ChatOllama({ model: "llama3", temperature: 0.7 });
const parser = new StringOutputParser();
const chain = pt.pipe(model).pipe(parser);

// Use streamEvents for chain streaming; filter for chat-model token events
for await (const event of await chain.streamEvents(
  { question: "What is AI?" },
  { version: "v2" }
)) {
  if (event.event === "on_chat_model_stream") {
    process.stdout.write(event.data?.chunk?.content || "");
  }
}
Method 3: Collecting Chunks
// Collect all chunks into a single response
const chunks = [];
for await (const chunk of await model.stream([
  { role: "user", content: "Please explain: what is AI?" },
])) {
  chunks.push(chunk.content);
}
const fullResponse = chunks.join("");
console.log("Full response:", fullResponse);
Method 4: Error Handling
try {
  for await (const chunk of await model.stream([
    { role: "user", content: "Please explain: what is AI?" },
  ])) {
    process.stdout.write(chunk.content);
  }
} catch (error) {
  console.error("Streaming error:", error);
}
Method 5: Custom Processing
let wordCount = 0;
for await (const chunk of await model.stream([
  { role: "user", content: "Please explain: what is AI?" },
])) {
  process.stdout.write(chunk.content);
  // Rough count: a word split across two chunks is counted twice
  wordCount += chunk.content.split(" ").length;
}
console.log(`Word count: ${wordCount}`);
Key Differences
- Input Format: Use message arrays `[{ role: "user", content: "..." }]` instead of the object format
- Method: Use model.stream() instead of chain.stream()
- Content Access: Access content via chunk.content instead of chunk
- Chain Streaming: Use streamEvents() for chains instead of stream()
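The v1.0 iteration pattern above can be illustrated without a running model. In this sketch, `fakeStream` is a hypothetical stand-in for the async iterable that `model.stream()` resolves to, yielding chunk objects with a `content` field:

```javascript
// Hypothetical stand-in for model.stream(): an async generator
// yielding AIMessageChunk-like objects with a `content` field.
async function* fakeStream() {
  for (const text of ["Hello", ", ", "world"]) {
    yield { content: text };
  }
}

async function main() {
  let output = "";
  // v1.0 pattern: iterate the stream and read chunk.content (not chunk itself)
  for await (const chunk of fakeStream()) {
    output += chunk.content;
  }
  console.log(output); // "Hello, world"
}

main();
```

Writing `chunk` instead of `chunk.content` here would print objects, not text, which is the core pitfall the list above describes.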
Best Practices
- Use direct model streaming for simple cases
- Use streamEvents when you need to stream through chains
- Always handle errors in streaming operations
- Use for await...of loops for clean asynchronous iteration
- Access chunk.content to get the actual text content
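The practices above can be combined into one small helper. This is a minimal sketch; `collectStream` is a hypothetical name, and it accepts any async iterable of chunks, such as the stream that `await model.stream(...)` yields (a mock generator stands in for the model here):

```javascript
// Hypothetical helper: drain an async iterable of chunks into one string,
// surfacing any mid-stream error to the caller.
async function collectStream(stream) {
  const parts = [];
  try {
    for await (const chunk of stream) {
      parts.push(chunk.content);
    }
  } catch (error) {
    console.error("Streaming error:", error);
    throw error;
  }
  return parts.join("");
}

// Usage with a mock stream standing in for `await model.stream(...)`:
async function* mockChunks() {
  yield { content: "AI is " };
  yield { content: "artificial intelligence." };
}

collectStream(mockChunks()).then((text) => console.log(text));
// prints "AI is artificial intelligence."
```

Centralizing the loop this way keeps error handling in one place instead of repeating the try/catch around every call site.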
Examples in this directory:
- index2.js - Basic streaming example
- streaming-examples.js - Comprehensive examples
- streaming-comparison.js - Correct vs incorrect patterns
- chain-streaming.js - Chain streaming with streamEvents
