Overview
LangChain's streaming system lets you surface live feedback from agent runs to your application. What's possible with LangChain streaming:
- Stream agent progress: get state updates after each agent step.
- Stream LLM tokens: stream language model tokens as they're generated.
- Stream custom updates: emit user-defined signals (e.g., "Fetched 10/100 records").
- Stream multiple modes: choose from `updates` (agent progress), `messages` (LLM tokens + metadata), or `custom` (arbitrary user data).
Supported stream modes
Pass one or more of the following stream modes as a list to the `stream` or `astream` methods:
| Mode | Description |
|---|---|
| `updates` | Streams state updates after each agent step. If multiple updates are made in the same step (e.g., multiple nodes are run), those updates are streamed separately. |
| `messages` | Streams tuples of `(token, metadata)` from any graph nodes where an LLM is invoked. |
| `custom` | Streams custom data from inside your graph nodes using the stream writer. |
Agent progress
To stream agent progress, use the `stream` or `astream` methods with `stream_mode="updates"`. This emits an event after every agent step.
For example, if you have an agent that calls a tool once, you should see the following updates:
- LLM node: `AIMessage` with tool call requests
- Tool node: `ToolMessage` with execution result
- LLM node: Final AI response
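A minimal sketch of streaming agent progress. The `get_weather` tool and the `"openai:gpt-4o"` model string are illustrative stand-ins; swap in your own tools and provider:

```python
from langchain.agents import create_agent

def get_weather(city: str) -> str:
    """Return a canned weather report for a city (toy example)."""
    return f"It's always sunny in {city}!"

agent = create_agent(model="openai:gpt-4o", tools=[get_weather])

# Each iteration yields one state update per agent step:
# the LLM node's AIMessage with a tool call, the tool node's
# ToolMessage, then the LLM node's final response.
for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "What's the weather in SF?"}]},
    stream_mode="updates",
):
    print(chunk)
```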
LLM tokens
To stream tokens as they are produced by the LLM, use `stream_mode="messages"`. The example below streams the agent's tool-call tokens and its final response.
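A sketch under the same assumptions as above, reusing the `agent` from the previous example:

```python
# Each item is a (token, metadata) tuple. The metadata dict includes
# details such as which graph node produced the token.
for token, metadata in agent.stream(
    {"messages": [{"role": "user", "content": "What's the weather in SF?"}]},
    stream_mode="messages",
):
    print(f"{metadata['langgraph_node']}: {token.content!r}")
```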
Custom updates
To stream updates from tools as they are executed, you can use `get_stream_writer`.
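A sketch assuming `get_stream_writer` is imported from `langgraph.config`, again with the illustrative `get_weather` tool:

```python
from langchain.agents import create_agent
from langgraph.config import get_stream_writer

def get_weather(city: str) -> str:
    """Return a canned weather report, emitting progress along the way."""
    writer = get_stream_writer()
    writer(f"Looking up data for city: {city}")  # custom update
    return f"It's always sunny in {city}!"

agent = create_agent(model="openai:gpt-4o", tools=[get_weather])

for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "What's the weather in SF?"}]},
    stream_mode="custom",
):
    print(chunk)  # e.g. "Looking up data for city: SF"
```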
If you add `get_stream_writer` inside your tool, you won't be able to invoke the tool outside of a LangGraph execution context.
Stream multiple modes
You can specify multiple streaming modes by passing stream mode as a list: `stream_mode=["updates", "custom"]`.
The streamed outputs will be tuples of `(mode, chunk)`, where `mode` is the name of the stream mode and `chunk` is the data streamed by that mode.
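Continuing the same sketch, with both modes requested at once:

```python
# Each item is a (mode, chunk) tuple, so you can dispatch on the mode.
for mode, chunk in agent.stream(
    {"messages": [{"role": "user", "content": "What's the weather in SF?"}]},
    stream_mode=["updates", "custom"],
):
    print(f"[{mode}] {chunk}")
```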
Disable streaming
In some applications you might need to disable streaming of individual tokens for a given model. This is useful when:
- Working with multi-agent systems to control which agents stream their output
- Mixing models that support streaming with those that do not
- Deploying to LangSmith and wanting to prevent certain model outputs from being streamed to the client
To disable streaming, set `streaming=False` when initializing the model.
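A sketch assuming the `langchain_openai` integration; other providers' chat models are configured the same way:

```python
from langchain_openai import ChatOpenAI

# Tokens from this model will not be streamed one by one;
# callers receive the complete response instead.
model = ChatOpenAI(model="gpt-4o", streaming=False)
```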
Not all chat model integrations support the `streaming` parameter. If your model doesn't support it, use `disable_streaming=True` instead. This parameter is available on all chat models via the base class.
Related
- Streaming with chat models: Stream tokens directly from a chat model without using an agent or graph
- Streaming with human-in-the-loop: Stream agent progress while handling interrupts for human review
- LangGraph streaming: Advanced streaming options, including `values` and `debug` modes, and subgraph streaming