Class representing a plan-and-execute agent executor. This agent decides on the full sequence of actions upfront, then executes them all without updating the plan. This is suitable for complex or long-running tasks that require maintaining long-term objectives and focus.

Optional memory

apply
Call the chain on all inputs in the list.
Deprecated: Use .batch() instead. Will be removed in 0.2.0. This feature is not recommended for use.
Optional config: any[]

call
Run the core logic of this chain and add to output if desired. Wraps _call and handles memory.
Deprecated: Use .invoke() instead. Will be removed in 0.2.0.
Optional config: any
Optional tags: string[]

invoke
Invoke the chain with the provided input and return the output.
input: Input values for the chain run.
Optional config: any, an optional configuration for the Runnable.
Returns a Promise that resolves with the output of the chain run.
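
For illustration, a minimal sketch of a single invoke call, assuming `executor` is an already-constructed PlanAndExecuteAgentExecutor (see fromLLMAndTools below) and that the chain uses the default `input`/`output` keys:

```typescript
// Run one full plan-then-execute cycle for a single request.
// `executor` is assumed to be a constructed PlanAndExecuteAgentExecutor.
const result = await executor.invoke(
  { input: "What is 2 to the 10th power, divided by 8?" },
  { tags: ["plan-and-execute-demo"] } // optional Runnable configuration
);
console.log(result.output); // assumes the default "output" key
```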

serialize
Return a JSON-like object representing this chain.

Static deserialize
Load a chain from a JSON-like object describing it.

Static fromLLMAndTools
Static method that creates a new PlanAndExecuteAgentExecutor from a given LLM, a set of tools, and optionally a human message template. It uses the getDefaultPlanner and getDefaultStepExecutor methods to create the planner and step executor for the new agent executor.
Returns a new PlanAndExecuteAgentExecutor instance.
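
As a hedged construction sketch (the ChatOpenAI model, Calculator tool, and import paths are assumptions that vary by LangChain version):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { Calculator } from "@langchain/community/tools/calculator";
import { PlanAndExecuteAgentExecutor } from "langchain/experimental/plan_and_execute";

// Create the executor from an LLM and a set of tools; the default planner and
// step executor are wired up internally via getDefaultPlanner and getDefaultStepExecutor.
const llm = new ChatOpenAI({ temperature: 0 });
const tools = [new Calculator()];
const executor = PlanAndExecuteAgentExecutor.fromLLMAndTools({ llm, tools });
```

Per the description above, a human message template can optionally be supplied as well.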

Static getDefaultPlanner
Static method that returns a default planner for the agent. It creates a new LLMChain with a given LLM and a fixed prompt, and uses it to create a new LLMPlanner with a PlanOutputParser.
llm: The Large Language Model (LLM) used to generate responses.
Returns a new LLMPlanner instance.
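
For illustration only, a sketch of building the default planner by itself; the object-style { llm } argument and the import paths are assumptions:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { PlanAndExecuteAgentExecutor } from "langchain/experimental/plan_and_execute";

// Build only the planning half: an LLMPlanner whose underlying LLMChain uses
// the default planning prompt and whose output is parsed by a PlanOutputParser.
const llm = new ChatOpenAI({ temperature: 0 });
const planner = PlanAndExecuteAgentExecutor.getDefaultPlanner({ llm });
```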

Static getDefaultStepExecutor
Static method that returns a default step executor for the agent. It creates a new ChatAgent from a given LLM and a set of tools, and uses it to create a new ChainStepExecutor.
Optional humanMessageTemplate
Returns a new ChainStepExecutor instance.
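
Similarly, a hedged sketch of building the default step executor and wiring both halves together by hand; the argument shapes and the constructor options shape are assumptions based on the descriptions above:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { Calculator } from "@langchain/community/tools/calculator";
import { PlanAndExecuteAgentExecutor } from "langchain/experimental/plan_and_execute";

const llm = new ChatOpenAI({ temperature: 0 });
const tools = [new Calculator()];

// Execution half: a ChatAgent over the tools, wrapped in a ChainStepExecutor.
// humanMessageTemplate is optional and omitted here.
const stepExecutor = PlanAndExecuteAgentExecutor.getDefaultStepExecutor({ llm, tools });

// Wiring the planner and step executor explicitly is roughly what
// fromLLMAndTools does internally (constructor shape assumed).
const executor = new PlanAndExecuteAgentExecutor({
  planner: PlanAndExecuteAgentExecutor.getDefaultPlanner({ llm }),
  stepExecutor,
});
```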