Chains
Chains refer to sequences of calls, whether to an LLM, a tool, or a data preprocessing step. The primary supported way to build them is with LCEL.
LCEL is great for constructing your own chains, but it's also nice to have chains that you can use off-the-shelf. There are two types of off-the-shelf chains that LangChain supports:
1. Chains that are built with LCEL. In this case, LangChain offers a higher-level constructor method. However, all that is being done under the hood is constructing a chain with LCEL.
2. [Legacy] Chains constructed by subclassing from a legacy Chain class. These chains do not use LCEL under the hood but are rather standalone classes.
We are working on creating methods that create LCEL versions of all chains. We are doing this for a few reasons:

1. Chains constructed this way are nice because if you want to modify the internals of a chain you can simply modify the LCEL.
2. These chains natively support streaming, async, and batch out of the box.
3. These chains automatically get observability at each step.
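For illustration, here is a minimal LCEL sketch (assuming the langchain-openai package and an OPENAI_API_KEY; any chat model could be substituted). Because the composed pipeline is a runnable, invoke, stream, and batch all work without extra code:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Composing runnables with | yields a single chain object.
prompt = ChatPromptTemplate.from_template("Tell me a short fact about {topic}.")
chain = prompt | ChatOpenAI() | StrOutputParser()

chain.invoke({"topic": "whales"})                       # single call
chain.batch([{"topic": "whales"}, {"topic": "bees"}])   # batched in parallel
for token in chain.stream({"topic": "whales"}):         # token-by-token streaming
    print(token, end="")
```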
This page contains two tables: first, all LCEL chain constructors; second, all legacy Chains.
LCEL Chains
Below is a table of all LCEL chain constructors. For each one, we report on:

- Chain Constructor: the constructor function for this chain. These are all functions that return LCEL runnables. We also link to the API documentation.
- Function Calling: whether the chain requires OpenAI function calling.
- Other Tools: what other tools (if any) are used in the chain.
- When to Use: our commentary on when to use this chain.
Chain Constructor | Function Calling | Other Tools | When to Use |
---|---|---|---|
create_stuff_documents_chain | | | This chain takes a list of documents and formats them all into a prompt, then passes that prompt to an LLM. It passes ALL documents, so you should make sure they fit within the context window of the LLM you are using. |
create_openai_fn_runnable | ✅ | | If you want to use OpenAI function calling to OPTIONALLY structure an output response. You may pass in multiple functions for it to call, but it does not have to call any of them. |
create_structured_output_runnable | ✅ | | If you want to use OpenAI function calling to FORCE the LLM to respond with a certain function. You may only pass in one function, and the chain will ALWAYS return a response in this format. |
load_query_constructor_runnable | | | Can be used to generate queries. You must specify a list of allowed operations, and it will return a runnable that converts a natural language query into those allowed operations. |
create_sql_query_chain | | SQL Database | If you want to construct a query for a SQL database from natural language. |
create_history_aware_retriever | | Retriever | This chain takes in conversation history and then uses that to generate a search query, which is passed to the underlying retriever. |
create_retrieval_chain | | Retriever | This chain takes in a user inquiry, which is then passed to the retriever to fetch relevant documents. Those documents (and the original inputs) are then passed to an LLM to generate a response. |
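As a hedged sketch of how two of these constructors compose (assuming an OpenAI chat model and a hypothetical, already-populated vector store named `vectorstore`), create_stuff_documents_chain builds the document-formatting step and create_retrieval_chain wires a retriever in front of it:

```python
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# The prompt must accept a {context} variable for the formatted documents.
prompt = ChatPromptTemplate.from_template(
    "Answer based only on the context below.\n\n"
    "Context:\n{context}\n\nQuestion: {input}"
)
combine_docs_chain = create_stuff_documents_chain(ChatOpenAI(), prompt)

# `vectorstore` is a placeholder for any vector store you have populated.
retrieval_chain = create_retrieval_chain(
    vectorstore.as_retriever(), combine_docs_chain
)

result = retrieval_chain.invoke({"input": "What is LCEL?"})
print(result["answer"])  # the result dict also carries "input" and "context"
```

Since both constructors return LCEL runnables, the resulting chain streams and batches like any other runnable.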
Legacy Chains
Below we report on the legacy chain types that exist. We will maintain support for these until we are able to create an LCEL alternative. We report on:

- Chain: the name of the chain, or the name of the constructor method. If a constructor method, it returns a Chain subclass.
- Function Calling: whether the chain requires OpenAI function calling.
- Other Tools: other tools used in the chain.
- When to Use: our commentary on when to use the chain.
Chain | Function Calling | Other Tools | When to Use |
---|---|---|---|
APIChain | | Requests Wrapper | This chain uses an LLM to convert a query into an API request, executes that request, gets back a response, and then passes that response to an LLM to respond. |
OpenAPIEndpointChain | | OpenAPI Spec | Similar to APIChain, this chain is designed to interact with APIs. The main difference is that it is optimized for ease of use with OpenAPI endpoints. |
ConversationalRetrievalChain | | Retriever | This chain can be used to have conversations with a document. It takes in a question and (optional) previous conversation history. If there is previous conversation history, it uses an LLM to rewrite the conversation into a query to send to a retriever (otherwise it just uses the newest user input). It then fetches those documents and passes them (along with the conversation) to an LLM to respond. |
StuffDocumentsChain | | | This chain takes a list of documents and formats them all into a prompt, then passes that prompt to an LLM. It passes ALL documents, so you should make sure they fit within the context window of the LLM you are using. |
ReduceDocumentsChain | | | This chain combines documents by iteratively reducing them. It groups documents into chunks (less than some context length), then passes them to an LLM. It then takes the responses and continues to do this until it can fit everything into one final LLM call. Useful when you have a lot of documents, you want the LLM to run over all of them, and the work can be done in parallel. |
MapReduceDocumentsChain | | | This chain first passes each document through an LLM, then reduces them using the ReduceDocumentsChain. Useful in the same situations as ReduceDocumentsChain, but it makes an initial LLM call over each document before trying to reduce them. |
RefineDocumentsChain | | | This chain collapses documents by generating an initial answer based on the first document and then looping over the remaining documents to refine its answer. It operates sequentially, so it cannot be parallelized. It is useful in similar situations as MapReduceDocumentsChain, but for cases where you want to build up an answer by refining the previous answer (rather than parallelizing calls). |
MapRerankDocumentsChain | | | This calls an LLM on each document, asking it to not only answer but also produce a score of how confident it is. The answer with the highest confidence is then returned. This is useful when you have a lot of documents but only want to answer based on a single document, rather than trying to combine answers (as the Refine and Reduce methods do). |
ConstitutionalChain | | | This chain answers, then attempts to refine its answer based on constitutional principles that are provided. Use this when you want to enforce that a chain's answer follows some principles. |
LLMChain | | | |
ElasticsearchDatabaseChain | | Elasticsearch Instance | This chain converts a natural language question to an Elasticsearch query, runs it, and then summarizes the response. Useful when you want to ask natural language questions of an Elasticsearch database. |
FlareChain | | | This implements FLARE, an advanced retrieval technique. It is primarily meant as an exploratory advanced retrieval method. |
ArangoGraphQAChain | | Arango Graph | This chain constructs an Arango query from natural language, executes that query against the graph, and then passes the results back to an LLM to respond. |
GraphCypherQAChain | | A graph that works with the Cypher query language | This chain constructs a Cypher query from natural language, executes that query against the graph, and then passes the results back to an LLM to respond. |
FalkorDBGraphQAChain | | Falkor Database | This chain constructs a FalkorDB query from natural language, executes that query against the graph, and then passes the results back to an LLM to respond. |
HugeGraphQAChain | | HugeGraph | This chain constructs a HugeGraph query from natural language, executes that query against the graph, and then passes the results back to an LLM to respond. |
KuzuQAChain | | Kuzu Graph | This chain constructs a Kuzu Graph query from natural language, executes that query against the graph, and then passes the results back to an LLM to respond. |
NebulaGraphQAChain | | Nebula Graph | This chain constructs a Nebula Graph query from natural language, executes that query against the graph, and then passes the results back to an LLM to respond. |
NeptuneOpenCypherQAChain | | Neptune Graph | This chain constructs a Neptune Graph query from natural language, executes that query against the graph, and then passes the results back to an LLM to respond. |
GraphSparqlChain | | A graph that works with SPARQL | This chain constructs a SPARQL query from natural language, executes that query against the graph, and then passes the results back to an LLM to respond. |
LLMMath | | | This chain converts a user question to a math problem and then executes it (using numexpr). |
LLMCheckerChain | | | This chain uses a second LLM call to verify its initial answer. Use this when you want an extra layer of validation on the initial LLM call. |
LLMSummarizationChecker | | | This chain creates a summary using a sequence of LLM calls to make sure it is extra correct. Use this over the normal summarization chain when you are okay with multiple LLM calls (e.g. you care more about accuracy than speed/cost). |
create_citation_fuzzy_match_chain | ✅ | | Uses OpenAI function calling to answer questions and cite its sources. |
create_extraction_chain | ✅ | | Uses OpenAI function calling to extract information from text. |
create_extraction_chain_pydantic | ✅ | | Uses OpenAI function calling to extract information from text into a Pydantic model. Compared to create_extraction_chain, this has a tighter integration with Pydantic. |
get_openapi_chain | ✅ | OpenAPI Spec | Uses OpenAI function calling to query an API defined by an OpenAPI spec. |
create_qa_with_structure_chain | ✅ | | Uses OpenAI function calling to do question answering over text and respond in a specific format. |
create_qa_with_sources_chain | ✅ | | Uses OpenAI function calling to answer questions with citations. |
QAGenerationChain | | | Creates both questions and answers from documents. Can be used to generate question/answer pairs for evaluation of retrieval projects. |
RetrievalQAWithSourcesChain | | Retriever | Does question answering over retrieved documents and cites its sources. Use this when you want the answer to include sources in the text response. Use this over load_qa_with_sources_chain when you want to use a retriever to fetch the relevant documents as part of the chain (rather than passing them in). |
load_qa_with_sources_chain | | Retriever | Does question answering over documents you pass in and cites its sources. Use this when you want the answer to include sources in the text response. Use this over RetrievalQAWithSourcesChain when you want to pass in the documents directly (rather than relying on a retriever to get them). |
RetrievalQA | | Retriever | This chain first does a retrieval step to fetch relevant documents, then passes those documents into an LLM to generate a response. |
MultiPromptChain | | | This chain routes input between multiple prompts. Use this when you have multiple potential prompts you could use to respond and want to route to just one. |
MultiRetrievalQAChain | | Retriever | This chain routes input between multiple retrievers. Use this when you have multiple potential retrievers you could fetch relevant documents from and want to route to just one. |
EmbeddingRouterChain | | | This chain uses embedding similarity to route incoming queries. |
LLMRouterChain | | | This chain uses an LLM to route between potential options. |
load_summarize_chain | | | |
LLMRequestsChain | | | This chain constructs a URL from user input, gets data at that URL, and then summarizes the response. Compared to APIChain, this chain is not focused on a single API spec but is more general. |
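To make the legacy-to-LCEL migration concrete, here is a sketch (assuming an OpenAI completion model) placing the legacy LLMChain next to an LCEL expression with the same behavior; the LCEL form is what the constructors above return under the hood:

```python
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI

prompt = PromptTemplate.from_template("Summarize in one line: {text}")

# Legacy: a standalone Chain subclass.
legacy_chain = LLMChain(llm=OpenAI(), prompt=prompt)
legacy_chain.run(text="LCEL composes runnables with the | operator.")

# LCEL equivalent: same prompt and model, but the chain is now a runnable,
# so streaming, async, batch, and per-step observability come for free.
lcel_chain = prompt | OpenAI()
lcel_chain.invoke({"text": "LCEL composes runnables with the | operator."})
```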