Integrating with LangServe
LangServe is a Python framework that helps developers deploy LangChain runnables and chains as REST APIs.
If you have a deployed LangServe route, you can use the RemoteRunnable class to interact with it as if it were a local chain. This allows you to more easily call hosted LangServe instances from JavaScript environments (like in the browser on the frontend).
You'll need to install the `@langchain/core` package in your frontend project:

- npm: `npm install @langchain/core`
- Yarn: `yarn add @langchain/core`
- pnpm: `pnpm add @langchain/core`
Usage
Then, you can use any of the supported LCEL interface methods. Here's an example of how this looks:
import { RemoteRunnable } from "@langchain/core/runnables/remote";

const remoteChain = new RemoteRunnable({
  url: "https://your_hostname.com/path",
});

const result = await remoteChain.invoke({
  param1: "param1",
  param2: "param2",
});

console.log(result);

const stream = await remoteChain.stream({
  param1: "param1",
  param2: "param2",
});

for await (const chunk of stream) {
  console.log(chunk);
}
API Reference: RemoteRunnable from `@langchain/core/runnables/remote`
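Under the hood, `RemoteRunnable` calls the REST endpoints LangServe exposes for a route (`/invoke`, `/batch`, `/stream`), POSTing your input as JSON. As a rough sketch of the request `invoke` corresponds to (the `buildInvokeRequest` helper is ours for illustration, not part of the library):

```typescript
// Hypothetical helper showing roughly what RemoteRunnable.invoke() sends:
// a JSON POST to the route's /invoke endpoint with the input wrapped in
// an { input } envelope, as the LangServe API expects.
function buildInvokeRequest(
  url: string,
  input: Record<string, unknown>
): Request {
  // Strip a trailing slash so we don't produce "//invoke".
  const base = url.replace(/\/$/, "");
  return new Request(`${base}/invoke`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input }),
  });
}
```

This is only a simplification; the real class also forwards runnable config, handles streaming responses, and deserializes LangChain objects on the way back.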
`streamLog` is a lower-level method for streaming chain intermediate steps as partial JSONPatch chunks. It also accepts extra options for including or excluding certain named steps:
import { RemoteRunnable } from "@langchain/core/runnables/remote";

const remoteChain = new RemoteRunnable({
  // url: "https://your_hostname.com/path",
  url: "https://chat-langchain-backend.langchain.dev/chat",
});

const logStream = await remoteChain.streamLog(
  {
    question: "What is a document loader?",
  },
  // LangChain runnable config properties
  {
    configurable: {
      llm: "openai_gpt_3_5_turbo",
    },
    metadata: {
      conversation_id: "other_metadata",
    },
  },
  // Optional additional streamLog properties for filtering outputs
  {
    // includeNames: [],
    // includeTags: [],
    // includeTypes: [],
    // excludeNames: [],
    // excludeTags: [],
    // excludeTypes: [],
  }
);
let currentState;

for await (const chunk of logStream) {
  if (!currentState) {
    currentState = chunk;
  } else {
    currentState = currentState.concat(chunk);
  }
}

console.log(currentState);
/*
RunLog {
ops: [
{ op: 'replace', path: '', value: [Object] },
{
op: 'add',
path: '/logs/RunnableParallel<question,chat_history>',
value: [Object]
},
{ op: 'add', path: '/logs/Itemgetter:question', value: [Object] },
{ op: 'add', path: '/logs/SerializeHistory', value: [Object] },
{
op: 'add',
path: '/logs/Itemgetter:question/streamed_output/-',
value: 'What is a document loader?'
},
{
op: 'add',
path: '/logs/SerializeHistory/streamed_output/-',
value: []
},
{
op: 'add',
path: '/logs/RunnableParallel<question,chat_history>/streamed_output/-',
value: [Object]
},
{ op: 'add', path: '/logs/RetrieveDocs', value: [Object] },
{ op: 'add', path: '/logs/RunnableSequence', value: [Object] },
{
op: 'add',
path: '/logs/RunnableParallel<question,chat_history>/streamed_output/-',
value: [Object]
},
... 558 more items
],
state: {
id: '415ba646-a3e0-4c76-bff6-4f5f34305244',
streamed_output: [
'', 'A', ' document', ' loader', ' is',
' a', ' tool', ' used', ' to', ' load',
' data', ' from', ' a', ' source', ' as',
' `', 'Document', '`', "'", 's',
',', ' which', ' are', ' pieces', ' of',
' text', ' with', ' associated', ' metadata', '.',
' It', ' can', ' load', ' data', ' from',
' various', ' sources', ',', ' such', ' as',
' a', ' simple', ' `.', 'txt', '`',
' file', ',', ' the', ' text', ' contents',
' of', ' a', ' web', ' page', ',',
' or', ' a', ' transcript', ' of', ' a',
' YouTube', ' video', '.', ' Document', ' loaders',
' provide', ' a', ' "', 'load', '"',
' method', ' for', ' loading', ' data', ' as',
' documents', ' from', ' a', ' configured', ' source',
' and', ' can', ' optionally', ' implement', ' a',
' "', 'lazy', ' load', '"', ' for',
' loading', ' data', ' into', ' memory', '.',
' [', '1', ']', ''
],
final_output: 'A document loader is a tool used to load data from a source as `Document`\'s, which are pieces of text with associated metadata. It can load data from various sources, such as a simple `.txt` file, the text contents of a web page, or a transcript of a YouTube video. Document loaders provide a "load" method for loading data as documents from a configured source and can optionally implement a "lazy load" for loading data into memory. [1]',
logs: {
'RunnableParallel<question,chat_history>': [Object],
'Itemgetter:question': [Object],
SerializeHistory: [Object],
RetrieveDocs: [Object],
RunnableSequence: [Object],
RunnableLambda: [Object],
'RunnableLambda:2': [Object],
FindDocs: [Object],
HasChatHistoryCheck: [Object],
GenerateResponse: [Object],
RetrievalChainWithNoHistory: [Object],
'Itemgetter:question:2': [Object],
Retriever: [Object],
format_docs: [Object],
ChatPromptTemplate: [Object],
ChatOpenAI: [Object],
StrOutputParser: [Object]
},
name: '/chat',
type: 'chain'
}
}
*/
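The chunks yielded by `streamLog` are lists of JSONPatch operations, which is why they can be merged with `concat` as shown above. As a toy illustration (simplified, not the library's implementation) of how `add` ops with a trailing `-` path segment append streamed tokens to an array:

```typescript
// A minimal JSONPatch-like op, mirroring the shape in the output above.
type Op = { op: "replace" | "add"; path: string; value: unknown };

// Toy accumulator: only handles appending to streamed_output.
// The real RunLog.concat() applies arbitrary paths, replace ops, etc.
function applyOps(state: { streamed_output: string[] }, ops: Op[]) {
  for (const op of ops) {
    if (op.op === "add" && op.path === "/streamed_output/-") {
      // In JSONPatch, "-" means "append to the end of this array".
      state.streamed_output.push(op.value as string);
    }
  }
  return state;
}
```

Accumulating the patches chunk by chunk is what lets you render partial output (e.g. token-by-token text) while the chain is still running.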
Configuration
You can also pass options for headers and timeouts into the constructor. These will be applied to each outgoing request:
import { RemoteRunnable } from "@langchain/core/runnables/remote";

const remoteChain = new RemoteRunnable({
  url: "https://your_hostname.com/path",
  options: {
    timeout: 10000,
    headers: {
      Authorization: "Bearer YOUR_TOKEN",
    },
  },
});

const result = await remoteChain.invoke({
  param1: "param1",
  param2: "param2",
});

console.log(result);
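A timeout like the one above is typically implemented with `AbortController` in fetch-based clients. As a sketch of the general pattern (the `makeTimeoutSignal` helper is hypothetical, not the library's internal code):

```typescript
// Hypothetical helper: returns an AbortSignal that fires after timeoutMs,
// illustrating how a fetch-based request timeout can be implemented.
function makeTimeoutSignal(timeoutMs: number): AbortSignal {
  const controller = new AbortController();
  setTimeout(() => controller.abort(), timeoutMs);
  return controller.signal;
}

// The signal would then be passed to fetch alongside custom headers:
// fetch(url, {
//   headers: { Authorization: "Bearer YOUR_TOKEN" },
//   signal: makeTimeoutSignal(10000),
// });
```

In modern runtimes, `AbortSignal.timeout(ms)` provides the same behavior in one call.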