🦜️🔗 Chat LangChain
This repo is an implementation of a locally hosted chatbot specifically focused on question answering over the LangChain documentation. Built with LangChain, FastAPI, and Next.js.
Deployed version: chat.langchain.com
The app leverages LangChain's streaming support and async API to update the page in real time for multiple users.
✅ Running locally
- Install backend dependencies: `pip install -r requirements.txt`.
- Run `python ingest.py` to ingest LangChain docs data into the Weaviate vectorstore (this only needs to be done once).
  - You can use other Document Loaders to load your own data into the vectorstore; see the sketch after this list.
- Run the backend with `make start`.
  - Make sure to enter your environment variables to configure the application:

    ```shell
    export OPENAI_API_KEY=
    export WEAVIATE_URL=
    export WEAVIATE_API_KEY=

    # for tracing
    export LANGCHAIN_TRACING_V2=true
    export LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
    export LANGCHAIN_API_KEY=
    export LANGCHAIN_PROJECT=
    ```

- Install frontend dependencies by running `cd chat-langchain`, then `yarn`.
- Run the frontend with `yarn dev`.
- Open localhost:3000 in your browser.
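For example, here is a hypothetical variant of the loading step in `ingest.py` that reads your own Markdown files with LangChain's `DirectoryLoader` (the path and glob are placeholders, not anything the repo ships):

```python
# Hypothetical swap-in for the loading step in ingest.py: read local
# Markdown files instead of crawling the LangChain documentation.
from langchain.document_loaders import DirectoryLoader, TextLoader

loader = DirectoryLoader("./my_docs", glob="**/*.md", loader_cls=TextLoader)
docs = loader.load()
# The rest of the pipeline (splitting, embedding, storing in Weaviate)
# stays the same.
```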
📚 Technical description
There are two components: ingestion and question-answering.
Ingestion has the following steps:
- Pull HTML from the documentation site as well as the GitHub codebase
- Load the HTML with LangChain's RecursiveURLLoader
- Transform the HTML to text with LangChain's Html2TextTransformer
- Split documents with LangChain's RecursiveCharacterTextSplitter
- Create a vectorstore of embeddings, using LangChain's Weaviate vectorstore wrapper (with OpenAI's embeddings).
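A condensed sketch of these steps follows; the crawl URL, chunk sizes, and `LangChain_docs` index name are illustrative, `ingest.py` is the authoritative implementation, and the GitHub codebase pull is omitted here:

```python
# A minimal sketch of the ingestion pipeline (illustrative parameters).
import os

import weaviate
from langchain.document_loaders import RecursiveUrlLoader
from langchain.document_transformers import Html2TextTransformer
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Weaviate

# 1. Pull HTML from the documentation site.
raw_docs = RecursiveUrlLoader(url="https://python.langchain.com/docs/").load()

# 2. Transform the HTML into plain text.
docs = Html2TextTransformer().transform_documents(raw_docs)

# 3. Split the documents into chunks sized for embedding.
splitter = RecursiveCharacterTextSplitter(chunk_size=4000, chunk_overlap=200)
chunks = splitter.split_documents(docs)

# 4. Embed the chunks with OpenAI and store them in Weaviate.
client = weaviate.Client(
    url=os.environ["WEAVIATE_URL"],
    auth_client_secret=weaviate.AuthApiKey(api_key=os.environ["WEAVIATE_API_KEY"]),
)
Weaviate.from_documents(
    chunks,
    OpenAIEmbeddings(),
    client=client,
    index_name="LangChain_docs",  # hypothetical index name
)
```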
Question-answering has the following steps, all handled by an OpenAIFunctionsAgent:
- Given the chat history and new user input, determine what a standalone question would be (using GPT-3.5).
- Given that standalone question, look up relevant documents from the vectorstore.
- Pass the standalone question and relevant documents to GPT-4 to generate and stream the final answer.
- Generate a trace URL for the current chat session, as well as the endpoint to collect feedback.
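Here is a conceptual sketch of the first three steps; the prompts, model names, and `LangChain_docs` index are illustrative, and the app itself wires this logic into the agent rather than running it as a bare script:

```python
# A conceptual sketch of condense -> retrieve -> answer (illustrative prompts).
import os

import weaviate
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.vectorstores import Weaviate

client = weaviate.Client(url=os.environ["WEAVIATE_URL"])
vectorstore = Weaviate(client, "LangChain_docs", "text", embedding=OpenAIEmbeddings())

chat_history = "Human: What is LangChain?\nAI: A framework for LLM apps."
followup = "How do I install it?"

# 1. Condense the chat history + new input into a standalone question (GPT-3.5).
condense = (
    ChatPromptTemplate.from_template(
        "Chat history:\n{chat_history}\n\n"
        "Rephrase the follow-up message as a standalone question: {question}"
    )
    | ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
    | StrOutputParser()
)
standalone = condense.invoke({"chat_history": chat_history, "question": followup})

# 2. Look up relevant documents in the vectorstore.
docs = vectorstore.as_retriever().get_relevant_documents(standalone)
context = "\n\n".join(d.page_content for d in docs)

# 3. Generate and stream the final answer with GPT-4.
answer = (
    ChatPromptTemplate.from_template(
        "Answer based only on this context:\n{context}\n\nQuestion: {question}"
    )
    | ChatOpenAI(model="gpt-4", streaming=True)
    | StrOutputParser()
)
for token in answer.stream({"context": context, "question": standalone}):
    print(token, end="", flush=True)
```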
🚀 Deployment
Deploy the frontend Next.js app as a serverless Edge function on Vercel.
You'll need to populate the `NEXT_PUBLIC_API_BASE_URL` environment variable with the base URL you've deployed the backend under (no trailing slash!).
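For instance, if the backend were deployed at a hypothetical URL, the variable (set in Vercel's project settings or a local `.env.local`) would look like this:

```shell
# Hypothetical backend URL; note the absence of a trailing slash.
NEXT_PUBLIC_API_BASE_URL=https://my-chat-backend.example.com
```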
Blog Posts: