Calonz · 12mo ago

Restricting RAG when using LangChain

@Michal Srb I have a question about using Convex with LangChain. What's the solution for making the LLM fetch only particular documents from the db when it looks for answers? When I load a new document, the LLM still remembers the previous document and can answer questions about it. I'm not sure if that's how it's supposed to work?
Michal Srb · 12mo ago
Hey @Calonz, I'm not sure, but this doesn't sound specific to using Convex with LangChain, so you could ask on the LangChain Discord. I wish I had a better answer, but despite writing the Convex<->LangChain integration I'm not an expert in LangChain. If you want full control over the "chain," you could ditch LangChain, similar to this article: https://stack.convex.dev/ai-chat-with-convex-vector-search
Build AI Chat with Convex Vector Search
Convex is a full-stack development platform and cloud database, including built-in vector search. In this third post in our [series](https://stack.con...
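For the original question, the usual fix is to store a document identifier alongside each embedded chunk and filter on it at retrieval time, so the LLM only ever sees context from the document you choose. The sketch below illustrates the idea with an in-memory store and plain cosine similarity; the names (`Chunk`, `documentId`, `searchWithinDocument`) are illustrative, not Convex or LangChain APIs. In a real Convex app you would express the same filter via a vector index filter field, as described in the linked article.

```typescript
// Minimal sketch of metadata-filtered retrieval. Assumes each stored
// chunk carries a documentId; only chunks from the requested document
// are considered, so answers can't leak from previously loaded docs.

type Chunk = { documentId: string; text: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Restrict the search space to one document *before* ranking by
// similarity, then return the top-k chunks for the prompt context.
function searchWithinDocument(
  chunks: Chunk[],
  queryEmbedding: number[],
  documentId: string,
  k: number
): Chunk[] {
  return chunks
    .filter((c) => c.documentId === documentId)
    .map((c) => ({ c, score: cosineSimilarity(queryEmbedding, c.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(({ c }) => c);
}
```

Only the filtered chunks are then passed to the LLM as context, so a question about an older document simply finds no matching chunks.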
Calonz (OP) · 12mo ago
Thanks for the suggestion! I'll reach out to the LangChain community for more specific advice. I'll also explore ditching LangChain for more control, as in the article you shared. Appreciate your help!