Tanuj · 7mo ago

My Convex web app is working flawlessly offline! (I’m 40K ft in the sky)

Feeling proud of this one! Here's how I did it:

1. Forked the streaming-chat-gpt Convex template.
2. Added endpoints for local LLM support.
3. Set up the open-source Convex backend locally, so `./convex-local-backend` starts the server.
4. Tinkered with package.json to add a script so `npm run dev-local` skips the Convex cloud initialization and plays nicely with my own backend (rough sketch below).
5. Added some "magic strings" so I can invoke @gpt to perform actions, like removing the last message exchange or clearing the table (second sketch below).

(It's a small PoC, but when it worked, even the dude to my left on the airplane joined in to celebrate 😁)

I've been having fun adding features in-flight! Big shoutout to the open-source Continue extension for my offline VS Code Copilot (StarCoder 3B for tab completions, Llama 3 and nomic-embed for the fancier troubleshooting).

I'm planning to use this web app to display a chatroom-like interaction between multiple agents running in a separate project, and it's looking super promising! Attached is a (likely blurry) photo from the plane :P
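A rough sketch of what the step-4 wiring can look like, assuming the local backend is listening on its default port (3210) and using the Convex CLI's `--url`/`--admin-key` flags for targeting a self-hosted backend; the script name and key placeholder here are illustrative, not the actual config:

```json
{
  "scripts": {
    "dev": "convex dev",
    "dev-local": "convex dev --url http://127.0.0.1:3210 --admin-key <key-from-local-backend>"
  }
}
```

With `./convex-local-backend` running in another terminal, `npm run dev-local` then pushes functions to the local server instead of the Convex cloud.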
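And a hypothetical sketch of the step-5 magic strings as a Convex mutation; the table name, command strings, and overall shape are guesses built on standard Convex APIs, not the actual PoC code:

```ts
// convex/messages.ts (hypothetical "magic string" handling)
import { mutation } from "./_generated/server";
import { v } from "convex/values";

export const send = mutation({
  args: { body: v.string(), author: v.string() },
  handler: async (ctx, { body, author }) => {
    const text = body.trim();
    if (text === "@gpt clear") {
      // Magic string: clear out the whole messages table.
      const all = await ctx.db.query("messages").collect();
      await Promise.all(all.map((m) => ctx.db.delete(m._id)));
      return;
    }
    if (text === "@gpt undo") {
      // Magic string: remove the last exchange (user + assistant rows).
      const lastTwo = await ctx.db.query("messages").order("desc").take(2);
      await Promise.all(lastTwo.map((m) => ctx.db.delete(m._id)));
      return;
    }
    // Normal path: store the message; the LLM action picks it up from here.
    await ctx.db.insert("messages", { body, author });
  },
});
```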
CodingWithJamal · 7mo ago
Very cool! For the LLM, are you using Ollama?
Tanuj (OP) · 7mo ago
Ollama, LM Studio, llama.cpp, or external providers like Together AI all work like a charm; just swap out the endpoint! I default to Ollama since I've already got it running for my Copilot anyway :D Just landed! Gotta flex the t-shirt too :P
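The swap really is just the base URL, since all of these speak the OpenAI-compatible chat API. A sketch (the model name and env vars are examples, not the app's actual config):

```ts
// Hypothetical provider swap: point an OpenAI-compatible client anywhere.
import OpenAI from "openai";

const client = new OpenAI({
  // Ollama's OpenAI-compatible endpoint; LM Studio defaults to
  // http://localhost:1234/v1, and Together AI offers a hosted equivalent.
  baseURL: process.env.LLM_BASE_URL ?? "http://localhost:11434/v1",
  apiKey: process.env.LLM_API_KEY ?? "ollama", // Ollama ignores the key
});

const stream = await client.chat.completions.create({
  model: process.env.LLM_MODEL ?? "llama3",
  messages: [{ role: "user", content: "Hello from 40K ft!" }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```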
CodingWithJamal · 7mo ago
cool
panzacoder · 7mo ago
Gosh, that's the dream. I need to invest time in getting this working when I can. On my last cross-country flight the wifi was out, so I just had to watch a movie lol.
