jamwt
Convex Community · 2y ago

llama farm

The intrepid @ian recently created a pretty cool project that lets a bunch of random computers (your laptop, fly.io machines, whatever) running llama3 via Ollama collectively handle all the LLM work for a chat application.

All this work is coordinated using Convex as a kind of job queue: workers pull jobs over outbound connections, so the LLM cluster can be scaled up and down, and nothing needs to expose ports to the Internet to participate.
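A minimal sketch of what such a poll-based worker could look like. This isn't the actual llama-farm code; the Convex functions `api.jobs.claim` and `api.jobs.complete`, the `workerId`, and the env vars are hypothetical stand-ins, and it assumes a local Ollama server on its default port:

```ts
// Hypothetical worker loop: claim a job from Convex, run it against a
// local Ollama instance, and report the result back. The real repo's
// function names and job shape differ.
import { ConvexHttpClient } from "convex/browser";
import { api } from "./convex/_generated/api";

const convex = new ConvexHttpClient(process.env.CONVEX_URL!);
const OLLAMA_URL = process.env.OLLAMA_URL ?? "http://localhost:11434";

async function workLoop() {
  while (true) {
    // Outbound request only: the worker asks Convex for a pending job,
    // so it never needs to accept inbound connections or expose a port.
    const job = await convex.mutation(api.jobs.claim, { workerId: "my-laptop" });
    if (!job) {
      // Nothing queued; back off briefly before polling again.
      await new Promise((r) => setTimeout(r, 1000));
      continue;
    }
    // Run the chat messages through the local Ollama server.
    const res = await fetch(`${OLLAMA_URL}/api/chat`, {
      method: "POST",
      body: JSON.stringify({ model: "llama3", messages: job.messages, stream: false }),
    });
    const { message } = await res.json();
    // Hand the completion back to Convex, which streams it to chat clients.
    await convex.mutation(api.jobs.complete, { jobId: job._id, reply: message.content });
  }
}

workLoop();
```

Because each worker only makes outbound requests, any machine that can reach Convex and run Ollama can join the cluster, and adding capacity is just starting another copy of the loop.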

Pretty neat idea!

Tweet: https://twitter.com/ianmacartney/status/1787663174394876000 (follow @ian!)
Hosted: https://labs.convex.dev/llama-farm
The code: https://github.com/get-convex/llama-farm-chat
Live demo of llama farm:
https://t.co/0aeOvbg6xj
- By default it's just you & 🦙
- Share the URL for others to join in.
- Streams to all users at once.

I'll add logins, rate limits, etc. if it gets traffic.
A couple 🦙's on @flydotio too: details in repo

Twitter · 5/7/24, 1:58 AM
