Cursor is quite good. Copilot has intermittent issues over time and can be slow. Supermaven was fast, but sacrifices quality. Cursor is surprisingly good so far, and it's seamless coming from upstream VSCode. Also "privacy mode" is huge.
Do you use Cursor for code completion? I haven't been able to feel productive with AI code completion. I usually just chat in a separate place and diff manually
I'm exactly the opposite, haven't been able to find a groove with chat. The completion with Cursor is the best implementation I've seen, very natural. Still judging quality.
how do you handle the problem of the code completion consistently giving you code you don't want, or just generating when you don't want it to be generating at all?
I've been using Cursor for a while also, I've definitely gotten good value out of it
I'm the same as @erquhart, I get a lot more value out of inline code "completion" than I do out of the chat function
I glance at it before I have it write anything, and I have gotten a sense of the kinds of scenarios it's good at and those it's not. But it only needs to be ~80% correct to still save me keystrokes—I'm happy to have it fill out some wrong code and then correct it by hand
Yeah the completions aren't difficult to ignore for me when I know I don't want them. I've been using copilot enough to where it's just a rhythm now. When I'm figuring things out I instinctively ignore the suggestions, but when I know that what I'm about to do is something an AI can guess, it's usually there with the right answer.
Copilot is great for autocompleting Convex function boilerplate for me
However I've gotten burned so many times by it doing
q.eq("fieldName", args.fieldName)
instead of q.eq(q.field("fieldName"), args.fieldName)
in a query filter and I always fail to notice 💀
Haha, same xD
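For anyone who hasn't hit this: the reason the first form fails silently is that a bare string is treated as a literal value, not a field reference. Here's a minimal self-contained sketch (a hypothetical stand-in for Convex's filter builder, not the real API) showing why the literal-string version matches nothing:

```typescript
// Sketch only — `q` here is a mock of Convex's filter builder to illustrate
// the q.field("x") vs "x" difference, not the actual Convex implementation.
type Doc = Record<string, unknown>;

const q = {
  // Resolves a document field by name at filter time.
  field: (name: string) => (doc: Doc) => doc[name],
  // Compares two operands; functions are resolved against the doc,
  // anything else (including a bare string) is used as a literal.
  eq: (a: unknown, b: unknown) => (doc: Doc) => {
    const resolve = (x: unknown) =>
      typeof x === "function" ? (x as (d: Doc) => unknown)(doc) : x;
    return resolve(a) === resolve(b);
  },
};

const docs: Doc[] = [{ fieldName: "alice" }, { fieldName: "bob" }];

// Correct: compares each doc's fieldName to "alice" → finds the match.
const right = docs.filter(q.eq(q.field("fieldName"), "alice"));

// The Copilot bug: compares the literal string "fieldName" to "alice",
// which is never true → matches nothing, with no error to notice.
const wrong = docs.filter(q.eq("fieldName", "alice"));

console.log(right.length, wrong.length); // → 1 0
```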
The biggest surprise with Cursor was how unobtrusive the customizations are for their version of VSCode. Did a doubletake when I first opened, even my splits were there.
glad to know I’m not alone 😅
Still confirming quality for Cursor, but Copilot's quality was pretty stellar. It got to the point where if Copilot was doing something I didn't want it to do, it was a pretty strong indicator that I was missing something.
It's like your dog barking "for no reason" lol
dog is like dude someone's outside in the bushes
I should try out Cursor again; I wasn’t super impressed when I tried it a year ago but seems like it’s changed a lot
right now, I just use Cursor for its diff feature
Also, is Claude 3.5 the only LLM that knows Convex? Seems not, based on your comments about Github Copilot and Cursor
which I am assuming do not have Claude 3.5
Copilot picks up convex patterns really well
in my experience
I like Cursor's AI integration from a UX perspective, if its quality can at least be on par with copilot, I'm good with that
But is there an LLM that Copilot or Cursor uses that was pretrained on Convex project repos?
I guess I don't know how we'd know since it isn't always specified what models are trained on exactly
But a repo that already has some code should be able to get good results, especially for AI that indexes your code. Although again, Supermaven does that and has 1 million token context and was still just really not very smart in general.
You'd quiz the LLM with prompts to ask about Convex. High token context allowance doesn't mean quality outputs though. I've read that keeping the context small is a key way to get a high quality output
So far, the only public LLM I know that is pretrained on Convex is Claude 3.5 Sonnet
The Cody extension in VS Code marketplace has access to Claude 3.5 Sonnet
We just need a convex LLM benchmark test
One thing I really like is that Cursor figures out when a change I'm making applies to other locations in the file, and does things like pop up a tiny "tab to jump" prompt, or shows completions inline (not overwriting everything) across all visible code in the editor. Really intuitive.
cursor is my fav
slowly becoming a tool i can't live without