229 sats \ 13 replies \ @k00b 11 Dec \ on: Best AI Coding Tools? devs
@bitcoinplebdev cajoled me into using Cursor, which is excellent. @rblb has been having GitHub Copilot give him code reviews. We've also used CodeRabbit for code reviews. @bitcoinplebdev also recommends v0, which can build UIs.
To add to this, I've also tried Cursor (thanks to @k00b). It's nice, but I've noticed it's somewhat more opinionated about code compared to Copilot.
In my experience, Copilot feels like it's following you and just completing the code, while Cursor feels like it's trying to anticipate what you will want to do several steps ahead. I prefer Copilot, but I think it's a matter of personal taste and coding style.
I've been trying GitHub Copilot reviews, which is another beta service. It doesn't work very well, at least with our code base, but it can catch some oversights sometimes.
reply
Thank you @bitcoinplebdev, I also love cursor now!!
reply
Cursor + Claude is literally all you need. I cancelled all other subscriptions except Cursor (as it includes all the top LLMs), and even the unlimited slow requests usually only take a couple of seconds longer.
Use the Chat tab for just asking questions or short coding prompts, and the Composer tab for complex prompts when you want the agent to automatically edit files, making sure to @-tag relevant files, or @codebase for the entire repo.
reply
The way I've been using it:
Inline edit: short edits right where you are in the code, and follow-up questions about them.
Chat: As you said, for more wordy questions and explanations, some coding and follow-up, and features like @web to search the web (and other such tags to add docs into context, etc.), which Composer can't do yet.
Composer: As you said, the most complex (and most capable/costly in terms of compute?), for bigger or more extensive prompts.
My question to you, since you seem experienced with it: since they released the Agent feature for Composer, I've been trying to figure out which option is best for what. Composer with vs. without Agent. Any tips?
reply
I've been trying Normal and Agent back and forth and, while I find it hard to get a definitive read on it so fast, I tend to agree that Agent sometimes tries to think way too far ahead and gets a bit eager to dive in and mess shit up (even in a good way).
On a semi-related note on Cursor: I have a theory, or more of a hunch, that I've been meaning to test:
Use Chat to craft a prompt: tell it the issues, scope, context, and documentation, and have it present you with a solution, but without necessarily coding. Maybe pseudocode or steps. Ask follow-up questions, ask why it did X or Y that way, and tweak some stuff ("do it that way, not this way; you forgot to handle X and Y").
Iterate until it gives you a game plan and pseudocode that make sense.
Feed that pseudocode to Composer (Normal or Agent) like "hey this is what we're trying to do and this is the game plan so far". Observe results.
My theory is that because that game plan was generated by AI, the wording and logic are already in "AI speak", with all its quirks and ways of writing, so it will understand what you want to do with more accuracy than if we typed it with all our human-ness.
Note: most of this is bro science coming out of my ass. Would be neat to see if results get better doing things that way.
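For what it's worth, the same plan-then-implement idea can be sketched outside Cursor with any LLM API. The snippet below is only an illustration of the two-phase workflow: it assumes the openai npm package and an OPENAI_API_KEY env var, and the model name, prompts, and function name are hypothetical placeholders, not anything Cursor itself exposes.

```typescript
// Rough sketch of the plan-then-implement workflow outside Cursor.
// Assumes the `openai` npm package and an OPENAI_API_KEY env var;
// the model name, prompts, and function name are hypothetical placeholders.
import OpenAI from "openai";

const client = new OpenAI();

async function planThenImplement(task: string): Promise<string> {
  // Phase 1 ("Chat"): ask for a game plan / pseudocode only, no real code yet.
  const planResponse = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: "Produce a step-by-step plan and pseudocode only. Do not write real code." },
      { role: "user", content: task },
    ],
  });
  const gamePlan = planResponse.choices[0].message.content ?? "";

  // Phase 2 ("Composer"): hand the AI-written plan back as context for the
  // implementation, the way you'd paste it into Composer with files @-tagged.
  const implResponse = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: "Implement the following game plan exactly, step by step." },
      { role: "user", content: `Task: ${task}\n\nGame plan:\n${gamePlan}` },
    ],
  });
  return implResponse.choices[0].message.content ?? "";
}

// Example: planThenImplement("Add pagination to the comments endpoint").then(console.log);
```

In Cursor itself, the equivalent is just pasting the Chat-generated plan into Composer as the opening context.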
reply
Yeah, good call. I'm probably not giving the Agent detailed enough prompts; with more detail it would likely do much better at staying on track and not fucking up the code. Which is fine sometimes, like you say, it often leads to solutions I would never have thought of. Just have to remember to commit often!
reply
I’ve heard good things about cursor. One day I’ll give it a shot
reply