
I’m curious whether anyone has read about AI models having code taste. Is there a front-runner when it comes to code taste and avoiding code smells/spaghetti code?

50 sats \ 3 replies \ @optimism 5h

For me, tightly instruction-aligned models like Claude or Gemini (not so much GPT/Qwen/DeepSeek), combined with very concise hints files stating what to avoid, plus constant awareness of context poisoning (small jobs with fresh context win almost always; the only exception is when I don't accept a result), has worked okay-ish.
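Roughly what that looks like in practice, as a sketch. The file name, the `complete()` call and the tasks here are placeholders, not any particular tool's API:

```python
# Sketch of the "small jobs with fresh context" idea (names are illustrative).
from pathlib import Path

def load_hints() -> str:
    # A concise, project-specific "what to avoid" file.
    hints = Path("HINTS.md")
    return hints.read_text() if hints.exists() else "Avoid god objects. Prefer small pure functions."

def complete(messages: list[dict]) -> str:
    # Placeholder for a call to Claude / Gemini / whatever model you use.
    return "<model output>"

def run_small_job(task: str) -> str:
    # Fresh context every time: just the hints and this one task,
    # never the accumulated history of earlier jobs.
    messages = [
        {"role": "system", "content": load_hints()},
        {"role": "user", "content": task},
    ]
    return complete(messages)

for task in ["Extract the retry logic into its own function",
             "Add a unit test for the parser edge case"]:
    print(run_small_job(task))

# The only time I keep the old context is when I reject a result and
# ask for a revision of that same job.
```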


Can you say a bit more about context? Do you “wipe” the model's memory somehow?

44 sats \ 1 reply \ @optimism 4h

Context is basically your in-chat history. When you're coding, your tooling generally injects something like an AGENTS.md, plus its own analysis of what your code does, into your prompt. Together with the "conversation" history, this forms the context that gets analyzed alongside your current prompt.
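In rough terms it's something like this (the file name and the repo-analysis step are made up for illustration, not how any specific agent is implemented):

```python
# Rough sketch of what a coding agent sends to the model on each turn.
from pathlib import Path

def build_context(history: list[dict], current_prompt: str) -> list[dict]:
    agents_md = Path("AGENTS.md").read_text() if Path("AGENTS.md").exists() else ""
    repo_summary = "...tool-generated analysis of what your code does..."
    return [
        {"role": "system", "content": agents_md + "\n" + repo_summary},
        *history,                                    # the "conversation" so far
        {"role": "user", "content": current_prompt}, # what you just asked
    ]

# Everything in that list is analyzed together, which is why stale or
# poisoned history drags down the answer to the current prompt.
```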

Do you “wipe” the model's memory somehow?

Yes. That's the best practice, like "start new chat" in a chatbot. See for example this from Anthropic's best practices doc, which says you should do exactly that.

I've tested it extensively, and it works much better when you clear the rubbish out of the context. We've developed similar ideas for non-code chatbots over the past year on SN; see for example this thread where @SimpleStacker makes the case against ChatGPT memory.


Thanks for your replies
