138 sats \ 4 replies \ @optimism 16 Dec \ on: Adventures in Extreme Vibecoding AI
Pretty good observation about vibe breaking continuity:

> I wonder if there is a word for this. Quality definitely seems to degrade as LLMs try to keep more information in their contextual memory.
>
> I find that results are almost always better when you start fresh. Make the LLM forget its context. Have it re-read the relevant code and start from a fresh prompt, and you get better results.
> I wonder if there is a word for this.

Dementia?

> Make the LLM forget its context.
Yes. I do this all the time. Compaction is death, simply because the compaction mechanism... sucks.[^1] It's probably a science all by itself to compact knowledge, and if there is ever a working `brainzip -9` that is readable and indexable, then I really want that in my neuralink, lol.

[^1]: At least it is on cursor / cline / roo / claude code. Haven't tested codex or gemini cli that deeply, but these clients are all open source, so we can bet on them all shining and sucking equally - no way this industry would let a competitor keep a moat in visible code.
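A toy example of why compaction loses things: the crudest possible compactor keeps the last N messages and replaces everything older with a stub. This is a caricature I made up to illustrate the failure mode, not how any of the clients above actually compact, but the lossiness is the same in kind: any detail that lived only in the dropped messages is gone unless the summarizer happened to keep it.

```python
def compact(history: list[str], keep_last: int = 2) -> list[str]:
    """Replace all but the last `keep_last` messages with a one-line stub."""
    if len(history) <= keep_last:
        return history
    dropped = history[:-keep_last]
    # In a real client this stub would be an LLM-written summary; either way,
    # whatever the summary omits is unrecoverable.
    summary = f"[summary of {len(dropped)} earlier messages]"
    return [summary] + history[-keep_last:]


history = [
    "use snake_case everywhere",
    "the DB is postgres",
    "write the migration",
    "done",
]
compacted = compact(history)
# The naming convention and the DB choice now exist only inside the stub:
# the model can no longer see them.
```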
Oh yes, I've seen this a lot working with Java and LLMs. You start off fine, with a good beginning structure. However, once you start working on tweaks and pushing for detail, it forgets the original structure and introduces whole new bugs. At various times I've seen it get lost and forget the original program entirely. You fix that by starting a new discussion with the latest working code and building from there. Iteration is the band-aid for LLMs getting distracted.
Happens in all languages. I've seen it in Python and JavaScript too, even though those are supposedly the languages of choice for LLMs today.
But you made me realize something just now: I haven't once tested Java coding with an LLM! Haven't even thought of it. Hmmm.