Pretty good observation about vibe coding breaking continuity:
> The first thing that failed was not accuracy. It was continuity.
> Early on, the systems felt almost magical. Ask a question, get a plausible answer. Push a little harder, get something surprisingly sophisticated. It’s easy, in that phase, to assume you’re dealing with something stable.
> You’re not.
> What breaks first is state. The model slowly loses track of what matters. Not dramatically. Not in a way that throws errors. It just drifts. A variable name that mattered stops mattering. A constraint that was explicit becomes optional. A file structure that was once sacred turns into a suggestion.
I wonder if there is a word for this. Quality definitely seems to degrade as LLMs try to keep more and more information in their context window.
I find that results are almost always better when you start fresh. Make the LLM forget its context: have it re-read the relevant code and work from a clean prompt.
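Mechanically, "starting fresh" just means building a brand-new message list from the current code instead of appending to a long-running conversation. A minimal sketch of that, assuming an OpenAI-style chat API; the model name, file paths, and helper name are placeholders:

```python
# Minimal sketch: answer from a fresh context by re-reading the relevant
# files, rather than appending to a long-running conversation.
# Assumes an OpenAI-style chat API; model and helper name are placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

def fresh_answer(question: str, files: list[str]) -> str:
    # Build a brand-new message list: current file contents + the question.
    # No stale conversation history is carried over.
    context = "\n\n".join(
        f"--- {path} ---\n{Path(path).read_text()}" for path in files
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[
            {"role": "system", "content": "You are a careful coding assistant."},
            {"role": "user", "content": f"{context}\n\n{question}"},
        ],
    )
    return resp.choices[0].message.content
```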
> I wonder if there is a word for this.
Dementia?
> Make the LLM forget its context.
Yes. I do this all the time. Compaction is death, simply because the compaction mechanism... sucks¹. It's probably a science all by itself to compact knowledge, and if there is ever a working `brainzip -9` that is readable and indexable, then I really want that in my neuralink, lol.
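For anyone who hasn't watched it happen: compaction in these clients generally means summarizing older turns to free up the context window. This is not any particular client's implementation, just a toy sketch of the usual idea, and it shows exactly where the loss happens: anything the summary drops is gone for good.

```python
# Toy sketch of context compaction: summarize old turns, keep recent ones.
# Not any real client's implementation; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def compact(messages: list[dict], keep_recent: int = 6) -> list[dict]:
    # Nothing to do while the conversation is still short.
    if len(messages) <= keep_recent:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in old)
    summary = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{
            "role": "user",
            "content": "Summarize this conversation, preserving every "
                       "constraint, file name, and decision:\n" + transcript,
        }],
    ).choices[0].message.content
    # The lossy step: the exact earlier messages are replaced by the summary.
    # Any variable name or explicit constraint it drops is unrecoverable.
    return [{"role": "system",
             "content": "Summary of earlier turns:\n" + summary}] + recent
```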

Footnotes

  1. At least it is on cursor / cline / roo / claude code. Haven't tested codex or gemini cli that deeply, but these clients are all open source, so we can bet on them all shining and sucking equally - no way this industry would let a competitor keep a moat in visible code.
Oh yes, I've seen this a lot working with Java and LLMs. You start off fine, with a good beginning structure. However, once you start working on tweaks and pushing for detail, it forgets the original structure and creates whole new bugs. At various times I've seen it get so lost it forgets the original program entirely. You fix that by starting a new discussion with the latest working code and building from there. Iteration is the band-aid for LLMs getting distracted.
Happens in all languages. I've seen it happen in Python and JavaScript too, even though those are supposedly the languages of choice for LLMs today.
But you made me realize something just now: I haven't once tested Java coding with an LLM! Haven't even thought of it. Hmmm.