
Often they just waste time and tokens, going back and forth with themselves without seeming to gain any deeper knowledge each time they fail.
This is the token-burning reality, which really sucks if you're paying by the token (or if your electricity is prohibitively expensive).
LLMs do not make rustc go faster.
Great point. Also, even if you run all your integration tests at once in a massively parallel environment, if you don't have at least one that takes a couple of minutes, you're probably not covering scenarios properly. So you (or this magic LLM) have to context switch. Eventually this won't scale to 10x unless you skip the intensive tests... and I don't think you can truly 10x from a state that isn't already LLM-optimized.
Nice references in the article: