
Great article, thanks!
The only thing I disagree with somewhat is:
> Always be part of the loop by moving code by hand from your terminal to the LLM web interface: this guarantees that you follow every process. You are still the coder, but augmented.
That advice assumes the chatbot interface. But what if the "tool call" is, or includes, the LLM itself? For process automation I've found specific programmatic calls to LLMs, including post-call cleanup and processing, much more efficient than the generic chatbot interface. The user input can still be a prompt, and the LLM can still have access to tool calls if needed, but every token of instructions about tooling or context, i.e. anything other than solving the problem at hand, is "distracting" and diminishes results.
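
To make that concrete, here's a minimal sketch of what I mean by a programmatic call with post-call cleanup, using the OpenAI Python SDK (the model name, system instruction, and cleanup rules are placeholders, not a recommendation): the prompt carries only the problem, and all the tooling/context handling lives in ordinary code around the call instead of in the prompt.

```python
# Minimal sketch: programmatic LLM call plus post-call cleanup,
# instead of pasting through a chat interface. Model name and
# cleanup rules are placeholders; adapt to your own stack.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def refactor_snippet(code: str) -> str:
    """Send only the problem to the model; keep tooling concerns out of the prompt."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": "Return only the refactored code, no commentary."},
            {"role": "user", "content": code},
        ],
    )
    raw = resp.choices[0].message.content or ""
    # Post-call cleanup: strip markdown fences the model may add despite instructions.
    match = re.search(r"```(?:\w+)?\n(.*?)```", raw, re.DOTALL)
    return (match.group(1) if match else raw).strip()

if __name__ == "__main__":
    print(refactor_snippet("def add(a,b):\n    return a+b"))
```

The point is that the cleanup, validation, and any tool dispatch happen in code after the call, so none of those instructions have to compete with the actual problem inside the prompt.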