100 sats \ 0 replies \ @optimism 14h
Great article, thanks!
The only thing I disagree with somewhat is:
This is because the chatbot interface is used. But what if the "tool call" is/includes the LLM? I've found specific programmatic calls to LLMs, including post-call cleanup and processing, much more efficient than the generic chatbot interface for process automation. The user input can still be a prompt, and the LLM can still have access to tool calls if needed, but every token of instructions that is about tooling or context, i.e. anything other than solving the problem at hand, is "distracting" and diminishes results.
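To make that concrete, here's a minimal sketch of what I mean (assuming the openai Python package and an OpenAI-compatible chat completions endpoint; the model name and the ticket-classification task are just placeholders): one narrow task per call, a prompt that contains only the problem, and the cleanup/validation done in code after the call instead of in extra prompt instructions.
```python
# Sketch only: a narrow programmatic LLM call with post-call cleanup,
# instead of a generic chatbot session.
import json
from openai import OpenAI  # assumes the openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def classify_ticket(ticket_text: str) -> dict:
    """One focused task per call: the prompt carries only the problem,
    no tooling or meta instructions to distract the model."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Classify the support ticket. Reply with JSON: "
                        '{"category": "...", "urgency": "low|medium|high"}'},
            {"role": "user", "content": ticket_text},
        ],
        temperature=0,
    )
    raw = resp.choices[0].message.content
    # Post-call cleanup: strip code fences the model sometimes adds,
    # then validate the JSON before anything downstream touches it.
    cleaned = raw.strip().removeprefix("```json").removesuffix("```").strip()
    return json.loads(cleaned)


print(classify_ticket("The invoice PDF download has been failing since Monday."))
```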
100 sats \ 2 replies \ @PictureRoom 10h
There seems to be a similar design look to everything these vibe-coding LLMs produce.
0 sats \ 1 reply \ @optimism 2h
You mean when it prototypes a web app?
0 sats \ 0 replies \ @PictureRoom 18m
Yeah exactly
0 sats \ 0 replies \ @023c1ba9f0 3h freebie
Just finished reading antirez’s “Coding with LLMs in the Summer of 2025” — really solid write-up. Appreciate how honest and grounded it is.
There’s so much noise out there about agents doing all the coding for you, but this piece brings it back to reality: LLMs are amazing tools, not magic. They help you move faster, catch bugs earlier, and explore the design space way more efficiently — but only if you stay in control and guide the process.
The idea of pairing your instinct with the LLM’s PhD-level knowledge is 🔥. Totally agree that you need to feed it proper context, give it structure, and still be the one steering. Otherwise, you end up with bloated, brittle code.
Big thanks for sharing your workflow and mindset — it’s honestly the most useful take I’ve read in a while.