Simplicity.

I pick one tool; it works and gets the job done. I stick with it and never think about it again. Done. The cost of switching tools is too high. I treat LLM models the same way I treat a text editor or a library, and the good ones are just fine.

Furthermore, a 0.5 improvement in a model doesn't help me much if I don't get a 0.5 improvement in my brain too.

What's your take on this? Do you often switch models and tooling around these models?

200 sats \ 3 replies \ @optimism 8h

Vibe coding answer (used for personal efficiency)

I build my frameworks to be LLM-agnostic. Since Claude 4.1 I've mostly used Claude Code and built a pipeline around it, but switching to another LLM / coding framework is as easy as a 20-line javascript "plugin" for an executor component, plus changing some yaml. Since people were saying codex 5.3 is really good, I've been meaning to take some time next week and give it some work.
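
A minimal sketch of what such a plugin could look like, assuming a hypothetical executor interface where each backend wraps a CLI; the backend names, commands, and flags here are illustrative, not a description of the actual pipeline:

```javascript
// Hypothetical executor plugin interface: each backend is a single
// function from prompt to output, so swapping LLM frameworks is just
// registering another ~20-line wrapper. Commands/flags are illustrative.
import { spawn } from "node:child_process";

function makeCliBackend(command, args) {
  return function run(prompt) {
    return new Promise((resolve, reject) => {
      const proc = spawn(command, [...args, prompt]);
      let out = "";
      proc.stdout.on("data", (chunk) => (out += chunk));
      proc.on("close", (code) =>
        code === 0 ? resolve(out) : reject(new Error(`${command} exited ${code}`))
      );
    });
  };
}

// The executor looks backends up by name; which name is used
// would come from the yaml config mentioned above.
const backends = {
  "claude-code": makeCliBackend("claude", ["-p"]),
  "codex": makeCliBackend("codex", ["exec"]),
};

export function getBackend(name) {
  const backend = backends[name];
  if (!backend) throw new Error(`unknown backend: ${name}`);
  return backend;
}
```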

Business answer (used for work that is often highly confidential)

For work I can't use gpt or claude or gemini, because they all involve giving a third party access to documents. So there I actively pursue "the best" model I can run locally. That means I often bench local models on a job, which in many cases just means passing a different argument, and sometimes playing with the prompts a bit, since not all tuning, especially prompting, carries over across models. For example, back in December I used more qwen3 and gemma-3(n). Now I use more jan-v3-base (which, funnily, performs better at half the param size).
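
For illustration, a sketch of what "a different argument" can look like when the local models sit behind an OpenAI-compatible server (llama.cpp, Ollama, and similar expose one); the endpoint URL and the timing harness are assumptions, only the model names come from above:

```javascript
// Sketch of a per-job bench against a local OpenAI-compatible endpoint.
// The URL is a placeholder; only the `model` field changes between runs.
const BASE_URL = "http://localhost:8080/v1/chat/completions";

async function bench(model, prompt) {
  const start = Date.now();
  const res = await fetch(BASE_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model, // the "different argument"
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return { model, ms: Date.now() - start, answer: data.choices[0].message.content };
}

// Run sequentially so timings don't interfere (ESM top-level await).
for (const model of ["qwen3", "gemma-3n", "jan-v3-base"]) {
  console.log(await bench(model, "Summarize this doc in one sentence."));
}
```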

reply

Two really different realities I guess! You must be learning a lot in that space. Is the gap big? Or is it just fine to work with smaller local models?

reply
100 sats \ 1 reply \ @optimism 7h

The gap is huge right now. I was planning to check out ironclaw (#1430496) with local models this weekend but since I was lazy yesterday I'm busy catching up on work lol

reply

Lol

reply