Two really different realities I guess! You must be learning a lot in that space. Is the gap big? Or is it just fine to work with smaller local models?
Vibe coding answer (used for personal efficiency)
I build my frameworks to be LLM-agnostic. Since Claude 4.1 I've mostly used Claude Code and built a pipeline around it, but switching to another LLM / coding framework is as easy as writing a ~20-line javascript "plugin" for an executor component and changing some yaml. Since people were saying codex 5.3 is really good, I've been meaning to take some time next week and give it some work.
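To make the plugin idea concrete, here's a minimal sketch of what such an executor plugin could look like. The shape (a `name` the executor matches against its yaml config, plus a `run()` it dispatches to) is my assumption, not from the post, as is the use of Claude Code's non-interactive `-p` flag:

```javascript
// Hypothetical executor plugin, ~20 lines. Assumes the executor loads
// this module when its yaml config names "claude-code" as the backend.
const { execFile } = require("node:child_process");

module.exports = {
  name: "claude-code",

  // The executor calls run(prompt) and awaits the model's text output.
  // Swapping LLMs means shipping another module with the same interface.
  run(prompt) {
    return new Promise((resolve, reject) => {
      // claude -p runs Claude Code in print (non-interactive) mode.
      execFile("claude", ["-p", prompt], (err, stdout) =>
        err ? reject(err) : resolve(stdout.trim())
      );
    });
  },
};
```

With that interface, pointing the pipeline at a different coding agent is a matter of writing one more small module and flipping a name in the yaml.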
Business answer (used for work that is often highly confidential)
For work things I cannot use gpt or claude or gemini, because they all involve giving a third party access to documents. So for that I actively pursue "the best" that I can run locally, which means I often bench local models on a job. In many cases that just means passing a different argument, and sometimes playing with prompts a bit, since prompt tuning in particular doesn't transfer the same way across models. For example, back in December I used more qwen3 and gemma-3(n); now I use more jan-v3-base (which, funnily enough, performs better at half the param size).
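As a rough illustration of what "passing a different argument" looks like when benching local models on the same job, here's a minimal sketch assuming an Ollama-style local endpoint on port 11434; the model tags, the prompt, and the timing harness are all illustrative, not the setup from the post:

```javascript
// Bench the same prompt across local models by swapping one argument.
// Assumes a local Ollama-compatible server (Node 18+ for global fetch).
const models = ["qwen3", "gemma3n", "jan-v3-base"]; // illustrative tags

async function generate(model, prompt) {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  const { response } = await res.json();
  return response;
}

async function main() {
  const prompt = "Summarize this contract clause: ..."; // placeholder job
  for (const model of models) {
    const t0 = Date.now();
    const out = await generate(model, prompt);
    console.log(`${model}: ${Date.now() - t0} ms, ${out.length} chars`);
  }
}

main();
```

The model swap really is one string; what doesn't transfer for free is the prompt itself, which is where the per-model tuning time goes.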