
Plugging into model providers. Running large frontier models locally requires a machine with a few GPUs and 1TB of RAM afaik.
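For context, a minimal sketch of what "plugging into a provider" looks like: plain HTTPS with text in and text out, so the heavy lifting stays on the provider's hardware. The endpoint, model name, and env var below are placeholders (the response shape just mirrors the common chat-completions format), not any specific provider's or OpenClaw's config.

```python
# Minimal sketch of calling a hosted model provider over HTTPS.
# Endpoint, model name, and env var are placeholders, not real config.
import os
import requests

API_URL = "https://api.example-provider.com/v1/chat/completions"  # placeholder
API_KEY = os.environ["PROVIDER_API_KEY"]                           # placeholder env var

def ask(prompt: str) -> str:
    """Send a text prompt to the remote model and return its text reply."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "example-frontier-model",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    # Assumes a chat-completions-style response body.
    return resp.json()["choices"][0]["message"]["content"]

print(ask("Summarize this repo's README in two sentences."))
```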

At least in the office, everyone is using Codex-5.3 with a bit of Opus and Sonnet 4.6.

That's what I thought, but you mentioned RAM so I wasn't sure. If you're just sending text back and forth to the model provider, I didn't think local hardware would matter much. But maybe there's a lot of additional orchestration that happens locally.
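Roughly, the split I mean: token generation happens on the provider's GPUs, while the agent loop, tool dispatch, and any file or browser work run locally. A rough sketch, where `call_provider` and the tool names are hypothetical placeholders rather than OpenClaw's actual interfaces:

```python
# Sketch of the local/remote split: inference is remote, orchestration is local.
# `call_provider` and the tool names are hypothetical placeholders.
import subprocess

def call_provider(prompt: str) -> str:
    """Placeholder for the remote HTTPS call; only text crosses the wire."""
    raise NotImplementedError("wire this up to your provider of choice")

def run_tool(name: str, arg: str) -> str:
    """Local tool execution: this is the part that uses local CPU and RAM."""
    if name == "shell":
        return subprocess.run(arg, shell=True, capture_output=True, text=True).stdout
    if name == "read_file":
        with open(arg) as f:
            return f.read()
    return f"unknown tool: {name}"

def agent_step(history: list[str], user_msg: str) -> str:
    """One turn: remote inference, optional local tool call, remote inference again."""
    reply = call_provider("\n".join(history + [user_msg]))   # runs on provider hardware
    if reply.startswith("TOOL:"):                            # e.g. "TOOL: shell ls -la"
        _, name, arg = reply.split(" ", 2)
        tool_output = run_tool(name, arg)                    # runs locally
        reply = call_provider("\n".join(history + [user_msg, tool_output]))
    history += [user_msg, reply]
    return reply
```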

147 sats \ 0 replies \ @k00b 7h

OpenClaw is pretty bloated. If you're having it run web browsers and stuff, memory can get tight.
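For a sense of where the memory goes: even one headless browser instance can take a few hundred MB on top of the agent process. A generic sketch using Playwright's sync API, not OpenClaw's actual browser integration:

```python
# Generic headless-browser example (pip install playwright && playwright install chromium).
# Illustrates the memory-heavy local piece; not OpenClaw's actual integration.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)  # each Chromium instance can use hundreds of MB
    page = browser.new_page()
    page.goto("https://example.com")
    print(page.title())                         # the remote model only ever sees extracted text like this
    browser.close()
```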
