That's what I thought, but you mentioned RAM so I wasn't sure. If you're just sending text-based requests to and from the model provider, I didn't think hardware would matter much. But maybe there's a lot of additional orchestration that happens locally.
That's what you'd use the Mac Studio M3 Ultra w/ 512GB RAM [1] for (the Mac mini doesn't ship with an Ultra chip), or a cluster of 4x M4 Pro machines with 128GB; see #1360715 for the latter, which is perhaps the better setup (because you can keep adding Mac minis to it).
You'd run quantized GLM-5 (or Kimi K2.5 on a cluster of 8), then run your agent on a much lower-spec box.
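As a rough sanity check on those RAM numbers, here's a back-of-envelope sketch. The parameter counts (a GLM-4.5-class ~355B model, a Kimi-class ~1T model) and the 20% overhead factor are my assumptions for illustration, not figures from this thread:

```python
def quantized_size_gb(params_billion: float, bits_per_weight: float,
                      overhead: float = 1.2) -> float:
    """Approximate resident memory in GB for a quantized model:
    parameters * bits/8, plus ~20% for KV cache and runtime buffers."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

print(quantized_size_gb(355, 4))   # ~213 GB: fits on a single 512GB box
print(quantized_size_gb(1000, 4))  # ~600 GB: needs a cluster
```

This is why the ~355B-class model is single-box territory at 4-bit while the ~1T-class one pushes you into clustering: the weights alone exceed any single machine's unified memory.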
I'm still looking for a clone of openclaw that I can actually compile. Maybe nullclaw, since with less sloploc the chance of it failing to compile is lower 😂 Going to be "fun" diving into Zig though, ugh.
[1] 🥺 I remember when my new computer (in the late 80s, iirc) had 512kB RAM and that was a beast. ↩
Are most people you know running frontier models on the local machine? Or are they plugging into an online API service?