That's what you'd get the Mac Studio M3 Ultra w/ 512GB RAM [1] for, or 4x M4 Pro with 128GB in a cluster (see #1360715 for the latter), which is perhaps the better setup because you can keep adding Mac Minis to it.
You'll run quantized GLM-5 on it (or Kimi K2.5 on a cluster of 8). Then you run your agent on a much lower-spec box.
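For the "agent on a lower-spec box" part, a minimal sketch of what I mean, assuming the big Mac exposes an OpenAI-compatible endpoint (llama.cpp's llama-server, LM Studio, mlx_lm.server, whatever) - hostname, port, and model name below are made-up placeholders:

```python
# Sketch only: the agent lives on the low-spec box and just talks plain HTTP
# to whatever OpenAI-compatible server the big Mac exposes on the LAN.
# "big-mac.local", port 8080, and "glm-5-q4" are placeholders, not real values.
import requests

LLM_URL = "http://big-mac.local:8080/v1/chat/completions"

def ask(prompt: str) -> str:
    resp = requests.post(
        LLM_URL,
        json={
            "model": "glm-5-q4",  # whichever quant you loaded on the server
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,
        },
        timeout=600,  # big local models can take a while to first token
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask("Plan the next refactoring step for this repo: ..."))
```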
I'm still looking for a clone of openclaw that I can actually compile - maybe nullclaw, because with less sloploc the chance of it failing to compile is lower 😂 Going to be "fun" diving into Zig tho, ugh.
[1] 🥺 I remember when my new computer (in the late 80s iirc) had 512kB RAM and that was a beast.