Are most people you know running frontier models on their local machines? Or are they plugging into an online API service?
That's what I thought, but you mentioned RAM so I wasn't sure. Coz if you're just running text-based feeds to and from the model provider, I didn't think hardware would matter much. But maybe there's a lot of additional orchestration that happens locally.
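To show what I mean by "text-based feeds": if the model sits behind a hosted API, the local box basically just does an HTTP round trip like the sketch below, so its own hardware barely matters (endpoint URL, model id, and env var name are made up for illustration):

```python
# Rough sketch of the hosted-API case: the local machine just POSTs a prompt
# and reads back text, so it needs almost no hardware of its own.
# The endpoint, model id, and env var below are placeholders, not real values.
import os
import requests

API_URL = "https://api.example-provider.com/v1/chat/completions"  # placeholder endpoint
API_KEY = os.environ["PROVIDER_API_KEY"]  # placeholder env var

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "some-frontier-model",  # placeholder model id
        "messages": [{"role": "user", "content": "Summarize this repo's README."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```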
That's what you'd get the Mac Studio M3 Ultra with 512GB RAM [1] for, or a cluster of 4x M4 Pro machines with 128GB each - see #1360715 for the latter, which is perhaps the better setup (because you can keep adding Mac minis to it).
On that you'd run a quantized GLM-5 (or Kimi K2.5 on a cluster of 8), then run your agent on a much lower-spec box.
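Rough sizing math for why it takes that much memory (the parameter counts below are assumptions just to show the arithmetic, not official specs):

```python
# Back-of-the-envelope: weight memory ≈ parameter_count * bits_per_weight / 8,
# plus KV-cache and runtime overhead on top. Parameter counts are rough
# assumptions for illustration only.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB for a quantized model."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for name, params_b in [("~355B-class model", 355), ("~1T-class MoE", 1000)]:
    for bits in (4, 8):
        print(f"{name} @ {bits}-bit ≈ {weight_gb(params_b, bits):,.0f} GB of weights")

# A ~1T-param MoE at 4-bit is ~500 GB of weights alone, which is why it wants
# the 512GB box (tight) or a multi-machine cluster; a ~355B model at 4-bit
# (~180 GB) fits comfortably on the 512GB machine.
```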
I'm still looking for a clone of openclaw that I can actually compile - maybe nullclaw, because with less sloploc the chance of it failing to compile is lower 😂 Going to be "fun" diving into Zig tho, ugh.
[1] 🥺 I remember when my new computer (in the late 80s, iirc) had 512kB of RAM and that was a beast.
thanks dude! I think I'm gonna wait 1-2 more months before I dig into this again...once it's a bit less technical and scary! lol
Which model are you using? Most folks raving about claw are using the latest frontier models.
You mostly just need to make sure you have enough RAM - 16GB or more should be fine. Nearly any old computer will do: wipe it clean and start fresh from there.
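If you're not sure how much RAM a machine actually has, something like this will tell you (assumes Python plus the third-party psutil package, installed with `pip install psutil`):

```python
# Quick check that the box has 16GB+ of RAM.
import psutil

total_gib = psutil.virtual_memory().total / (1024 ** 3)
print(f"Total RAM: {total_gib:.1f} GiB")
if total_gib < 16:
    print("Probably too little RAM for a comfortable setup.")
```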
Setup is still fairly technical, and it's best to have a local friend help you get set up - someone who can sit next to you and pair up.
Be very careful what you give it access to - whether on your local network or your accounts in various places. I recommend not giving it access to anything except segregated/parallel versions of things.
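Concretely, I mean something like this: the agent process only ever sees a token for a throwaway/parallel account, and it refuses to fall back to your real credentials (a minimal sketch - the env var names are hypothetical):

```python
# Sketch of the "segregated/parallel accounts" idea: hand the agent a
# dedicated, limited-scope token and keep your main credentials out of its
# environment entirely. Env var names below are hypothetical.
import os
import sys

sandbox_token = os.environ.get("AGENT_SANDBOX_TOKEN")   # token for a parallel/test account
main_token = os.environ.get("MY_MAIN_ACCOUNT_TOKEN")    # your real credentials

if not sandbox_token:
    sys.exit("No sandbox token set - refusing to fall back to main credentials.")
if main_token:
    print("Warning: main-account token is visible in this environment; "
          "consider running the agent in a shell/container that doesn't have it.")

# ...pass only sandbox_token to the agent from here on...
```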