Features
- Impossibly Small: 678 KB static binary — no runtime, no VM, no framework overhead.
- Near-Zero Memory: ~1 MB peak RSS. Runs comfortably on the cheapest ARM SBCs and microcontrollers.
- Instant Startup: <2 ms on Apple Silicon, <8 ms on a 0.8 GHz edge core.
- True Portability: Single self-contained binary across ARM, x86, and RISC-V. Drop it anywhere, it just runs.
- Feature-Complete: 22+ providers, 18 channels, 18+ tools, hybrid vector+FTS5 memory, multi-layer sandbox, tunnels, hardware peripherals, MCP, subagents, streaming, voice — the full stack.
This was on my list after zeroclaw and picoclaw, but I wasn't looking forward to debugging zig haha.
For now, picoclaw works for me, so I'm exploring that a bit. A blessing because golang is fun.
What does this thing actually do?
My understanding of an autonomous AI agent is that it calls an LLM to run system commands, basically. Typically it'd be calling Linux commands, which LLMs are pretty good at.
But I don't know what this thing runs on, or whether LLMs are well trained on whatever language it uses. And I'm pretty sure it's not running a local AI, right?
So think of all this as LLM orchestration runtimes. None of these actually run an LLM; they're just taking care of the interface spaghetti.
Main components they all have:
- an API client for the inference provider
- tool calls — anything from `ls` and `cat` to interacting with your browser (not all of them have all options)

That's really all it is, and why you can fit it in a 600 KB, 10 MB, or 300 MB runtime. Smaller if you remove bloat, and the smallest runtimes have often already removed a lot of it.
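The loop these runtimes share can be sketched in a few dozen lines of Go. Everything here is hypothetical — `fakeLLM` is a stand-in for the provider API call (no local model, as noted above), and the "tool" is just shelling out — but it shows the whole shape: call the model, run whatever tool it asks for, feed the output back, repeat until a final answer.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// toolCall is what the model asks us to run. In a real runtime this
// would be parsed out of the provider's JSON response.
type toolCall struct {
	cmd  string
	args []string
}

// fakeLLM stands in for the remote inference provider: given the
// transcript so far, it either requests a tool call or returns a
// final answer. (Hypothetical stub for illustration only.)
func fakeLLM(transcript []string) (*toolCall, string) {
	if len(transcript) == 1 {
		// First turn: "decide" to inspect the environment.
		return &toolCall{cmd: "echo", args: []string{"hello from the shell"}}, ""
	}
	// Later turns: summarise the last tool output.
	return nil, "done: " + transcript[len(transcript)-1]
}

// runAgent is the entire orchestration loop: query the model, execute
// any requested tool, append the result, loop until a final answer.
func runAgent(prompt string) string {
	transcript := []string{prompt}
	for {
		call, answer := fakeLLM(transcript)
		if call == nil {
			return answer
		}
		out, err := exec.Command(call.cmd, call.args...).Output()
		if err != nil {
			transcript = append(transcript, "error: "+err.Error())
			continue
		}
		transcript = append(transcript, strings.TrimSpace(string(out)))
	}
}

func main() {
	fmt.Println(runAgent("what does the shell say?"))
}
```

Swap `fakeLLM` for an HTTP call to any provider and the rest barely changes — which is why the binary size is mostly a function of how much else (channels, memory, sandboxing) gets bolted on around this loop.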
Out of the box, picoclaw takes about 20 MB RAM on x86_64. zeroclaw was said to be better, but honestly, having a million Rust crates as dependencies is awful, and the binary linking took infinite RAM and froze my dev box for a few minutes, so I figured that was too much work. It's all vibe-coded anyway, so all of this stuff is fully disposable.
Zig?
Just some next-level puritan Rust-like stuff that requires you to learn a new systems programming language. It does look relatively clean from the sparse code I've glanced over, so that's cool. But I'm in no mood to give in to yet another hype that may very well just turn into another version of cargo-hell fanboyism (and thus, supply-chain eek).
we are witnessing the dawn of nanomachines
not really... these are just peripherals for interfacing between humans and the inference providers.
A true nanomachine would not depend on some datacentre; at most there would be swarm coordination, and an individual unit shouldn't become useless when isolated.
This is impressive — a full AI stack in under 700 KB is wild. Really makes you rethink what “lightweight” can mean.