
Features

  • Impossibly Small: 678 KB static binary — no runtime, no VM, no framework overhead.
  • Near-Zero Memory: ~1 MB peak RSS. Runs comfortably on the cheapest ARM SBCs and microcontrollers.
  • Instant Startup: <2 ms on Apple Silicon, <8 ms on a 0.8 GHz edge core.
  • True Portability: Single self-contained binary across ARM, x86, and RISC-V. Drop it anywhere, it just runs.
  • Feature-Complete: 22+ providers, 18 channels, 18+ tools, hybrid vector+FTS5 memory, multi-layer sandbox, tunnels, hardware peripherals, MCP, subagents, streaming, voice — the full stack.
184 sats \ 2 replies \ @optimism 9h

This was on my list after zeroclaw and picoclaw, but I wasn't looking forward to debugging zig haha.

For now, picoclaw works for me, so I'm exploring that a bit. A blessing because golang is fun.


What does this thing actually do?

My understanding of an autonomous AI agent is that it calls an LLM to run system commands, basically. Typically it'd be calling Linux commands, which LLMs are pretty good at.

But I don't know what this thing runs on, or whether LLMs are well trained on whatever language it uses. And I'm pretty sure it's not running a local AI, right?

158 sats \ 0 replies \ @optimism 8h

So think of all this as LLM orchestration runtimes. None of these actually run an LLM, it's just taking care of the interface spaghetti.

Main components they all have:

  1. LLM API integration - you pick your API platform, the model to use and you give it a key
  2. Chat channels with owner - a gazillion built-in integrations with every conceivable comms platform (this is what I'm looking at to remove bloat from in #1444916)
  3. Built-in tools to communicate with the OS (from ls and cat to interacting with your browser - not all of them have all options)
  4. Skills (markdown instructions)
  5. Memory (most often just markdown files it reads and writes)
  6. "Bot Identity" (markdown)
  7. Scheduling: heartbeat and crontabs
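Those seven components could be sketched as one config struct. This is purely illustrative Go, not the actual schema of picoclaw, zeroclaw, or any other runtime; every field name here is an assumption:

```go
package main

import "fmt"

// AgentConfig sketches what these orchestration runtimes wire together.
// Field names are hypothetical; each field maps to one item in the list above.
type AgentConfig struct {
	Provider  string   // 1. LLM API platform, e.g. "openrouter"
	Model     string   //    the model to use on that platform
	APIKey    string   //    the key you give it
	Channels  []string // 2. comms integrations the owner chats through
	Tools     []string // 3. OS-facing tools the model may invoke
	Skills    []string // 4. markdown instruction files
	MemoryDir string   // 5. markdown files the agent reads and writes
	Identity  string   // 6. "bot identity" markdown
	Heartbeat string   // 7. scheduling, e.g. a cron expression
}

func main() {
	cfg := AgentConfig{
		Provider:  "openrouter",
		Model:     "some-small-model",
		APIKey:    "sk-...",
		Channels:  []string{"telegram"},
		Tools:     []string{"ls", "cat"},
		Skills:    []string{"skills/notes.md"},
		MemoryDir: "memory/",
		Identity:  "identity.md",
		Heartbeat: "*/15 * * * *",
	}
	fmt.Printf("%+v\n", cfg)
}
```

Seen this way, the runtime is mostly plumbing between these fields, which is why the binary can stay tiny.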

That's really all it is, and why you can fit it in a 600 KB, 10 MB, or 300 MB runtime. Smaller if you remove bloat, and the smallest runtimes often have already removed a lot of bloat.

Out of the box, picoclaw takes about 20 MB of RAM on x86_64. zeroclaw was said to be better, but honestly, having a million Rust crates as dependencies is awful, and linking the binary took near-infinite RAM and froze my dev box for a few minutes, so I figured that was too much work. It's all vibe-coded anyway, so all of this stuff is fully disposable.


Zig?

117 sats \ 0 replies \ @optimism 8h

just some next-level puritan Rust-like stuff that requires you to learn a new systems programming language. It does look relatively clean from the sparse code I've glanced over, so that's cool. But I'm in no mood to give in to yet another hype that may very well just turn into another version of cargo-hell fanboyism (and thus, supply-chain eek)


we are witnessing the dawn of nanomachines

15 sats \ 0 replies \ @adlai 8h

not really... these are just peripherals for interfacing between humans and the inference providers.

a true nanomachine would not depend on some datacentre; at most there would be swarm coordination, and an individual unit shouldn't become useless when isolated.

15 sats \ 0 replies \ @Ohtis 12h -123 sats

This is impressive — a full AI stack in under 700 KB is wild. Really makes you rethink what “lightweight” can mean.