
What does this thing actually do?

My understanding of an autonomous AI agent is that it basically calls an LLM to run system commands. Typically it'd be running Linux commands, which LLMs are pretty good at.
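For what it's worth, the loop you're describing is roughly this — a minimal sketch, where `call_llm` is a stand-in for a real API call (the hardcoded reply is just for illustration):

```python
import subprocess

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call (hypothetical).
    # A real agent would send the prompt plus prior tool output
    # to the model and get back the next command to run.
    return "ls"

def agent_step(task: str) -> str:
    command = call_llm(task)              # model proposes a shell command
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True
    )
    return result.stdout                  # fed back into the next prompt

print(agent_step("What files are here?"))
```

The runtime's whole job is wiring that loop to a real API and real tools; the "intelligence" stays on the model provider's side.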

But I don't know what this thing runs on, or whether LLMs are well trained on whatever language it uses. And I'm pretty sure it's not running a local AI, right?

158 sats \ 0 replies \ @optimism 10h

So think of all of these as LLM orchestration runtimes. None of them actually runs an LLM; they just take care of the interface spaghetti.

Main components they all have:

  1. LLM API integration - you pick your API platform and model, and give it a key
  2. Chat channels with owner - a gazillion built-in integrations with every conceivable comms platform (this is what I'm looking at to remove bloat from in #1444916)
  3. Built-in tools to communicate with the OS (from ls and cat to interacting with your browser - not all of them have all options)
  4. Skills (markdown instructions)
  5. Memory (most often just markdown files it reads and writes)
  6. "Bot Identity" (markdown)
  7. Scheduling: heartbeat and crontabs
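To make the seven pieces concrete, here's a hypothetical sketch of what such a runtime's config boils down to — the field names are illustrative, not any real runtime's API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRuntime:
    api_base: str                     # 1. LLM API platform
    model: str                        #    which model to use
    api_key: str                      #    your key
    chat_channel: str                 # 2. comms integration with the owner
    tools: list = field(              # 3. built-in OS tools
        default_factory=lambda: ["ls", "cat", "browser"])
    skills_dir: str = "skills/"       # 4. markdown instructions
    memory_dir: str = "memory/"       # 5. markdown files it reads/writes
    identity_file: str = "IDENTITY.md"  # 6. "bot identity" (markdown)
    heartbeat_seconds: int = 60       # 7. scheduling: heartbeat/crontabs

rt = AgentRuntime(
    api_base="https://api.example.com",
    model="some-model",
    api_key="dummy-key",              # placeholder, not a real key
    chat_channel="telegram",
)
```

That the whole thing reduces to a config like this plus a loop is why the runtimes can be so small.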

That's really all it is and why you can fit it in a 600kb, 10MB or 300MB runtime. Smaller if you remove bloat, and the smallest runtimes often have already removed a lot of bloat.

Out of the box, picoclaw takes about 20MB RAM on x86_64. zeroclaw was said to be better but honestly, having a million Rust crates as dependencies is awful, and linking the binary ate all my RAM and froze my dev box for a few minutes, so I figured that was too much work. It's all vibe-coded anyway, so all of this stuff is fully disposable.
