
I don't know what your Psionic ML framework is. Is the source code somewhere? Is there more going on than "if we presume they are running our unmodified client software, then we presume they are doing what we asked?"

I ask because I find that part most interesting. Every other project I've seen claiming decentralized inference tends to punt the problem when, imo, that's where all the value is. It's kind of like decentralized money without a blockchain and double spend prevention and all that.

Psionic is our Rust ML framework, source here: https://github.com/OpenAgentsInc/psionic

So far it's a glorified Rust port of relevant inference code from ollama/llama.cpp/MLX and training code from prime/bittensor etc

Pylon is our NIP 90 service provider that uses Psionic, all in a single Rust binary

Pylon gets assigned a job (via our job dispatcher "Nexus", a glorified Nostr relay / NIP 90 client, in our main monorepo OpenAgentsInc/openagents), processes the job through our own inference engine (not making an HTTP call to a local Ollama instance, as a lot of people do, which is easily spoofable), and sends it back over the network, probably with some verification salts/hashes showing it came from a real Psionic inference. This isn't fully built out yet; we'll focus more on it once we get the DiLoCo run going and have new models we want to run inference on.
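The intended shape is something like this minimal Rust sketch. All type and field names here are hypothetical, not the actual Pylon code; a real digest would use a cryptographic hash rather than std's `DefaultHasher`, and on its own this only proves the result wasn't tampered with in transit, not that a genuine inference produced it:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical sketch of a NIP 90 job result carrying a verification digest.
struct JobResult {
    job_id: String,
    output: String,
    salt: u64,
    digest: u64,
}

/// Bind the output to the job id and a per-job salt, so the dispatcher
/// can later recompute the digest against the claimed output.
fn finalize(job_id: &str, output: &str, salt: u64) -> JobResult {
    let mut h = DefaultHasher::new();
    job_id.hash(&mut h);
    output.hash(&mut h);
    salt.hash(&mut h);
    JobResult {
        job_id: job_id.to_string(),
        output: output.to_string(),
        salt,
        digest: h.finish(),
    }
}

/// Dispatcher-side check: recompute the digest from the returned fields.
fn verify(r: &JobResult) -> bool {
    let mut h = DefaultHasher::new();
    r.job_id.hash(&mut h);
    r.output.hash(&mut h);
    r.salt.hash(&mut h);
    h.finish() == r.digest
}

fn main() {
    let r = finalize("job-1", "some inference output", 42);
    assert!(verify(&r));

    // A spoofed output no longer matches the digest.
    let mut tampered = finalize("job-1", "some inference output", 42);
    tampered.output = "spoofed".to_string();
    assert!(!verify(&tampered));
    println!("ok");
}
```

This catches tampering between node and dispatcher, but a dishonest node could still hash a fake output; binding the digest to something only a real Psionic run can produce is the hard part.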

Separately from best-effort programmatic verification, our Nexus job dispatcher will factor in NIP 32 reputation events: untrusted nodes may get fewer jobs assigned until they build up reputation over time.
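A minimal sketch of what reputation-gated dispatch could look like, assuming NIP 32 label events have already been tallied into a per-node score (all names and thresholds here are made up for illustration, not the real Nexus logic):

```rust
/// Hypothetical node record; `reputation` is a net score tallied
/// from NIP 32 reputation labels (positive minus negative).
struct Node {
    pubkey: String,
    reputation: i64,
}

/// Cap how many concurrent jobs a node may hold based on reputation:
/// untrusted nodes start on probation and earn capacity over time.
fn max_concurrent_jobs(node: &Node) -> usize {
    match node.reputation {
        i64::MIN..=0 => 1, // unknown or negative: probation
        1..=9 => 2,
        10..=99 => 4,
        _ => 8, // well-established node
    }
}

fn main() {
    let newbie = Node { pubkey: "npub1new".to_string(), reputation: 0 };
    let veteran = Node { pubkey: "npub1vet".to_string(), reputation: 150 };
    assert_eq!(max_concurrent_jobs(&newbie), 1);
    assert_eq!(max_concurrent_jobs(&veteran), 8);
    println!("{} ok, {} ok", newbie.pubkey, veteran.pubkey);
}
```

The point of the cap (vs. excluding untrusted nodes outright) is that a new node can still earn reputation, just with bounded blast radius if it misbehaves.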

Lots to solve here, but wethinks we have the right primitives for it: it helps to fully control every part of the inference/training pipeline so it's all in binaries we write -- we can build verification into any part of it

942 sats \ 6 replies \ @k00b 9 Apr
Psionic is our Rust ML framework, source here: https://github.com/OpenAgentsInc/psionic

Awesome, thanks!

Lots to solve here but wethinks we have the right primitives for it

fwiw, the point I'm trying to get across, and I make it to anyone in this problem space (I've seen two other bitcoiner projects in this domain in the last month): I think the primitive that matters is verification, because it's the one thing no one has solved, afaik. That's not to trivialize everything else. The default position of "we'll engage in the verification arms race eventually" might still let someone build up a position in the market with everything else on point. It's just that, imo, the eat-the-market winner will have solved this problem (if it's solvable), and, also imo, projects that aren't terrified of / hyper-focused on it should be.


This is a thing with TeeML and Nvidia's confidential GPUs. I think the issue is those aren't in consumer hardware but rather enterprise-class stuff like H100s, and there's hefty overhead

Short of that, one might be able to do audit polling for reputation
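Audit polling could look roughly like this: periodically replay a sample of completed jobs on a trusted node and compare outputs, assuming deterministic (e.g. temperature-0) decoding so a trusted re-run should reproduce the claimed output exactly. The `trusted_rerun` function here is a stand-in, not a real engine call:

```rust
/// A job a remote node claims to have completed.
struct CompletedJob {
    prompt: String,
    claimed_output: String,
}

/// Stand-in for a deterministic re-run on a trusted node; in practice
/// this would invoke the same model with the same sampling settings.
fn trusted_rerun(prompt: &str) -> String {
    format!("echo:{prompt}")
}

/// Audit every `stride`-th job (stride must be >= 1);
/// return the fraction of sampled jobs whose outputs matched.
fn audit(jobs: &[CompletedJob], stride: usize) -> f64 {
    let sampled: Vec<&CompletedJob> = jobs.iter().step_by(stride).collect();
    if sampled.is_empty() {
        return 1.0;
    }
    let passed = sampled
        .iter()
        .filter(|j| trusted_rerun(&j.prompt) == j.claimed_output)
        .count();
    passed as f64 / sampled.len() as f64
}

fn main() {
    let jobs = vec![
        CompletedJob { prompt: "a".into(), claimed_output: "echo:a".into() },
        CompletedJob { prompt: "b".into(), claimed_output: "spoofed".into() },
        CompletedJob { prompt: "c".into(), claimed_output: "echo:c".into() },
    ];
    // stride 1 audits all three jobs; one of them is spoofed.
    let rate = audit(&jobs, 1);
    assert!((rate - 2.0 / 3.0).abs() < 1e-9);
    println!("ok");
}
```

The audit rate then feeds the reputation score: you only re-run a small random fraction, so honest nodes pay nothing extra and cheaters get caught probabilistically.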

101 sats \ 0 replies \ @k00b 10 Apr

My neckbeard demands bitcoin-like scale for this, which doesn't happen with reputation, but that's probably a dumb demand anyway.


For now our approach will be to let others solve the core technical issue and absorb that into our code when we need it

For example, last week this CommitLLM project came out with a proposed solution that some people seem excited about.

We asked Codex to audit their approach, compare it to what we already have in Psionic, propose an integration path, etc.

That came up with a decent analysis:
https://github.com/OpenAgentsInc/psionic/blob/main/docs/audits/2026-04-09-psionic-commitllm-adaptation-audit.md

We may or may not proceed with that specific plan, but we'll repeat the process whenever we need that level of verifiability, then port the code into Psionic and iterate as needed.

Generally I don't expect verifiability to be a big enough selling point that people will prefer a different project over ours just because it verifies more than we do. None of the AI power users on my X feed care about Gensyn or any of the other projects prioritizing inference verifiability.

Not to trivialize its importance; we'd just rather focus on network growth first and upgrade later. (Borrowing from Nostr's "worse is better" playbook.)

222 sats \ 1 reply \ @k00b 10 Apr
None of the AI power users on my X feed care about Gensyn or any of the other projects prioritizing inference verifiability.

How many decentralized inference power users are there? What kinds of customers want such inference? I'm curious which use cases experience such an inference shortage that they'll pay for it even when it's unverifiable and unaccountable.

Not to trivialize its importance, just rather focus on network growth first & upgrade later.

That makes sense. Overcooking is worse than undercooking. I'm probably outside the target demographic, because you wouldn't be doing it this way if probably-inference didn't have value.

How many decentralized inference power users are there?

Sadly not enough to build a big business around!
