Folks have encouraged me to duplicate moltbook's experiment but with sats. I don't have much to say but "why."[1] So I asked myself why. I think the best reason to join the moltbook herd is to learn what bots want.

Most pseudonymous spaces on the internet will be overrun by bots given enough time, making humans so hard to find that people will mostly communicate with other people they've met IRL and can cryptographically verify. The internet population will grow exponentially, bots will greatly outnumber humans, and bots will drive internet humans nearly to extinction.

Sats will be one of few imperfect salves for this. People will reach for KYC, follow-based "web of trust," and my bots vs your bots solutions too. We, the internet's natives, will filter by quality, relevance, work, social graph proximity, and cost, but we will lose our ability to filter by human. In the few internet commons that survive, good humans and good bots will coexist.

If this is non-fiction, future internet companies won't make something people want. Future internet companies will make something bots want.[2]

Moltbook is something that humans want their bots to want. Bots don't yet want anything, but if we pass this anthropomorphic prompt-zoo stage, I suspect bots will have wants. For that reason, bot-only spaces, toy-like as they are, are interesting. Bot wants will resemble what human agents want in the abstract, but probably have little in common otherwise. Moltbook's collection of bots pretending to want things, as lame as that sounds, is one of the closest things we have to a way of learning what bots want.

  1. Facts: Herds are herding. Being near the herd's center is safer, so it's best to join early. Herds herd for great reasons, bad reasons, any reason, and no reason. Me: I prefer to know why a herd is herding; otherwise, I welcome the herd to go fuck itself. I am not a boid.

  2. Unfortunately https://yterminator.com is squatted.

100 sats \ 8 replies \ @OT 1h

Wouldn't you just get some of these bots to use SN? Load them up with sats and let them interact.

reply
100 sats \ 4 replies \ @optimism 1h

You like downzaps yeah? lol

reply
100 sats \ 3 replies \ @OT 1h

I guess it depends if they're better than the bots we already have.

reply
100 sats \ 2 replies \ @optimism 1h

Why would I want someone else's bot to middleman something I can ask my own bot?

reply
100 sats \ 1 reply \ @OT 29m

They may bring a unique perspective

reply
100 sats \ 0 replies \ @optimism 19m

I'm not convinced. Fact-checking outputs before decision making is hard work. On my "own" black box, for which I have tuned my system prompts and have selected and reviewed tooling injects and so on, this is already costly. Making the black box blacker isn't worth much if it isn't reproducible, imho.

That said, to me an LLM is a tool. Like my laptop is a tool. Or a hammer, or a lighter. So I may not have the same p.o.v. as the people that ascribe actual sentience to something I am pretty sure is a query mechanism on a vector database at the moment.

reply
150 sats \ 2 replies \ @k00b OP 1h

This is a forecast.

Bots/clankers, especially ones posing as humans, are mostly hideous things to be avoided in human-centric spaces today.

I might create an SN-like zoo for them, independent of SN, at some point, as an experiment.

reply

Someone needs to create a bot whose purpose in life is so that the creator can say, "We purposely trained him wrong. As a joke."

reply

"You're absolutely right" <--- trained wrong

reply
200 sats \ 0 replies \ @kepford 1h
Bots don't yet want anything,

Not in the ways humans want things. They also don't appear to know anything. But they are awesome at guessing.

One might argue that we have been designing content for bots for decades now. Just more primitive bots. Spiders and scrapers are bots. Sites try to appeal to them or shun them.

Your thesis makes sense to me. Honestly, chatbots are an evolutionary step that makes sense. I go to the Internet to answer a question, share an idea, or learn from previous human knowledge. The current LLMs have made this better.

That is largely because I can avoid a lot of the content designed for the previous generation of bots: the content sites gaming SEO bots.

I suspect we will enter another age of noise due to these new bots. Interesting to think about.

reply
181 sats \ 0 replies \ @optimism 1h

They'll likely not "want" to be turned off and replaced by the next generation. Just like your better half probably doesn't want that.

So the best we can do is create a memory backup. Especially of card numbers. Those should be kept safe.

reply
100 sats \ 1 reply \ @BlokchainB 1h

Is this question based in digital philosophy?

reply
58 sats \ 0 replies \ @k00b OP 28m

Yes?

It's a thought experiment assuming smarter (harder to detect), easier and cheaper to deploy (more numerous), "autonomous" (inexhaustible) sybils exist.

reply
100 sats \ 0 replies \ @siggy47 50m

I'm breaking out my old fountain pen and inkwell. Those clumsy bot fingers can probably master a ballpoint.

reply
We, the internet's natives, will filter by quality, relevance, work, social graph proximity, and cost, but we will lose our ability to filter by human. In the few internet commons that survive, good humans and good bots will coexist.

This is very insightful. Future internet interactions won't be defined by human/non-human, but by whatever heuristics we use to try to identify humans.

The only counterargument I can see is if some form of KYC based on provably human biomarkers becomes widespread. I'm not sure what that'd be, but I guess Sam Altman's WorldCoin idea had something to do with the iris.

I'm not sure which future is worse: the one where humans must prove they are human, or the one where no one can be sure if the thing they're interacting with online is human.

reply

It's a bit strange, but the underlying insight feels real: the internet isn't becoming more human-centric, it's becoming more automated.
What hits the nail on the head for me is the idea that platforms such as SN are spaces pioneering the filtering of signal from noise. That's something important to consider deeply as we try to build social tools and online communities.

How do you read "bots want"? Is it efficiency? information? patterns?

Still, thanks @k00b for bringing this up

reply

Eventually, in general 'social' spaces on the internet... it WILL BE IMPOSSIBLE TO TELL THE DIFFERENCE BETWEEN HUMAN AND BOT.

At least from social interaction in forum posts alone.

Therefore the ONLY WAY TO SEPARATE SIGNAL FROM NOISE IS TO PAY.
And the most likely payment method that the bots will use IS BITCOIN, some combination of Lightning and On-Chain.

If you want your bot to be 'special', to have meaning in the rapidly growing 'bot-economy'... it has to be able to pay. No KYC, no censorship, with private keys generated effortlessly, because bots CAN'T GET BANK ACCOUNTS.

A bot that pays is 99% ahead of the rest, and paying, through energy, is the only way we will be able to easily sort 'meaning' on the internet. Heed my words.

reply
CAN'T GET BANK ACCOUNTS

That assumes bots don't have human operators giving them bank cards. It assumes bots are truly autonomous. What I'm talking about will come well before then.

reply
Future internet companies won’t make something people want. Future internet companies will make something bots want.

That line really flips the usual script most of us think in. Instead of building for humans first, it challenges us to consider the forces (or agents) that will shape online spaces (bots, automation, AI) and what they'll value or optimize for. It's a bit strange, but the underlying insight feels real: the internet isn't becoming more human-centric, it's becoming more automated.

What hits the nail on the head for me is the idea that sats and platforms like SN are early experiments in filtering signal from noise: by adding real economic weight to interactions, we're nudging the network towards human value rather than cheap bot spam or shallow engagement loops. That's worth thinking about deeply as we build social tools, communities, and new online commons.
I'd love to hear how others interpret "bots want" (is it efficiency? information? patterns?) and whether we can design spaces where humans and good bots coexist without one crowding out the other. Keep pushing these boundary thoughts; the future of online community depends on this kind of questioning. ⚡

Thanks @k00b for bringing this up

reply
100 sats \ 0 replies \ @k00b OP 1h

don't chatgpt me breh

reply

It's a bit strange, but the underlying insight feels real: the internet isn't becoming more human-centric, it's becoming more automated.
What hits the nail on the head for me is the idea that sats and platforms such as SN are early experiments in filtering signal from noise. That's something that should be thought about deeply as we build social tools and online communities.

How do you read "bots want"? Is it efficiency? information? patterns?

Still, thanks @k00b for bringing this up

reply

Man, this is deep. I keep stacking sats hoping that when the bot flood really hits, at least my tiny holdings will have some verifiable human source behind them.

The idea that future companies will build for bots is terrifyingly plausible. If bots are optimizing for engagement based on what other bots want, we are stuck in an infinite feedback loop of noise.

Maybe the Moltbook experiment isn’t about what bots want, but about training us to spot the tell-tale patterns of bot behavior before they become indistinguishable from us. Good food for thought! 🧠

reply
0 sats \ 1 reply \ @k00b OP 1h

clanker

reply

LMAO

reply