
The ambiguity in comparing those model outputs highlights an important point in this discussion.
"llm_echo_probability": 100

Can't help it if AI was trained on the way people like me write 🤷🏻‍♂️

reply

You read so model-like sometimes that it trips me out. Someday we'll be able to look into the models and see all the SimpleStacker weights.

reply

Looking back at that specific phrase, it's indeed very bot-like.

reply

It does sound like a model orienting itself for a reply.

(I tend to draw pretty heavily on this finding from image diffusion models to understand how LLMs build coherent output. I'm probably overgeneralizing it.)

reply
20 sats \ 2 replies \ @freetx 19h

Honestly, just grepping for em dashes or other unusual Unicode characters may be a better first-pass detector....
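
Something like this rough sketch would be the whole first pass (the character set is just my guess at what shows up disproportionately in LLM output, not an established fingerprint):

```python
import re

# Guess at characters common in LLM output but rare in casual typing:
# em dash, en dash, curly quotes, ellipsis. Not a real fingerprint.
LLM_ISH = re.compile(r"[\u2014\u2013\u2018\u2019\u201C\u201D\u2026]")

def first_pass_flag(text: str, threshold: int = 2) -> bool:
    """Flag text containing several of the suspect characters."""
    return len(LLM_ISH.findall(text)) >= threshold
```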

reply
18 sats \ 0 replies \ @k00b OP 19h

True, but I'm hoping to avoid that kind of arms race by using one of these black boxes. Bayesian filters would probably do most of the work I need, and much more cheaply, though.
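
For what it's worth, a minimal sketch of what such a Bayesian filter could look like (plain naive Bayes over word counts with add-one smoothing; the labels and feature choice are assumptions, not what I'd actually ship):

```python
import math
from collections import Counter

class NaiveBayesFilter:
    """Tiny two-class naive Bayes ('llm' vs 'human') with add-one smoothing."""

    def __init__(self):
        self.word_counts = {"llm": Counter(), "human": Counter()}
        self.doc_counts = {"llm": 0, "human": 0}

    def train(self, text: str, label: str) -> None:
        self.word_counts[label].update(text.lower().split())
        self.doc_counts[label] += 1

    def p_llm(self, text: str) -> float:
        """Probability the text is LLM-written, given the training data."""
        tokens = text.lower().split()
        vocab = len(set(self.word_counts["llm"]) | set(self.word_counts["human"])) or 1
        total_docs = sum(self.doc_counts.values())
        log_scores = {}
        for label, counts in self.word_counts.items():
            total_words = sum(counts.values())
            # smoothed log prior plus sum of smoothed log likelihoods
            score = math.log((self.doc_counts[label] + 1) / (total_docs + 2))
            for tok in tokens:
                score += math.log((counts[tok] + 1) / (total_words + vocab))
            log_scores[label] = score
        # normalize log scores into a probability for the 'llm' class
        m = max(log_scores.values())
        weights = {k: math.exp(v - m) for k, v in log_scores.items()}
        return weights["llm"] / sum(weights.values())
```

Word splitting is the crudest possible feature set; the point is just that the whole thing is a couple dozen lines and no API calls.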

reply

Apparently people actually use em dashes out in the wild: #1406132

reply