> I don't know how to understand AI slop.
I think that there's no difference between human slop and AI slop, if I'm honest. I feel that the article, in stating "Slop [is] born of effortless, replicable processes", is hinting at that too, since it doesn't make the definition exclusive to AI slop but applies it to all slop. AI just makes effortless and replicable something that before required effort for most people: "talking out of one's ass" rarely came with textual eloquence before, which made it easy to spot; now that the eloquence comes for free, it's harder to detect. But it happened all along, and it's what scammers, for example, exploited.
> Is some stuff better just because a human put some time into it?
Neither "time" nor "human" is a guaranteed means to quality. I think this is our tribal instinct talking to us, because we're perceiving a threat of sorts. I find it an interesting thing to observe, because I expect that how humanity reacts to AI could be similar to how humanity would react to alien contact.
> This gets at whether what we call "feeling" actually comes from some innate knowledge that it's scarce. [...] Is that what we mean by lacking feeling?
I think that the feeling part is what triggers our actions, and that the underlying recipe for it is self-awareness, which autocorrect definitely doesn't have¹, combined with finite time, which means we are constantly fighting the impending end of self. This is what makes us take action. An AI (current generation) doesn't have any concept of time, or even of an "end", because it is a piece of software.
Even if we ran the AI software in a loop so that it could be triggered by something, it currently doesn't make choices, because it doesn't have the underlying feeling of being rewarded or punished. It emulates this nowadays in a reasoning process, but that too is simulation, and pre-programmed.
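To make that concrete, here's a minimal sketch of such a trigger loop (Python; `llm` and `wait_for_trigger` are hypothetical stand-ins for a model API and an event source, not any real library). The point is that the "choice" is just a text completion from frozen weights; nothing in the loop rewards or punishes the model:

```python
import time

def llm(prompt: str) -> str:
    # Hypothetical stand-in for any LLM completion API.
    # The weights behind it are frozen: nothing this loop does
    # can reward or punish the model.
    return "simulated completion for: " + prompt

def wait_for_trigger() -> str:
    # Hypothetical event source: a timer, an inbox, a webhook...
    time.sleep(60)
    return "new email arrived"

# The loop makes the model *look* agent-like: it acts whenever
# something happens. But each iteration just maps input text to
# output text. There's no concept of elapsed time, no "end of
# self", and no reward signal flowing back in at inference.
while True:
    event = wait_for_trigger()
    print(llm(f"Something happened: {event}. What should be done?"))
```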
However, just because a person who has feelings spent some of their finite time on something doesn't make it better. In fact, because time is finite for a human, I'd argue we're more prone to automate everything (make everything effortless and replicable, in the words of the article).
There can also be a moral high ground in the automation: I automate my business emails (which I judge to be less important) so that I can spend the newly saved time properly home-schooling my kids (which I judge to be more important)...
> PS. What do you think of Voskuil's theory?
I think the observation is right, but shitcoins aren't bitcoin, so it's not an apples-to-apples comparison. The fact that these networks have their own native assets isn't really optimal. What I find interesting, though: over the years I've met some shitcoin devs at miner events, and some of them actually agreed with me that, paraphrased, flooding the markets with low-effort tokens is also a form of slop.

Footnotes

  1. "alignment" makes LLMs simulate self-awareness through reinforcement training and system prompts, so that in the interaction, a human perceives it as self-aware, but this is a trick to comfort humans, not an actual feature. Fake it until you make it.