
This post puts me in quite a pickle.
  • I love little examples on foundational things.
  • However, I'm not an expert in the topic, so I can't just eyeball it and see if it's correct.
  • It's probably written by an LLM, which means that, in addition to setting a bad precedent, it could be filled with all varieties of subtle horseshit.
What does one do?
Beyond this particular post, how does one think about this larger topic? We've talked before, in various places, about dealing w/ LLM comment-spam; but what about stuff like this? And even if @r3drun3 lovingly wrote this post solely from the contents of their own expertise while curled in bed with a cup of chamomile tea, how does one deal with a reality where you don't know if it's real or manufactured, and, if it's manufactured, whether it might be mostly right but crucially wrong?
I guess this is just a very concrete example of an issue we're going to have to sort out, on SN and in civilization as a whole.
It's probably written by an LLM, which means that, in addition to setting a bad precedent, it could be filled with all varieties of subtle horseshit.
Any post can be filled with all varieties of subtle horse poop. So why does using an LLM set a bad precedent?
how does one deal with a reality where you don't know if it's real or manufactured
Lots of real things are manufactured.
if it's manufactured, whether it might be mostly right but crucially wrong?
That risk exists whether or not it is manufactured. The only way to distinguish error from truth, absent any infallible authority, is through some degree of research and careful thought.
reply
Good comment, good objections.
Any post can be filled with all varieties of subtle horse poop. So why does using an LLM set a bad precedent?
Because generating horseshit at 100000x velocity renders intolerable things that could be tolerated in lesser doses.
That risk exists whether or not it is manufactured.
Yes and no. Yes, in theory, the standard for an LLM-generated thing ought to be the same standard as for a human-generated thing: is it useful, entertaining, or whatever you're looking for. In practice, nobody behaves this way for almost anything in real life. In the same way that the purpose of sex is not simply orgasm, but some kind of connection with another being, the purpose of most utterances is not restricted to the truth value of their postulates. Something more is both sought for and implied when we communicate with each other. Astroturfing w/ AI "content" violates that implicit agreement.
A less fluffy refutation is that most human concerns (e.g., things that are not purely technical; but even some things that are purely technical) are crucially interlaced with tacit knowledge that the speaker doesn't even know that she possesses. In other words: a real person talking about a non-trivial real-life experience brings in experience that would be hard to describe or enumerate but that crucially informs the interaction. The absence of these illegible elements is harder to detect than whether some program compiles or not, but it matters. (And note, this is true of even the hardest of engineering disciplines. The research on tacit knowledge / expertise is clear on that account.)
The only way to distinguish error from truth, absent any infallible authority, is through some degree of research and careful thought.
See above; but also: the truth, the way we usually use the term, is not the only thing at issue.
reply
1193 sats \ 1 reply \ @ek 4 Jan freebie
This. I think LLMs are just making it more obvious that we've been living in the Age of Disinformation for a long time already. And making this more obvious might actually be a good thing.
If I think something has been written by an LLM, it's usually because it's boring, sounds generic and has other flaws.
So the problem I have with LLM writing is that it's usually boring, sounds generic and has other flaws. Not necessarily that it was written by an LLM. But if someone pretends like they have written it themselves but they actually used an LLM, that gives extra unsympathy points. As a human, I don't like to be deceived - especially not in such a low-effort manner.
I think the main problem with bots currently is that they don't tell you they are bots. That's deception, and as humans, we're entitled to feel deceived, which is a negative emotion.
reply
For now, anyway. No reason LLMs won't be producing much more creative and aesthetic prose a year from now.
The vast majority of human writing, bitcoiner or not, is pretty generic, boring, and with other flaws...
This topic bums me out, because the kneejerk reaction to a firehose of quasi-information is reverting to a model of "trusted sources." Which usually means: sources which other people seem to trust.
The information nodes become centralized, DYOR becomes a potential hazard, and the climb is steep for fresh contributors to gain enough clout to break through the filter.
It at least feels easy to spot LLM content now, so most people aren't retreating into closed circles just yet - but it seems like the direction we're headed.
I wonder: if LLMs could produce content that doesn't have those "crucially wrong" bits, how comfortable would people be knowingly consuming it then? Is it truly the potential errors, presented with certainty, that creep people out? Or something deeper?
Humans are still the most abundant and confident source of "mostly right, but crucially wrong" information. So to me, LLMs have just further surfaced some of the challenges which arise when everyone has a printing press and distribution.
I don't see the dust settling on this issue in my lifetime.
reply
The information nodes become centralized, DYOR becomes a potential hazard, and the climb is steep for fresh contributors to gain enough clout to break through the filter.
It's a good point, but, as others have pointed out, I don't think it's a reversion to centralization so much as a more manifest expression of a truth that was there already. Who, realistically, can possibly do their own research on virtually any topic in modern life? Such "research" amounts, almost invariably, to appeals to some social authority. Which sounds bad, but there's really no other solution. The meta-skill of figuring out who to trust is an exercise in doing your own research. But it's a weaker exercise than people imply with their chest-thumping.
Humans are still the most abundant and confident source of "mostly right, but crucially wrong" information.
I think you're right, at the time of this writing. I think you will be extremely wrong before the year is out. The fact that the bullshit asymmetry principle has just blown up by several orders of magnitude should fill us all with fresh horror and philosophical unease; but I don't know what more productive action it should prompt.
reply
1245 sats \ 0 replies \ @Scoresby 4 Jan
There's no getting around work (and whatever evidence we have of its proof). When we feel like someone has put a lot of effort into something, we feel they are being genuine and I find myself far more willing to engage with them.
When somebody posts a chunk of text produced by an llm I feel like they didn't put any effort into it and I'm less willing to engage/zap.
(Maybe you can say that the creators of the llm put a lot of work into creating the tech or training it, but in most cases I think we are seeing posts by people who didn't personally do those things and haven't invested very much work in the words they are posting.)
Now, it can be hard to tell the difference, and I suppose that's where the problem is.
My rule is if it makes me think llm, even just a little, it probably is. I might finish the post, but in my mind they've got a strike. Three strikes and I am pretty unlikely to look at posts by that user.
As a side note, I'm not so worried about the veracity/accuracy of a post. I'm far more willing to forgive some inaccuracies in a post that feels genuine and makes me think than I am to feel good about someone who copies a very accurate article from somewhere else and posts it as their own.
reply
1110 sats \ 0 replies \ @PlebeiusG 4 Jan
We already live in a world with news where it's MOSTLY fucking bullshit, extremely biased, and manufactured with POOR incentives.
I do my best to NOT follow the stupid fucking news...
We have been living in this sad reality for YEARS already...
All you need is to be GROUNDED to the truth. You can't do this so much with the news... since what politicians say behind closed doors is just not available to you. But we can ground ourselves to the truth when it pertains to scientific evidence (in the limit). Books need to be open-sourced, DRM-free, and we NEED open-source models that we SELF-HOST in order to parse this modern digital library. Books also have errors in them, but we can correct for errors so long as we have a plethora of information and the majority of it leans towards truth.
We absolutely cannot let AI be built behind closed doors, behind closed source, with "copyright" training materials we "aren't allowed" to access and be locked out of these platforms because we "misbehave" or don't have enough fiat to pay for it.
reply
1337 sats \ 0 replies \ @k00b 4 Jan
I'm torn on these too. I don't like them because such AI information delivery people are dishonest about the origin and inconsiderate of the accuracy and brevity of what they deliver. Their intent is often solely to extract value, whether as attention or sats.
On the other hand, they can deliver some incidental value even if it's debased by their intent.
Ultimately, my feelings are concerned with intent more than anything else. If Leibniz's discovery of calculus took more effort than Newton's, I wouldn't value it higher than Newton's. Probably the opposite. So my feelings aren't about "proof of work" at least. Proof of work is a proxy for intent where intent can't be measured. Yet, I don't really need proof of intent either. I want something valuable and proof of intent is a proxy for verifying I'm receiving something of value.

Ideally we are skeptical of all information sources, LLM or not. The best information sources assist you in verifying the results for yourself. One way you can tell this LLM content from human sources is that it doesn't attempt to do this or have the self-awareness to measure and report on its own authority.
reply
171 sats \ 0 replies \ @r3drun3 4 Jan
I can assure you that the post is the result of my research and testing with various Bitcoin libraries, in this case btcd (golang).
I utilized ChatGPT to format the final draft, but all other aspects are the product of my own efforts.
If you have any doubts, I encourage you to ask ChatGPT to generate a post with the same content as mine and verify its quality and accuracy, including confirming the functionality of the code.
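For anyone curious what that kind of code looks like, here is a minimal sketch along those lines (not the post's code verbatim, and the hash below is just a placeholder): it uses btcd's txscript package to assemble a standard P2PKH locking script and then disassemble it back into readable opcodes.

```go
package main

import (
	"encoding/hex"
	"fmt"
	"log"

	"github.com/btcsuite/btcd/txscript"
)

func main() {
	// Placeholder 20-byte public key hash, for illustration only.
	pubKeyHash, err := hex.DecodeString("89abcdefabbaabbaabbaabbaabbaabbaabbaabba")
	if err != nil {
		log.Fatal(err)
	}

	// Assemble a standard P2PKH locking script:
	// OP_DUP OP_HASH160 <pubKeyHash> OP_EQUALVERIFY OP_CHECKSIG
	script, err := txscript.NewScriptBuilder().
		AddOp(txscript.OP_DUP).
		AddOp(txscript.OP_HASH160).
		AddData(pubKeyHash).
		AddOp(txscript.OP_EQUALVERIFY).
		AddOp(txscript.OP_CHECKSIG).
		Script()
	if err != nil {
		log.Fatal(err)
	}

	// Disassemble the raw script bytes back into human-readable form.
	disasm, err := txscript.DisasmString(script)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(disasm)
}
```

Running it should print the script in opcode form, something like OP_DUP OP_HASH160 <hash> OP_EQUALVERIFY OP_CHECKSIG, which is the easiest way to eyeball whether the builder produced what you expect.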
Having said that, I acknowledge that the issue you raised is indeed valid, and it is something that concerns me as well.
Therefore, well done, and get my satoshis!
Sincerely, @r3drun3
reply
Oh I love this.
Humans can also post pure biased and erroneous shit. If you do not DYOR - no matter what it's about - you are going to be led astray eventually.
Personally, if I'm going to dive into a topic, I should be learning and absorbing as much info as I can from all sources, so that when a new bit of information or a new article pops up in front of me (no matter if it's pooped out by a human or by a bot) I better be gaining some intuition so that I can do my own bullshit analysis OR be able to DMOR and pick it apart myself.
But if I'm just a "surface level" guy and am just reading for fun, then it doesn't really matter. The more technical something is, the more engaging with it depends on my own deep understanding.
In short... we just use the Socratic Method, always Ask Better Questions, never stop learning, expect to get things wrong (our fault or not), and just keep moving onward.
reply
How do you know it is written by an AI? I know nothing about Golang programming, so I can't tell.
reply
225 sats \ 2 replies \ @k00b 4 Jan
It's mostly the writing.
Understanding Bitcoin scripting is pivotal for anyone venturing into the world of Bitcoin development. The provided Go code serves as a practical example, showcasing the creation and disassembly of a Bitcoin transaction script.
This isn't how humans write normally. It's the average of how humans write.
reply
Uhm... I'm not sure, because if I had worked over a post for 2 or more revisions, I think I could have written something like that. Maybe not for a social media post.
reply
It's not that the writing is unlike how a human writes. It's that an LLM only writes this way by default. Human writing is incredibly varied, even within the same piece of writing: degrees of ambiguity, style, substance, etc.
LLMs exhibit abnormal levels of cliched language and self-evident statements, and things are explained as if they are a matter of fact.
reply
I tend to think that AI-generated content by itself is not a problem for SN.
That's because (unlike one other notorious website), "upvoting" content has a cost. We will send sats towards valuable posts and they will naturally rise above the noise.
And if someone finds particular content worthwhile for them, does it really matter who or what created it? (That's the "big question" for humanity to answer.)
What we should stop doing is zapping trivial amounts (like 10 sats) at everything that looks like good content (just because it's proper English and long form); we should only zap things that we find individually valuable.
Now, there is a potential problem of bots manipulating the system to promote their own garbage. But maybe it's not a problem; after all, SN allows you to boost your own content, and as far as I know, having multiple accounts is also acceptable - we trust that the money signal sorts out value from chaff.
reply
Wow, the comments on this post are fucking dynamite. I just wanted to blanket say that, for anybody who comes back and reads this. Thanks.
Although you guys are bankrupting me.
reply
0 sats \ 0 replies \ @ek 4 Jan
how does one deal with a reality where you don't know if it's real or manufactured, and, if it's manufactured, whether it might be mostly right but crucially wrong?
I think the Gell-Mann Amnesia effect is related to this:
Media carries with it a credibility that is totally undeserved. You have all experienced this, in what I call the Murray Gell-Mann Amnesia effect. (I refer to it by this name because I once discussed it with Murray Gell-Mann, and by dropping a famous name I imply greater importance to myself, and to the effect, than it would otherwise have.)
Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.
In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.
That is the Gell-Mann Amnesia effect. I'd point out it does not operate in other arenas of life. In ordinary life, if somebody consistently exaggerates or lies to you, you soon discount everything they say. In court, there is the legal doctrine of falsus in uno, falsus in omnibus, which means untruthful in one part, untruthful in all. But when it comes to the media, we believe against evidence that it is probably worth our time to read other parts of the paper. When, in fact, it almost certainly isn't. The only possible explanation for our behavior is amnesia.
So this means we've always had to deal with this to some degree.
But it's getting worse, yes. With LLMs and other similar tech, disinformation in all shapes and forms can now basically be fabricated at will.
reply