@Coyote_Cosmico
stacking since: #361857 · longest cowboy streak: 64
131 sats \ 0 replies \ @Coyote_Cosmico 21 Jun \ on: Trump’s Tariff Policies Are Schizophrenic econ
A centrally "planned" economy from the big guy 🇺🇸 🎉
A punk-ass answer: no, then yes, and it depends!
In my (limited) understanding, the word karma in the common Western vocab is a bit different from the concept in Hindu tradition.
It reminds me of how we use the word ego, which most English speakers would describe as something like a sense of self-importance or arrogance.
But we don't really have an English word that matches the one we would also translate as "ego" from some ancient South Asian text, which could be described as a sense of a self that is individual and separate from everything and everyone else.
Very different concept, but we use the same word in English. Similar to the word karma.
So karma, as it's usually used in English = fuck around and find out? If you do something "bad" then "bad" things will happen to you? No, I don't see much evidence for that - not in that sense.
But coming from a lens of Hinduism or Buddhism, I've seen karma described as simple cause and effect. If you do something "bad," that usually implies someone or something gets hurt and suffers, and that is your effect. A law of nature that is observable and measurable, like Newton's 3rd law.
The effect might be that something "bad" happens to you, but that's really subjective, right?
Maybe the bad is that people are angry with you or hate you. That might bother some more than others, and it might be hard to notice, but it's still there. You might just feel guilty and suffer from that, or it may reinforce more of your own prickish tendencies which leads to other consequences down the track. Or you might really piss someone off and get punched in the mouth, which is more obvious.
So to me, karma is just cause and effect. Actions have consequences and ripple out, even if the consequences might not be so visible on the surface.
Wow! Nice share.
Anyone else notice this part of the article:
The mathematicians who participated had to sign a nondisclosure agreement requiring them to communicate solely via the messaging app Signal. Other forms of contact, such as traditional e-mail, could potentially be scanned by an LLM and inadvertently train it, thereby contaminating the dataset.
Maybe they were just acting this way out of an abundance of caution, but I was wondering, how would o4 scan their email? Or perhaps they were worried that another model could scan email, and then... publish something online which o4 could read?
I guess it's pretty safe to assume that everything typed into a keyboard will become training data at some point.
I'm afraid so.
I find myself prompting LLMs with ever-sloppier grammar and punctuation that I never would have used before (even incomplete sentences that don't make sense).
You too eh?
I sometimes don't even prompt it now, but just copy/paste in some text or a screenshot with "?" (or nothing) and it knows what I want.
Or... maybe... I don't even know what I want, and I've outsourced that as well. "Hey robot, think about this for me."
My spelling has deteriorated since autocorrect spread everywhere, and it's probably getting worse now because I don't even bother to spell correctly when using an LLM. I just speedmash the keyboard and it parses that sloppy input just fine.
Maybe there's a competitive edge to be found here for those who can avoid overusing these tools, as most "creative" output trends towards the mean.
Thanks for the wakeup call! Time to reread Nicholas Carr.
From his book, The Glass Cage:
“When an inscrutable technology becomes an invisible technology, we would be wise to be concerned. At that point, the technology's assumptions and intentions have infiltrated our own desires and actions. We no longer know whether the software is aiding us or controlling us. We're behind the wheel, but we can't be sure who's driving.”
...
“If we’re not careful, the automation of mental labor, by changing the nature and focus of intellectual endeavor, may end up eroding one of the foundations of culture itself: our desire to understand the world. Predictive algorithms may be supernaturally skilled at discovering correlations, but they’re indifferent to the underlying causes of traits and phenomena. Yet it’s the deciphering of causation—the meticulous untangling of how and why things work the way they do—that extends the reach of human understanding and ultimately gives meaning to our search for knowledge. If we come to see automated calculations of probability as sufficient for our professional and social purposes, we risk losing or at least weakening our desire and motivation to seek explanations, to venture down the circuitous paths that lead toward wisdom and wonder. Why bother, if a computer can spit out “the answer” in a millisecond or two? In his 1947 essay “Rationalism in Politics,” the British philosopher Michael Oakeshott provided a vivid description of the modern rationalist: “His mind has no atmosphere, no changes of season and temperature; his intellectual processes, so far as possible, are insulated from all external influence and go on in the void.” The rationalist has no concern for culture or history; he neither cultivates nor displays a personal perspective. His thinking is notable only for “the rapidity with which he reduces the tangle and variety of experience” into “a formula.”
Education, on a very fundamental level. Go peruse the r/teachers subreddit, it's pretty scary stuff.
AI is an incredible learning tool as an adult, but if I had access to such a thing in elementary/middle/high school I'm sure I would have abused it. Too early to tell what the effects of not putting in brain-reps when you're young will be, but I'd bet a few sats that it's not going to go well.
Have you ever read The Shallows by Nicholas Carr? It's a fantastic book on this topic. Also, Neil Postman's Amusing Ourselves to Death - but I liked the first one more.
I went to a bookstore the other day and saw like 5 or 6 newly published books with "digital detox" or reclaiming attention span themes. I think this is a growing concern for so many people now.
Maybe not so much for the digital native generations, but those of us old enough to remember pre internet times can feel that the rot is real.
I was just about to share the same link and SN showed me you'd already posted it.
I expected this article to be clickbait bs, but it was actually a pretty fascinating read!
Badass bro - I just reread your yoga guide yesterday, what a coincidence -- I gotta do more pushups and meditation
This is really great news!
I grew up on a farm and my dad would pay me $5 an hour to weed since I was a young kid.
It's incredibly labor intensive and to this day my 70 something father spends a good chunk of his time out there doing weed control in one form or another.
People forget how big of a deal our food supply is and how fragile the production systems are. Right now industrial ag is basically a machine for turning diesel fuel into human-consumable calories, and the efficiency is 10:1 - it takes 10 calories of fuel for every 1 calorie of food on our plate.
This is some of the best news I've seen in a while, thank you 👍
If it's a rev-share situation, you're right, 50 is probably too many. I personally don't care much about being a territory owner but just think some particular new ones should exist, so the 50-100 is better suited to that model where basically SN would own it and stackers contribute just because the territory seems like it should be part of the mix.
Thank you, it's great to see that! The "partially funded mutual territories" proposal looks a little different than what I pictured, but pretty close.
Yeah, I see the issue with funding if it's just an open zap-to-donate function. Not sure how that was envisioned to work, but if a territory becomes fully funded, maybe the donate/crowdfund function would be disabled?
I'm sure there are lots of challenges with this I didn't even consider... The proposal also mentions an "unregistered security" issue if the sat income is shared. I didn't think about that, but it seems like an obvious problem now.
I like the model where it's just a trustless community fundraiser and the sat income just goes to rewards pool.
Yeah, I figured it's a ways out, if ever, but wanted to plant or water the seed. Looks like something similar is already a formal feature request on GitHub, as posted in the comment here by @supratic