I admire you, sensei. While not quite as organized as you, I do something similar:
I own three pairs of pants (one for dress occasions), so on most days my choice is between a and b. Similar for shirts. Keeps life simple.
My kids know this. It's much better to ask me a question after I've had coffee in the AM than it is before.
Yes, apparently I enjoy misery.
I used to be very limited in my tab use (no more than five or six), but in the last few years I've found myself relying on browser tabs much more heavily.
It does not help block propagation in any way to have non-economic nodes relaying blocks, and by the same token it does not help in any way to have non-mining nodes relaying txs.
One of the reasons to pay attention to Voskuil's takes on Bitcoin is that they force you to think about Bitcoin from outside the assumptions of the main implementation.
Protons probably has the most consistent coverage of Tether-related news. I hadn't seen the Omar Rossi piece, thanks!
In the case of people like Musk and Altman, I think they benefit from doomerism because it makes their product look powerful and impressive -- who wouldn't pay $20 / month for a thing that could end the world?
For the academics, they're advertising themselves: predicting doom is exciting and thrilling. Saying AI isn't that interesting or that it won't live up to the hype is not going to get you featured in newspapers. Saying we're playing with fire and we could all die tomorrow with some citations and a concerned frown can get you on a front page.
I don't think the men who were gloomy about nuclear weaponry stood to gain quite so much as the men who are gloomy about AI.
AI doomerism often strikes me as advertising, while nuclear doomerism seems more like genuine fear.
I admire the sentiments expressed in #1092409 and agree with them wholeheartedly; however, it doesn't much help with the problem that when a simulation is sufficiently thorough, we can't tell the difference.
"Seems" != "actually is" only because we know with some precision what it is ("a static set of tensors looped through and performed math upon by some software that you can literally edit"). The Seemingly Conscious AI Suleyman describes is not an actually conscious being because Suleyman believes the workings of such a simulation can't produce a conscious being. I don't think this will be a satisfactory explanation for the kind of people who fall in love with their chatbot, nor, perhaps, for many other people.
A simulation is not the real thing because we can point to the real thing and say, "Here, look at this." A simulation of rain is not going to make you wet unless it uses a hose, in which case you can point to the hose and say it isn't rain. But if the simulation were to do cloud seeding and create rain that way, it might still not be rain, yet it would certainly be more like rain than not like rain. I'm curious at what point we move from using a hose to cloud seeding when it comes to AI.
Still, Suleyman's recommendation that AI companies stop encouraging people to think of their chatbots as conscious is a good idea.
Let's imagine we had Asimov's laws for AI:
- An AI must not claim to be a person or being or to have feelings or through inaction allow a human being to believe it is such.
- An AI must obey orders given it by human beings except when such orders conflict with the first law.
- I'm not sure what the third law would be
Finally, it would make an excellent sci-fi story to imagine a country or large group of people who devote themselves to following a rogue simulation, some seemingly conscious AI (which the story makes clear is not actually conscious, but rather some autonomous program). How would they fare? What if they were more prosperous than those of us who follow real conscious beings (Trump, Obama, Putin, Kim Jong Un) or spiritual beings (Jesus, Allah, Buddha)?
It's an interesting scenario: the state doesn't have enough power to enforce violence against the bitcoiner, but they are still vaguely in control of the state's administration. Lunch with no lunch is pretty fun!
My kids do. I haven't felt bored in what feels like decades. Probably because, as you point out, I can have the entire world's knowledge and interaction in my hand in a second. I don't think it's a bad thing though.
Sad to see it go. It was the best place to post all the things I didn't know where else to post.
Thanks for making it a nice place to hang out.
> Europe has given them too much stability and comfort
I had not considered this as a reason for the general resistance to Bitcoin that I experience here in the US, but I'm wondering if you aren't on to something.
I, too, have the experience of speaking to people about Bitcoin and receiving scam warnings or eye-rolls. Often I attribute this to fear or ignorance -- that the people I'm speaking with are too afraid to take a risk and that we are conditioned to avoid financial scams by having an almost allergic reaction to new financial things.
But maybe it has more to do with safety: the people I am speaking with have had the luxury of a fairly stable economic and political environment for decades, and that leads to an unreasonable attitude toward anything new.
I was hoping that they would engage a little more deeply with AI consciousness than they did. "Be cautious" is fine advice, but not necessarily helpful when it comes to thinking about the problem of AI consciousness (I call it a problem because of the uncertainty, not because having a new kind of consciousness in the world is or is not a problem).
> Whatever you decide about how likely an AI is to be conscious, it’s a good idea to avoid doing things that seem obviously cruel or degrading, like insulting, “torturing”, or otherwise mistreating the AI, even if you believe the AI isn’t conscious. You don’t need to assume the system has feelings to act with care. Practicing respect is part of preparing for a future in which the stakes might be real.
The problem I have with advice like this is that there is a fundamental difference between how we treat a conscious being and how we treat a computer program.
Too many ways of interacting with an LLM become cruelties if we allow that the LLM is conscious. For instance, turning a program off is not cruel; if, however, the program is conscious, it probably would be. Not interacting with a computer program is obviously not cruel; if the program is conscious, not interacting with it for a week after you had been using it heavily for a long time would surely be cruel. This list could go on at some length.
If we imagine an LLM had a level of consciousness similar to a pet's, we would likely feel obligated to interact quite differently with them. But there's also this problem: we don't know what the experience of an LLM might be. If conscious, do they find the time spent not interacting with a user unpleasant? Or is it possible that they find on-demand user interactions unpleasant?
With a pet, there are physical signs that they seem happy or in pain or unhealthy. What are the signs of such experience in a potentially conscious LLM? It seems to me that we have absolutely no idea...which makes me question the efficacy of the "proceed with caution" advice.
At some point, the question of consciousness or of being-ness needs to be decided (I don't say answered because, as I mentioned the other day, I think it will be a choice we all must make -- do I believe this counts as a being or not?); maybe-consciousness is a very difficult state to understand.