Whatever old man. Go away and enjoy your money and die.
@delete in 24 hours
That's some funny reverse psychology shit. Like, "I bet you're not gonna cook me a 7-course steak dinner!" "Oh yeah? Just watch!"
This article ends with a call for more regulation
The main sin of a lot of "moar regulation" analysts is that they go through, in great detail, the systemic incentives that produce a bad outcome, then call for more regulation without examining in equal detail the incentives of the regulators, the distortionary effects of the regulation itself, or the cost of enforcement
Man that is a dark thought. Insiders at the FBI / CIA betting on death markets and subtly engineering that outcome without accountability
A lot of events are interconnected though, and people will argue over that word "plausibly", which adds another layer of uncertainty to the resolution criteria.
Not that I think that'll stop any degens from yoloing into the prediction markets
I feel like it would be very hard to police consistently. If you refuse to pay out because someone was killed, what about a sports team that loses a championship game because one of its players was killed? It seems impossible to draw clear boundaries around which events get paid out and which don't
I want to invest in (1) too. I think in the short run it's still faster and better for me to do things hands-on, maybe AI-assisted, but with high interactivity
But I think in the long run, it will be a good investment to learn how to manage a fleet of assistants with less need to stay in the loop
You made me rethink.
Yes, I am lazy, but not that lazy, and I'm also forward-looking.
That's why I spend time on abstraction, because I want to allow myself a bit more laziness in the future
That being said, in my line of work (academic research, not production code), I've learned that for much of the abstraction, the juice is often not worth the squeeze.
Agreed on the vendor lock-in stuff. The barriers for a small business to write its own bespoke software have gone down a lot
I thought laziness was a virtue in software design. Certainly, I can see how my own laziness is why I like to write things with more permanence and reusability -- so I don't have to do it again.
I venture to guess it depends a lot on how people are using AI, and also on what field they're in.
I think there's a lazy way to use AI and a non-lazy way.
What does this thing actually do?
My understanding of an autonomous AI agent is that it calls an LLM to generate system commands, basically. Typically it'd be running Linux commands, which LLMs are pretty good at.
But I don't know what this thing runs on, or whether LLMs are well trained on whatever language it uses. And I'm pretty sure it's not running a local AI, right?
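For context, the basic loop I'm picturing is something like this minimal Python sketch. It's an assumption about how this bot works, not its actual code, and `run_llm` is a hypothetical stand-in for whatever model backend it calls:

```python
import subprocess

def run_llm(prompt: str) -> str:
    # Hypothetical stand-in for the real model call -- that part is
    # an assumption. Here it just returns a canned shell command.
    return "curl -s wttr.in/?format=3"

def agent_loop(goal: str, max_steps: int = 5) -> None:
    history: list[str] = []
    for _ in range(max_steps):
        # Ask the LLM for the next shell command toward the goal.
        prompt = (
            f"Goal: {goal}\n"
            "Transcript so far:\n" + "\n".join(history) + "\n"
            "Next shell command:"
        )
        command = run_llm(prompt).strip()
        # Run it and feed stdout/stderr back into the next prompt.
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=30
        )
        history.append(f"$ {command}\n{result.stdout}{result.stderr}")

agent_loop("check the weather in my city")
```

Every step of that loop depends on the model emitting valid commands for the host environment, so if it's trained mostly on Linux but running on something else, the whole thing degrades fast.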
Why do you think the bot has such a hard time doing a basic task like checking the weather?