It started the dev environment itself, pulled local SN up in its little browser. That's already better than I imagined.
It was like "I can't pay for anything," so then it started spinning up our little lightning regtest network.
Then it realized a container needed to restart to use the lightning node.
lol it's going to town. It made a post, then a job, then a crossposted post. Then it created a second account, a free comment, and a bio. (This is not through the browser, and afaik it gave up on lightning and is printing regtest sql money via the dev shell script.)
Yeah, the IBIT was definitely a gamble, but the situation just looked like the shorts were going to get squeezed. I probably won't buy calls too often, but when the entire market was screaming squeeze, I had to put my money in.
If you sell covered calls, just don't sell them below your strike. That's the whole basis of the wheel: I'm just collecting premiums off of fear and volatility.
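To make the premium math concrete, here's a rough sketch with made-up numbers (not my actual position, not advice):

```javascript
// Made-up numbers just to illustrate the wheel's premium math.
// You get assigned 100 shares at the put strike, then sell a covered call
// at or above that basis so assignment can't lock in a loss.
const costBasis = 50;      // $/share, from the assigned put
const callStrike = 52;     // covered call strike, kept >= cost basis
const callPremium = 0.75;  // $/share collected for selling the call
const daysToExpiry = 30;

const yieldPerCycle = callPremium / costBasis;            // 0.015 -> ~1.5%
const annualized = yieldPerCycle * (365 / daysToExpiry);  // ~18% if repeatable

console.log({ callStrike, yieldPerCycle, annualized });
```

The premium is the whole trade; the strike just caps the upside you're willing to give away for it.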
I was wondering how these AI bros were getting big boy bills, but I guess it's mostly by prompting "boil the ocean, I want to take a bath."
This is just one billing tick, all from this run afaik.
Vibe coding answer (used for personal efficiency)
I build my frameworks to be LLM-agnostic. Since Claude 4.1 I've mostly used Claude Code and built a pipeline around it, but switching to another LLM / coding framework is as easy as a ~20-line javascript "plugin" for an executor component and a few yaml changes. Since people were saying codex 5.3 is really good, I've been meaning to take some time next week and give it some work.
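For a sense of scale, a plugin is roughly this shape; the interface, the file, and the CLI invocation below are all invented for illustration, not my actual framework:

```javascript
// Hypothetical executor plugin: wrap one coding CLI behind a tiny common
// interface. The interface shape and the CLI flags are illustrative only.
const { execFile } = require('node:child_process');
const { promisify } = require('node:util');
const execFileP = promisify(execFile);

module.exports = {
  name: 'codex',
  // Assumed executor contract: prompt + working dir in, model output out.
  // Swapping vendors means swapping this file and the yaml entry that
  // selects which plugin a task uses.
  async run({ prompt, cwd }) {
    const { stdout } = await execFileP('codex', ['exec', prompt], { cwd });
    return stdout;
  },
};
```

The yaml side is just a pointer: a task says which plugin to run, everything upstream stays the same.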
Business answer (used for work that is often highly confidential)
For work I can't use gpt or claude or gemini, because they all involve giving a third party access to documents. So for that I actively pursue "the best" that I can run locally. That means I often bench local models on a job, which in many cases just means changing an argument, and sometimes playing with the prompts a bit, since not all tuning works the same across models, especially prompting. For example, back in December I used more qwen3 and gemma-3(n). Now I use more jan-v3-base (which funnily performs better at half the param size).
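A bench run is roughly this, assuming the local models sit behind an OpenAI-compatible endpoint (llama.cpp server, Ollama, etc.); the endpoint, model tags, and prompt below are placeholders, not my actual setup:

```javascript
// Sketch: run the same job prompt against a few local models and compare.
// Assumes an OpenAI-compatible server; endpoint and model tags are placeholders.
const ENDPOINT = 'http://localhost:11434/v1/chat/completions';
const MODELS = ['qwen3', 'gemma-3n', 'jan-v3-base'];

async function bench(prompt) {
  for (const model of MODELS) {
    const t0 = Date.now();
    const res = await fetch(ENDPOINT, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        model,
        messages: [{ role: 'user', content: prompt }],
        temperature: 0, // keep runs comparable across models
      }),
    });
    const data = await res.json();
    console.log(`--- ${model} (${Date.now() - t0} ms) ---`);
    console.log(data.choices?.[0]?.message?.content);
  }
}

bench('Summarize this contract clause in two sentences: ...').catch(console.error);
```

Switching which model "wins" a job is then just editing the model list, which is why the per-model tuning ends up being the real work.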
@k00b I noticed this before but was lazy...
Since the massive refactor of the accounting system, the "anon's stack" that used to go to rewards is gone, in favor of a 70/30 split to territory/rewards.
This is not a complaint but I wanted to mention it regardless, in case it wasn't intentional.
Brah anyone?
I have no idea. I don't follow LLM rankings much. I always use the latest best OpenAI model (from their API) and try to get the most out of it.
Wow! That IBIT was a home run
Haha yeah my fold puts are going to be assigned to me next Friday. But I might start selling some calls just to try it out.
Aaaand the chain is absolutely dead haha
Do you think this is why vanilla GPT, but also Qwen and Deepseek, which are trained to give less concise but more complete answers, rank lower on the Elo lists?
If that's the case, then GPT + structured output must beat at least Grok and Gemini on those same lists right now.
Old habits die hard. I'm retired. My daily responsibilities are dwindling. I'm on a relaxing vacation with little to worry about. But I still manage to stress about objectively unimportant things. I think I need to meditate.
Simplicity.
I pick one tool, it works and gets the job done. I stick with it. I never think about it again. Done. The cost of switching tools is too high. And I treat LLM models the same as a text editor or a library. And the good ones are just fine.
Furthermore, a .5 improvement on a model doesn't help me much if I don't get a .5 improvement on my brain too.
What's your take on this? Do you often switch models and tooling around these models?
Currently trying to see if I can get codex to do some kind of QA before I do more QA. How well do robots dance?