Musk warned that Trump wouldn’t bless OpenAI data-center project unless his xAI company was added
After Musk’s complaints, Trump and U.S. officials reviewed the deal terms and decided to move forward. The White House officials said Musk didn’t want a deal that seemed to benefit Altman. Aides discussed how to best calm Musk down, one of the officials said, because Trump and David Sacks, the president’s AI and crypto adviser, wanted to announce the deal before the end of the president’s trip to the Middle East.
The question I think of when I read stuff like this is: why does the President or the government have ANY say in whether a company does business in another country? Articles about this sort of thing never seem to ask that question. They focus on someone using their political influence to block or hinder competitors. To me this is obvious and should be expected if you centralize power in the state. Those who hate this sort of permission requirement might ignore the politics for a while, but they do so to their own demise. Those who have no moral problem with it will just do it to everyone else. This happened with Microsoft back in the day. It's happening with the bitcoin ecosystem now. Rent seekers are the problem, not those paying them off.
There's a VERY common dumb take that people put forward: that Trump and Musk are corrupt and we must vote for someone to replace Trump. This ignores that what they are doing, while it may be slightly worse than the norm (I kinda doubt that, btw), is basically how the relationship between private companies and the state has worked for a very long time. Remember the campaign finance reform movement? Yeah, that really went somewhere. So what is the solution?
I guess we really just need a John Galt-type approach to break the system. But the types who would be the John Galt are simply trying to use the system to their advantage or, best case, in a defensive way. I can't say that I blame them.
…-it versions of Gemma. Most of the stuff that brings these "great leaps" in chatbot experience is people writing system prompts and filters. Or did you think Google did a whole training run from scratch to stop the search assistant from literally quoting reddit trolling? They just tuned some weights, and if you look at the recent Claude system prompt (#990468) you see that this is still where the "magic" lives: text instruct, lol.
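To make that point concrete, here's a minimal sketch (not anything from the comment above, just an illustration) of how much a system prompt alone can change a locally served instruct model's behavior, using Ollama's /api/chat endpoint on its default localhost:11434 port. The model tag, the system prompt wording, and the test question are all assumptions for the example.

```python
# Sketch: same local instruct model, same question, with and without a system
# prompt, via Ollama's /api/chat endpoint. Assumes an Ollama server on the
# default port and a pulled instruct model; "gemma2:9b" is just an example tag.
import requests

OLLAMA_CHAT = "http://localhost:11434/api/chat"
MODEL = "gemma2:9b"  # assumption: any locally pulled instruct model works here

SYSTEM_PROMPT = (
    "You are a careful assistant. Never quote forum posts verbatim; "
    "summarize them neutrally and refuse joke or troll answers."
)

def ask(question: str, with_system: bool) -> str:
    # Build the chat history, optionally prepending the system message.
    messages = [{"role": "system", "content": SYSTEM_PROMPT}] if with_system else []
    messages.append({"role": "user", "content": question})
    resp = requests.post(
        OLLAMA_CHAT,
        json={"model": MODEL, "messages": messages, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    q = "How do I keep the cheese from sliding off my pizza?"
    print("bare model:\n", ask(q, with_system=False))
    print("\nwith system prompt:\n", ask(q, with_system=True))
```

Same weights both times; only the system message differs, which is exactly where most of the perceived "product" behavior comes from.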
Run qwen3:8b (or, if your laptop is more modern than mine, qwen3:32b) locally and see how rapidly the emulation has evolved post-DeepSeek V3. The small models may still loop or get stuck saying things that relate to technically nothing sometimes, but this is like a 10x vs llama2, which is a year old.
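If you want to try that comparison yourself, a rough sketch of what it looks like, assuming an Ollama install with the default API port and the qwen3:8b tag already pulled (swap in qwen3:32b on beefier hardware); the prompt is only a placeholder.

```python
# Sketch: one non-streaming completion from a locally pulled qwen3 model via
# Ollama's /api/generate endpoint (default port; the model tag is an assumption).
import requests

OLLAMA_GENERATE = "http://localhost:11434/api/generate"
MODEL = "qwen3:8b"  # or "qwen3:32b" if your machine can fit it

def generate(prompt: str) -> str:
    resp = requests.post(
        OLLAMA_GENERATE,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=600,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(generate("In three sentences, why do small local models sometimes loop?"))
```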
ACK'd the first AI-written contribution ever to one of the o/s repos I maintain today (not written by me, I'm just reviewing it, and testing it 20x to make absolutely sure). The caveat is that the repo in question is a Python module, and although it could use a follow-up pull req to do a little more cleanup, the job was done, because it was a small change. Wouldn't fly on C++ or even Golang, but for Python it seems LLMs are starting to be tuned well enough that they can actually deliver really small things (things that would have taken me an hour or so to fix manually, but still).