
For instance, imagine Elon Musk using Claude to have a conversation, the answer to which might well be worth trillions of dollars to his new company. If he only paid you $20 for the monthly subscription, or even $200, that would grossly underpay you for the privilege of providing him with that conversation. It’s presumably worth 100 or 1,000x that price.
While I enjoy this point, it brings up something tangential for me: It seems to me that you want to be the training data and you want to be the context. Things like Pay to Crawl, paywalls, or any other way of gating your content away from LLMs are foolish (unless you actually want it to be private). If you have the opportunity to inject your opinions, thoughts, or arguments into the training data for LLMs, why wouldn't you want to do that?
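If you buy that, the gate is mostly just robots.txt, and you can flip it the other way. A minimal sketch that explicitly invites the AI crawlers rather than blocking them; the user-agent tokens below are the ones the vendors publish as of writing, so check current docs before relying on them:

```
# robots.txt -- explicitly invite the AI crawlers instead of gating them out
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: CCBot
Allow: /
```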
For instance, imagine Elon Musk using Claude to have a conversation, the answer to which might well be worth trillions of dollars to his new company.
Haha! I haven't really seen an LLM have the type of actual "idea" that Elon would need to be told... but do we really think Elon doesn't have a beta DGX box to play with? If Nvidia isn't sending him a new one each iteration, that would be kinda dumb.
Anyway, these things just run anything you want on-prem, without anything being exposed to third parties. Sovereign compute, or gtfo. That's why it doesn't matter whether they'll run ads: by the time they do, the whole of SN will have grown so tired of me talking about sovereign AI that you'll be running it anyway, just to shut me up.
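For the skeptics, "sovereign" here just means the prompt never leaves your box. A minimal sketch, assuming a local Ollama install with some model already pulled; the model name and prompt are placeholders:

```python
# Local-only inference: the prompt and the answer never leave this machine.
# Assumes Ollama is running locally and a model has been pulled,
# e.g. `ollama pull llama3` (the model name here is just an example).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",   # Ollama's default local endpoint
    json={
        "model": "llama3",                    # whatever model you pulled
        "prompt": "Why does on-prem inference keep prompts private?",
        "stream": False,                      # one JSON blob instead of a stream
    },
    timeout=300,
)
print(resp.json()["response"])
```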
While I enjoy this point, it brings up something tangential for me: It seems to me that you want to be the training data and you want to be the context
You want to be the training data for your LLM, but I may not want you to be the training data for my LLM. That's easy today: just fine-tune an existing model.
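In case anyone wants the shape of that: a rough sketch with Hugging Face transformers + peft (LoRA), where the base model, the my_posts.txt file, and the hyperparameters are all illustrative placeholders, not a recipe:

```python
# Fine-tune an existing small causal LM on your own text with LoRA adapters.
# Everything named here (base model, data file, hyperparameters) is illustrative.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"   # any small model you can run locally
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the base model with low-rank adapters so only a tiny fraction of the
# parameters get trained -- cheap enough for a single consumer GPU.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# my_posts.txt: one document per line -- the stuff you *do* want in the weights.
data = load_dataset("text", data_files="my_posts.txt")["train"]
data = data.map(lambda x: tok(x["text"], truncation=True, max_length=512))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```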