
I will kindly and sincerely say: FUCK OFF AI
reply
Individuals pay $10 a month for the AI assistant. In the first few months of this year, the company was losing on average more than $20 a month per user, according to a person familiar with the figures, who said some users were costing the company as much as $80 a month.
Is anyone a little more familiar with how these modern models work? My undergraduate understanding of neural networks is that costs are all upfront, incurred while training them, not running them.
I initially assumed they were averaging initial training costs across customers, but if the costs are variable something else is going on. Is it the cost of modeling/biasing models using the customer's context?
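To make the quoted figures concrete, here is the arithmetic they seem to imply under one plausible reading. The interpretation that the reported $20/month loss is net of the $10 subscription is my assumption, not something the article states:

```python
# Rough unit economics implied by the quote. Assumption (mine): the
# reported "$20/month loss per user" is net of the $10 subscription,
# so the average cost to serve one user is about $30/month.
revenue_per_user = 10                       # $/month subscription
avg_net_loss = 20                           # $/month average loss (reported)
avg_cost = revenue_per_user + avg_net_loss  # ~$30/month to serve a user
heavy_user_cost = 80                        # $/month for the costliest users

assert avg_cost == 30
# Under this reading, heavy users cost roughly 3x the average:
assert 2.5 < heavy_user_cost / avg_cost < 3
```

So under that reading the costs are clearly variable per user, which is what makes a flat monthly subscription hard to sustain.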
reply
Training is definitely costlier than inference, but inference is not costless either, especially if you are fielding millions of requests
Moreover, to maintain a competitive edge I would assume the models are constantly being fine-tuned, not to mention the fixed costs of keeping highly specialized, in-demand engineers on staff... I can easily see how the costs add up
reply
Training is definitely costlier than inference, but inference is not costless either, especially if you are fielding millions of requests
Oh for sure. My knowledge is dated but once upon a time it was thought you could ship trained models to clients and run them there without specialized hardware.
Moreover, to maintain a competitive edge I would assume that the models are constantly being fine-tuned, not to mention the fixed costs of maintaining highly specialized and in-demand engineers on staff... I can easily see how costs add up
If this is all there is to it, then the heaviest customers cost roughly 3x the average (~$80 vs. ~$30 all-in), presumably because they're making proportionally more inference requests, which tracks.
Maybe what I'm not accounting for is the size of these models. If they are enormous, with many, many weights, then scaling inference could be super-linear.
reply
Based on what I know of these model architectures, compute costs should scale linearly with the number of requests (or more precisely, with the number of batches, since TPUs process requests in parallel).
There could be other issues around concurrency, latency, congestion, etc., or other physical limitations of the hardware. But looking at the model itself, I don't see why cost should be super-linear in the number of requests. If I'm wrong I'd be happy to know it, though.
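As a back-of-envelope sketch of that linear scaling: the `inference_flops` helper and the ~2 × parameters FLOPs-per-generated-token rule of thumb below are my assumptions for illustration, not figures from any provider:

```python
# Sketch: serving compute grows linearly with request volume, under the
# common rule of thumb (an assumption) that one forward pass of a dense
# transformer costs roughly 2 * n_params FLOPs per token.

def inference_flops(n_params: float, tokens: int, n_requests: int) -> float:
    """Total FLOPs to serve n_requests, each producing `tokens` tokens."""
    return 2 * n_params * tokens * n_requests

one = inference_flops(70e9, 1_000, 1)   # one request to a hypothetical 70B model
ten = inference_flops(70e9, 1_000, 10)
assert ten == 10 * one                  # linear in the number of requests
```

Batching amortizes per-request overhead, but the total compute bill still grows in proportion to tokens served, so per-request cost is essentially fixed by model size and output length.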
reply
This is a quote from the blog I was thinking of:
In a widely-read 2020 paper, OpenAI reported that the accuracy of its language models scaled “as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude.”
reply
Thanks. This still seems to be mostly talking about fixed training costs though.
I can't figure out why it's so expensive to run the models once they're created unless they're massive and irreducible ... which they probably are, but I haven't found a written account of that.
reply
... models using the customer's context
I'm pretty sure I saw this discussed in a recent blog post.
Unfortunately I didn't save it, but I'll see if I can dig it up again.
reply
Is anyone a little more familiar with how these modern models work?
Yes. It will end up in communism. I know you guys don't believe me, but it is the plain truth and you are still in denial.
reply
I'm not in denial. Poor AI stewardship will likely lead to huge wealth gaps and when that happens people tend to vote themselves into forms of communism. It's also the ultimate surveillance tool.
Hating AI doesn't stop AI though. Just like hating CBDCs doesn't stop CBDCs. Bitcoin is the only thing that might stop CBDCs. Similarly, the only thing that will stop AI-induced communism is a technological rival that's open and free.
You don't show up with a knife to a gun fight. You show up with a bulletproof vest and a better gun.
reply
It doesn't matter if it is open or closed source. It's just about how people will use it.
I am not against AI/robotics; I am not a caveman who just came out of the cave. I just want AI/robotics to be used ONLY on tasks that replace hard human labor (mines, digging holes, asteroids, etc.)... and let humans do the creative work and thinking.
I will be happy if an AI/robot builds my citadel. But I want to build it myself, to show all that fucking shit AI that humans can build things too: proof of work.
This wasn't built by a shitGPT... but by my own hands, human hands.
Nowadays we are seeing that even for a meaningless fried-egg recipe, people ask shitGPT (open or closed source, they don't care) how to do it.
This is the world as I see it, not far from now, if we continue to use this shit.
Let's talk again in 5 years. If you're still here... and not replaced by some kind of shit AI bot.
reply
I see what you mean now. AI could definitely rob us of purpose because that's exactly what it's designed to do. How do we fight that though?
reply
How do we fight that though?
I suggest you watch that old movie, sorry, documentary: "Idiocracy". The answer is there.
reply