
I was listening to the No Agenda podcast a few weeks ago and they played a clip from a programmer who goes by the handle the Primagen. He's an entertaining guy. He actually has a lot of good takes on programming topics.
He started talking about artificial general intelligence. His focus was Sam Altman's recent comments about AGI being just around the corner. The Primagen brought up a point I hadn't heard anyone mention.
Why would a company create an artificial general intelligence that can create apps and entire infrastructures from a prompt and just sell access to it?
Why wouldn't they keep it a trade secret and just use it themselves? If you believed in your AGI, wouldn't that make more sense? So does Altman really believe what he is pitching, or is it typical fiat hype? A company like Altman's could use AGI to basically destroy all the competition and take over the tech world, could it not? It's a big if, but if they create an AGI that can just build whatever it thinks up, why would you sell that as a service instead of just owning all the amazing things it creates?
Here's what I think. The AGI Altman is talking about is mostly hype. It will not be able to create the next Facebook or Google. It might be able to make some crappy copies of apps at best. Look at the LLM tools we have today, like ChatGPT. They can mimic patterns. They can copy writing styles that were created by humans, but the majority of what they produce is pretty easy to spot as AI crap. At least it has been for me up to this point. They tend to work better the more consistent a writing style's patterns are. True innovation builds on patterns but breaks from them in interesting ways.
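To make the "mimic patterns" point concrete, here's a deliberately crude, hypothetical sketch: a word-level Markov chain that just replays whatever word-to-word patterns it has seen. This is not how ChatGPT actually works (that's a huge neural network), but it shows the flavor of copying a style's statistics without understanding a word of it.

```python
import random
from collections import defaultdict

# Toy word-level Markov chain: a crude stand-in for "pattern mimicry".
# It records which word tends to follow which, then replays those
# statistics. Nothing here understands the text; it only copies patterns.
def build_chain(text):
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def mimic(chain, start, length=12):
    word, output = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Feed it a sample of some "style" and it parrots plausible-looking fragments.
sample = "the model copies the style because the style repeats and the patterns repeat"
print(mimic(build_chain(sample), "the"))
```

Real LLMs swap the lookup table for an enormous learned probability model, but the point stands: it is pattern statistics, which is exactly why consistent styles are the easiest to copy.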
Many people are just blown away by ChatGPT. I find these people tend to be anything but critical thinkers. They also tend to have little experience with software engineering. I'm impressed with many of these tools, but the more I understand how they work, the less magical they become and the more I see the Wizard of Oz pulling the strings.
All that said, I do not believe AGI is right around the corner. That's a myth. I've been hearing it for the last 10 years. It's always right around the corner. And yet people still seem to fall for it. Sam Altman is a pitchman. Pretty much any founder in Silicon Valley is as well. You really have to take everything they say with a grain of salt because they are all pitchmen: they are marketing their products, trying to get fiat funding. They are trying to get eyeballs they can sell.
They're always going to paint everything in the rosiest light. And that's really what you have to understand. You have to be skeptical, because most of the time you're being hyped. I've been around technology and worked in the tech field long enough to see the patterns repeat.
I'm not saying there won't be improvements to "AI". I'm not saying it hasn't already improved. It's useful, but it is not a general intelligence, and it is not going to take engineers' jobs any time soon. I hesitate to even say "any time soon," as I doubt it will ever take engineers' jobs, because there will always be a need for people who know how things work. No matter how advanced artificial intelligence becomes, it is not actually intelligence. It is at best an algorithm that mimics human patterns. There is no machine creativity. There is only human creativity, and our human creativity is being used as the seed. But humans are weird. We're organic. We're not machines. As patterns form, someone breaks out of the pattern in a new and interesting way.
I'd love to hear counter arguments. Why would it make more sense to sell access to AGI vs. using it for yourself?
A couple of reasons come to mind.
  1. It's not only about the money. At the beginning of all this there was a lot of talk about how AGI would be a giant Good for humankind, and building it was a holy quest. Probably nobody believes that's really true, but is it more than 0% true?
  2. Regardless of the truth value of #1, building AGI depends on getting a shit-ton of funding, which means you have to convince people with money that you're building AGI to have a shot, and the message of "We will build AGI and then destroy all other companies, Terminator-style, with our competitive advantage" seems not super investable.
No matter how advanced artificial intelligence becomes, it is not actually intelligence.
Strong disagree on that one, but it doesn't affect my answer.
reply
Those are indeed good reasons.
reply
153 sats \ 0 replies \ @oklar 7 Jan
So, I gather that 'agentic' use cases is the term for coding and less content-related prompting, and doing that well is where we are moving to.
Instead of employing a creative marketing and digital infrastructure team to conceptualize and brand your ideas, you will get that through these huge data centres, as a service, removing the need for humans to interact with each other.
It reminds me of store signage in communist countries, with garish photorealistic imagery coupled with generic type. In terms of a product, you just get a rehashed version of something that has been done somewhere else. Like WordPress, Blogspot was a fairly useful way to self-publish, so I suppose there are limited advantages. We can now create our own apps and implement limited programming ideas.
I suspect the value in larger technological marvels is not just the engineering side of them but also the physical infrastructure and ancillary services that support them.
To answer your question: just as Prusa or some other 3D printing company created a product that lets you build more precise and useful models than you could with Lego bricks, there's a first-mover advantage in being able to monetize and sell your own data back to you as a service.
The commodification of everything will only be completed when all human information has a digital doppelgänger, and we squeeze out the last remaining utility of it.
My question would be, do we make that data open data or not?
reply
Why is it named "intelligent" when in fact it is not? To be intelligent you need consciousness... and you also need to be able to choose between right and wrong, so you need morality. Can a machine have morality? And especially, can it have morality for you, the one asking the machine to do something?
reply
That's a great question. To be honest, AI isn't even as smart or quick-thinking as a mouse right now. They call it "intelligence" because it's on its way to becoming more advanced. Currently, it's called weak AI, but the goal is AGI (Artificial General Intelligence) and then ASI (Artificial Superintelligence). Once ASI is achieved, no one knows for sure what will happen. It's believed that a "singularity" point will be reached where humans create a god-like intelligence capable of self-improvement and replication (It's scary 😂).
reply
They literally feed on your own data and then sell it back to you, saying that is "intelligence"... IT IS NOT. And stop feeding their machines. Don't give them more data.
reply
You won't believe this, but when Meta launched its Llama AI, I asked it some questions, and somehow it pulled up the entire biodata of a random real person. I was shocked; Meta must have trained Llama on highly personal data. So yes, I agree, they definitely use your data.
reply
LLMs really are advanced algorithms doing statistical analysis and probability work. It's math. It requires massive amounts of data and energy. It's not intelligence. It's not thinking as we understand it, any more than a calculator is thinking. This is the problem with the name AI: it sets up a false idea of what these machines are actually doing.
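As a rough illustration of the "statistical analysis and probability" part, here is a minimal, hypothetical sketch of the last step of next-token prediction: raw scores ("logits") are turned into probabilities with a softmax, and one token is sampled. The candidate words and scores below are invented for the example; in a real LLM they come from a massive trained network.

```python
import math
import random

# Minimal sketch of the probability step in next-token prediction.
# softmax() turns raw scores ("logits") into a probability distribution,
# then one candidate token is sampled from it. The scores are made up here;
# a real model computes them from billions of learned parameters.
def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(candidates, scores):
    probs = softmax(scores)
    return random.choices(candidates, weights=probs, k=1)[0]

context = "The cat sat on the"
candidates = ["mat", "roof", "keyboard", "moon"]
scores = [4.0, 2.5, 1.5, 0.2]  # invented logits, for illustration only

print(context, next_token(candidates, scores))  # usually "mat": probability, not thought
```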
reply
That's bullshit. That super AI crap. Don't fall for it.
reply
ASI is real, just a vision slightly out of focus, since we haven't even fully envisioned how AGI will function.
reply
You have to understand that all of this has been talked about since the 60s. You also have to understand that Silicon Valley is a fiat-fueled hype machine. There are so many wild things talked about that are pretty much impossible but are promoted to get dollars.
Talk is cheap. They made LLMs but they really aren't AIs at all.
reply
Talk is cheap. They made LLMs but they really aren't AIs at all.
What does AI mean to you? How do you imagine AI in your mind? What kind of AI would you want, and what amazing things would you like it to be able to do?
reply
what amazing things would you like it to be able to do?
Why not think of people doing amazing things? Why does it have to be the stupid machines? Machines should be in the mines, or handling radioactive rods, or doing the heavy lifting shit, instead of people.
reply
See, I am not saying replace humans at all; after all, we have a force of 8 billion people. But if we look to the future, we need advanced technology to help humanity make more progress in science and technology, find cures for diseases, and solve economic problems so that civilization doesn't collapse. ChatGPT is not what humanity needs right now, but an advanced, conscious-like being is needed to assist us, because we have reached a point where one wrong move could put the world on fire. Maybe all this I said is bullshit, but it's just my opinion. The world has already created a great technology called Bitcoin, but it still needs more.
It's part of the parlor trick. Machines have long been able to lift more than humans, move faster, or work longer, but we don't humanize them.
These algos will never have consciousness.
reply
As I said a long time ago here #171338: this is all bullshit pushing the world into a fucking communist society, where only the "AI" will have the last word... and people will no longer know what is right and wrong.
Machines are good, but only for doing the hard and dangerous work that is dangerous for people. Not for painting and writing poems for me and giving me answers from books.
reply
Yep. I don't disagree.
reply
53 sats \ 1 reply \ @galt 23h
Excellent point. It's like selling investing courses or trading tips: if they're so good, why not get rich with them instead of selling them to others? So yeah, at this stage AGI is probably hype to sell to deep pockets.
reply
44 sats \ 0 replies \ @nym 22h
Exactly. Just like BitMain
reply
It made me think of this post I recently read from Dan Koe -
He basically agrees with you.
reply
You know, what he says about being a creator is pretty right on. I would say it doesn't have to be that specific though.
Many years ago I read Linchpin: Are You Indispensable? by Seth Godin. It should be read by anyone entering their working years. Probably more relevant than ever.
The thing is, whether it's machines or cheap labor, we are all competing for work. Godin was the first person I heard explain how government schools are designed to create conformity and limit creativity. Reading this book opened my eyes to a new way of viewing work.
Use emotional energy in ANY job, whether you are cleaning tables in a coffee shop, writing code, or fixing plumbing issues. Being human and caring are things that are hard to find because they take work.
Doing good work is rewarding in and of itself. Phoning it in is not. Great book.
reply
39 sats \ 1 reply \ @TresDMan 14h
I have a couple of Seth's books in my list to read, this one might just be on there. Maybe I'll read this one first.
reply
43 sats \ 0 replies \ @TresDMan 13h
Tried to zap you, but it's telling me it failed multiple times.
I'm thinking it's because I don't have enough CCs to fulfill my default zap amount, even though I do have more than enough sats just sitting there. I thought SN might just take the rest from there.
I think it's finally time to figure out how to fully connect my own node. I have Alby Hub all set up, just haven't connected my own node yet.
reply
Do you know what episode that is from?
reply
Sorry, I don't recall. I think it was in December of 2024. You could find Prime on YouTube as well by searching for his AI videos.
reply
I think it makes sense to sell access to it, because then you not only get to monetize your own ideas, you get to monetize other people's ideas too (they pay you for the infra to operationalize their ideas).
That being said, I don't see how any of this is AGI. Maybe the G changed from "general" to "generative."
reply
That's the point. With AGI, my understanding is that IT will come up with the ideas, not the "user". It's not generative but general.
The hype around it makes one think that with AGI, the AGI could just create another AGI and people could use that themselves. The logic falls off the rails pretty quickly.
reply
That's what I mean. It's probably not actually "general". Whatever that even means.
And even if it's able to come up with its own ideas, that doesn't mean all the ideas will be good, or that humans can't come up with other ideas too.
reply
Yep, we agree. They say it is general, but I don't believe they believe that. What Altman is talking about is generative; he's just not pitching it that way. The fact that they keep hyping it is, to me, evidence that it is not a general intelligence.
reply