428 sats \ 3 replies \ @orthzar 19 Nov 2023 \ on: OpenAI fires Sam Altman for not being "consistently candid" tech
I've been critical of OpenAI since ChatGPT was released, but nobody listened to my arguments for why ML and LLMs are not a path to AI. Now that OpenAI is having problems, a bunch of people are suddenly critical of LLMs and ML.
Most people are behind the curve.
I assume you mean AGI? I don’t think so either. If we use humans as an example, we aren’t taught everything we know. We’re born with instincts that took millions of years for genetics to learn. AGI might require instincts in addition to this kind of probabilistic extrapolation.
I actually don’t know, but that’s the most bearish case I can make for current AI techniques.
This isn’t my idea either. It’s something I heard from Chomsky, who critiques current AI by deriding it as empiricism.
I assume you mean AGI?
Yes, but I should have been clearer. My critique of OpenAI is that it is a scam organization: ChatGPT was never anything more than a glorified toy, and yet OpenAI promised it was a path to AGI. If OpenAI had accepted only a few million dollars in donations instead of hundreds of millions, I wouldn't be calling them scammers.
I suspect, but can't prove, that most of the hype around OpenAI was orchestrated by OpenAI's marketing department. If true, then OpenAI operated a lot like shitcoiners: using revenue from the scam to pay unscrupulous people and websites to promote it, thereby attracting more donations. An audit of OpenAI's financial transactions would prove or disprove my suspicion.
If we use humans as an example, we aren’t taught everything we know. We’re born with instincts that took millions of years for genetics to learn. AGI might require instincts in addition to this kind of probabilistic extrapolation.
Off the top of my head, the AI project closest to instinctual knowledge that I can think of is Cyc, a hand-curated collection of common-sense knowledge. Allegedly it has proven useful, but so far not for making an AGI.
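For anyone who hasn't seen it: Cyc stores millions of hand-entered assertions in its own logic language (CycL) and runs inference over them. Here's a toy sketch of the idea in Python, not actual CycL; the facts and names are ones I made up for illustration:

```python
# Toy illustration of a Cyc-style common-sense knowledge base:
# hand-entered facts plus a tiny bit of inference over them.
# The facts below are made up; real Cyc uses a logic language
# (CycL) and millions of curated assertions.

# "genls" (subclass-of) and "isa" (instance-of), loosely
# borrowing Cyc's terminology.
GENLS = {
    "Dog": "Mammal",
    "Mammal": "Animal",
    "Animal": "LivingThing",
}
ISA = {"Fido": "Dog"}

def is_a(thing: str, category: str) -> bool:
    """Walk the subclass chain to answer 'is thing a category?'"""
    current = ISA.get(thing, thing)
    while current is not None:
        if current == category:
            return True
        current = GENLS.get(current)
    return False

print(is_a("Fido", "Animal"))     # True: Fido -> Dog -> Mammal -> Animal
print(is_a("Fido", "Vegetable"))  # False
```

The point is that none of this was learned from data: every fact was typed in by a person, which is the closest analogue we have to built-in instincts.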
AI researchers have investigated almost every conceivable avenue for creating AGI. They tried ML decades ago, but only recently have machines been fast enough, and drives big enough, to process and store the required data.
...that’s the most bearish case I can make for current AI techniques. [emphasis added]
I'm glad you put it that way. The current ubiquitous focus on Machine Learning is hampering both the research and the application of AI. Older techniques, such as expert systems, are ignored because they can't be scaled through sheer optimization and better hardware. And yet expert systems are useful right now (e.g. for helping doctors diagnose diseases); like any other programming project, they just require people to put in mental effort to make them useful.
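For readers who haven't worked with one: an expert system is essentially a set of hand-written if-then rules plus an inference engine. A minimal forward-chaining sketch in Python (the medical "rules" here are invented for illustration, not real diagnostic knowledge):

```python
# Minimal forward-chaining expert system: a rule fires whenever all of
# its premises are known facts, adding its conclusion as a new fact,
# until nothing more can be derived. The rules below are invented
# purely for illustration.

RULES = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "suspect_pneumonia"),
    ({"suspect_pneumonia"}, "order_chest_xray"),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Repeatedly apply rules until a fixed point is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

patient = {"fever", "cough", "chest_pain"}
print(forward_chain(patient))
# adds: respiratory_infection, suspect_pneumonia, order_chest_xray
```

The hard part is exactly the mental effort mentioned above: a real system needs thousands of such rules written and maintained by domain experts, and that cost doesn't shrink with better hardware.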
It’s something I heard from Chomsky, who critiques current AI by deriding it as empiricism.
I haven't read his critiques, but I suspect I'd agree with most of what he has to say, since I also take a dim view of empiricism.
On a broader note, a lot of AI researchers are about to lose their jobs, and most AI job postings will soon vanish, not to be seen again for a decade. So begins the next in a long line of AI Winters.
I suspect we’ll get AGI by combining older techniques like expert systems (to provide the instincts) with ML and whatever breakthroughs come after.
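One hedged sketch of what that combination could look like: a learned model proposes actions, and hand-written rules (the "instincts") veto anything that violates hard constraints. Everything here is hypothetical; `ml_model` is a stand-in for whatever learned component you'd actually use:

```python
# Hypothetical hybrid: a learned model scores candidate actions, and
# hand-written "instinct" rules veto anything that violates a hard
# constraint. All names and rules here are made up for illustration.

def ml_model(observation: str) -> list[tuple[str, float]]:
    """Stand-in for a learned policy: returns (action, score) pairs."""
    return [("touch", 0.6), ("retreat", 0.3), ("ignore", 0.1)]

def violates_instinct(observation: str, action: str) -> bool:
    """Hard constraint no learned score can override."""
    return "fire" in observation and action == "touch"

def decide(observation: str) -> str:
    # Try the model's suggestions in order of score, skipping vetoed ones.
    for action, _score in sorted(ml_model(observation),
                                 key=lambda pair: pair[1], reverse=True):
        if not violates_instinct(observation, action):
            return action
    return "retreat"  # fallback instinct if everything is vetoed

print(decide("open fire nearby"))  # 'retreat': the instinct vetoes 'touch'
```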
Have you been underwhelmed using ChatGPT? There are times when it really surprises me and saves me a lot of time (e.g. a well-scoped programming task), and others where it sucks. The times when it surprises me are pretty magical, I have to admit.