Some of the more unhinged writing on superintelligence pictures AI doing things that seem like magic. Crossing air gaps to escape its data center. Building nanomachines from simple components. Plowing through physical bottlenecks to revolutionize the economy in months.

More sober thinkers point out that these things might be physically impossible. You can’t do physically impossible things, even if you’re very smart.

No, say the speculators, you don’t understand. Everything is physically impossible when you’re 800 IQ points too dumb to figure it out. A chimp might feel secure that humans couldn’t reach him if he climbed a tree; he could never predict arrows, ladders, chainsaws, or helicopters. What superintelligent strategies lie as far outside our solution set as “use a helicopter” is outside a chimp’s?

Eh, say the sober people. Maybe chimp → human was a one-time gain. Humans aren’t infinitely intelligent. But we might have infinite imagination. We can’t build starships, but we can tell stories about them. If someone much smarter than us built a starship, it wouldn’t be an impossible, magical thing we could never predict. It would just be the sort of thing we’d expect someone much smarter than us to do. Maybe there’s nothing left in the helicopters-to-chimps bin - just a lot of starships that might or might not get built.

The first time I felt like I was getting real evidence on this question - the first time I viscerally felt myself in the chimp’s world, staring at the helicopter - was last week, watching OpenAI’s o3 play GeoGuessr.
I don't think there's infinite imagination at a fixed level of intelligence. There are probably diminishing returns on imagination as intelligence grows, but we are nowhere near the asymptote (if there is one) imo.
The AI in this GeoGuessr example shows how much faster intelligent machines can search through an imagination space that we share with them. I predict ASI will, at some point, imagine things that are as unimaginable and incomprehensible to us as a helicopter is to a chimp.