
100 sats \ 2 replies \ @optimism 5h
Still going through it... it's nice.
Thus far, I think the assertion that the boundaries are inherent to the frameworks themselves, rather than something that can be overcome by just taxpayer-funding Sam with more datacenter stimmies, is rather compelling. I'm thinking fans of Arrow's Impossibility Theorem will like the way this is reasoned (though lotsa maffs...)
100 sats \ 0 replies \ @carter OP 1h
I feel like there's a lot of hand-waving going on with his formal symbols. A few criticisms I would make:
  • Embodiment might be a requirement (it could be virtual), but AGI needs to experience time so it can plan through it.
  • Just because something doesn't exist (or can't be written down exactly) doesn't stop you from approximating it (Q-function learning; see the first sketch below this list).
  • You aren't in a bubble: as time passes you get more inputs from the outside world, and not just visual or audio data but information from the other agents in the society. Maybe we learn how to be conscious from the other conscious people around us, and we all just get better at approximating until no one can tell the difference, but you're really just replaying remixes of all the previous people you have ever seen. This would actually be a solution to his problem where he seems to say AI can't learn new types. I feel like he's alluding to something like the introduction rules of the typed lambda calculus, but if the system has data from the real world (which has structure even if it's noisy), it can learn that new type from the data or from other people the same way a person would (see the second sketch below this list).
  • I also get vibes of Zeno's paradox, where the algorithm is caught in an infinite regress: to consider one action it must consider all the possible outcomes, but then it must consider all the outcomes of those outcomes. Practically, a good AGI would have analogs to something like emotion, get impatient, and just decide. For example, if an LLM isn't coming up with valid solutions, it could turn its temperature up and start taking more erratic actions; this might not solve the problem, but it breaks the loop it was caught in, akin to a person losing their temper and exploding. They had no solution to their problem as a calm individual, so they changed the conditions and solved it, no matter how much they regret it later when rationally reflecting on the situation (see the third sketch below this list).
  • Also, more generally, LLMs seem to be universal function approximators, so I feel like whatever humans are doing can be created by stacking a bunch of layers together, just like a Fourier series can approximate a huge class of functions with a bunch of sine waves (see the last sketch below this list).
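
On the approximation point: here's a minimal tabular Q-learning sketch. The 5-state chain environment and every name in it are my own toy construction, nothing from the paper; the point is just that the update rule converges toward the optimal Q-function without ever writing that function down in closed form.

```python
import random

# Toy chain MDP (a hypothetical example): states 0..4, actions 0 = left,
# 1 = right; entering state 4 pays reward 1 and ends the episode.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    return nxt, float(nxt == N_STATES - 1)

# Q-table starts at zero; we never get Q* in closed form, only this estimate.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(s):
    # argmax with random tie-breaking so the all-zero initial table still moves
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for _ in range(2000):                                   # episodes
    s = 0
    for _ in range(100):                                # cap episode length
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        s2, r = step(s, a)
        # the Q-learning update: nudge the estimate toward the bootstrapped target
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
        if r:                                           # reached the goal
            break

print({s: greedy(s) for s in range(N_STATES - 1)})      # expect {0: 1, 1: 1, 2: 1, 3: 1}
```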
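
On learning new types: a toy sketch of how a "type inventory" could grow out of a data stream, using leader-follower style online clustering. The threshold and the whole setup are assumptions of mine, purely illustrative: a point that fits no known type within the threshold mints a new one.

```python
import math

THRESHOLD = 2.0          # how far a point can sit from every known type (assumed)
types = []               # each "type" is just a running centroid and a count

def observe(point):
    """Assign a 2-D point to the nearest known type, or mint a new type."""
    if types:
        i, (c, n) = min(enumerate(types),
                        key=lambda t: math.dist(point, t[1][0]))
        if math.dist(point, c) < THRESHOLD:
            # familiar: refine the existing type's centroid incrementally
            types[i] = ([(ci * n + pi) / (n + 1) for ci, pi in zip(c, point)],
                        n + 1)
            return i
    types.append((list(point), 1))   # unfamiliar: a brand-new type is born
    return len(types) - 1

stream = [(0, 0), (0.5, 0.3), (10, 10), (9.7, 10.2), (0.1, -0.2), (20, 0)]
print([observe(p) for p in stream])  # [0, 0, 1, 1, 0, 2] -- a third type appeared
```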
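
On the losing-your-temper move: a sketch of temperature escalation over a softmax action distribution. The "stuck" test here (five identical picks in a row) is a made-up stand-in for "not coming up with valid solutions".

```python
import math, random

def softmax_sample(scores, temperature):
    """Sample an index from softmax(scores / temperature)."""
    exps = [math.exp(s / temperature) for s in scores]
    r, acc = random.random() * sum(exps), 0.0
    for i, e in enumerate(exps):
        acc += e
        if acc >= r:
            return i
    return len(scores) - 1

scores = [3.0, 1.0, 0.5, 0.2]      # preferences over 4 candidate actions
temperature = 0.5                  # calm: almost always picks action 0

recent = []
for step in range(50):
    a = softmax_sample(scores, temperature)
    recent.append(a)
    # "impatience": if the last 5 picks were identical, heat up and behave
    # more erratically instead of deliberating forever
    if len(recent) >= 5 and len(set(recent[-5:])) == 1:
        temperature = min(temperature * 2, 8.0)
        print(f"step {step}: stuck on action {a}, temperature -> {temperature}")
```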
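
And on the Fourier analogy: partial sums of the square wave's Fourier series, stacking more sine waves for a better fit (stdlib-only, my example). Funnily enough it also shows an inherent limit: the overshoot right at the jumps (Gibbs phenomenon) never fully goes away, even though the average error keeps shrinking.

```python
import math

def square_wave(x):
    """Target function: a square wave with period 2*pi."""
    return 1.0 if math.sin(x) >= 0 else -1.0

def partial_sum(x, n_terms):
    """First n odd harmonics: (4/pi) * sum of sin((2k+1)x) / (2k+1)."""
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms))

xs = [i * 2 * math.pi / 400 for i in range(400)]
for n in (1, 5, 50):
    mean_err = sum(abs(square_wave(x) - partial_sum(x, n)) for x in xs) / len(xs)
    print(f"{n:3d} harmonics -> mean |error| {mean_err:.3f}")
# mean error shrinks as harmonics stack up, but the ringing right at the
# discontinuities never disappears
```
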
100 sats \ 0 replies \ @carter OP 4h
I'm reading that next