0 sats \ 1 reply \ @gd 25 Dec 2023 \ on: Meet PowerInfer: A Fast LLM on a Single Consumer-Grade GPU AI
Private local LLMs are the direction I'm heading, though I still believe we will move toward smaller, more powerful models.
I don't believe chip manufacturers and their technology are ready to operate at that scale yet.