https://medium.com/@isaiah_bjork/deploying-llama-3-1-405b-a-step-by-step-guide-9b1b852f3dc9
"with some optimization, we can run it on 192 gigabytes using 8x4090 GPUs"
Still not a home solution but closer.
Eh, not really a home solution. Running Llama 3.1 405B at full precision requires over 800 GB of memory, so you'd still need devices at home totalling more than that (more than ten machines with 64 GB each, for example). Not exactly something people are gonna have "at home".
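For reference, a rough back-of-envelope on where both figures come from (weights only, 405B parameters; this ignores KV-cache and activation overhead, so real requirements are somewhat higher):

```python
# Back-of-envelope memory estimates for Llama 3.1 405B weights.
# Weights only -- ignores KV cache and activation memory.
PARAMS = 405e9  # 405 billion parameters

def weight_gb(bits_per_param: float) -> float:
    """Approximate weight memory in GB at a given precision."""
    return PARAMS * bits_per_param / 8 / 1e9

fp16 = weight_gb(16)  # full precision: ~810 GB, matching the ">800 GB" figure
int4 = weight_gb(4)   # 4-bit quantized: ~203 GB, close to the 192 GB of 8x4090s
print(f"fp16: {fp16:.0f} GB, 4-bit: {int4:.1f} GB")
```

This is why the quoted 8x4090 setup (8 × 24 GB = 192 GB of VRAM) only works with aggressive quantization, and why unquantized inference needs the >800 GB the comment mentions.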