10 sats \ 5 replies \ @bataroot 15 Aug 2023 \ on: Setup your own private chatGPT tech
Has anyone here experimented and documented the specs of a home AI standalone machine setup?
Following what Guy Swann is doing here, but it would be great to have recommendations:
https://snort.social/e/nevent1qqswtzhnzm995h69t2mwnmte9ntflknrsdc4shcrje266ec5s7026xcpzpmhxue69uhkummnw3ezuamfdejsyg9euaj5dwsxg4hdxqweu54uf8ay3ec2d0ezs2l85xh899rkzgprmspsgqqqqqqs3rs8np
It really depends on what you want to achieve.
A reasonable laptop can run GPT4All and Stable Diffusion quite well.
Probably the best deal would be a Mac mini with at least an M1 chip; those are extremely powerful machines, and they are silent. They run both GPT and SD without any issues.
reply
I've been considering an M1 Mac mini as an AI workhorse. The minis in particular are unusually cost-efficient for an Apple machine.
reply
Apple silicon is a game changer.
reply
How quick are Stable Diffusion and GPT on an M1, in your experience? I tried running a GPT-like model on regular x86 desktop hardware and it was horrible, basically unusable.
reply
SD takes a few seconds, maybe a minute or two depending on other load (I'm usually doing other stuff on it), to generate a full image on an M1 Mac mini. GPT responses with this particular setup should come back in a few seconds.
There's a lot of tweaking you can do; in particular, if you have a GPU, you can configure a llama model to offload work to the GPU.
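As a rough sketch of the GPU offload idea with llama.cpp (model path and layer count here are placeholders, and the build flag is for Apple Metal; NVIDIA setups use a different build flag):

```shell
# Build llama.cpp with Metal GPU support on Apple Silicon.
make LLAMA_METAL=1

# Offload 32 of the model's layers to the GPU via --n-gpu-layers (-ngl);
# the model file path below is a placeholder for your own downloaded model.
./main -m ./models/llama-7b.bin -ngl 32 -p "Hello, how are you?"
```

The more layers you offload, the less work stays on the CPU, bounded by how much VRAM (or unified memory) you have.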
reply