102 sats \ 2 replies \ @optimism 20h \ parent \ on: Reminder: you don't control the records of your chats with LLMs AI
Here's what my local llama3.2 says about the masturbation example:
Pretty solid advice. I've been reluctant to try llama on my creaky old thinkpad, but maybe this is a good reason to upgrade.
You need something with a great GPU and a ton of memory. Three of my friends who are on a budget (like a proper stacker) run Mac minis and just screenshare/ssh into them from their laptops (easier if you also have a MacBook, but doable from an Intel/Windows PC too), using them like little servers. Because of the unified memory architecture, as long as you don't run a million things on it concurrently, you get a ton of bang for your buck on these, even with the more expensive configurations.
PS: I'm interested to learn what people on MS/Intel platforms use, though. I only have some Xeon servers nowadays; all my other hardware is ARM and I'm testing RISC-V. No more Intel workstations for me, except old laptops for compatibility testing, which I do once every six months at most.
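For anyone curious what the ssh-into-a-mini setup looks like in practice, here's a minimal sketch. It assumes Ollama as the model runtime (the parent mentions llama3.2, which Ollama ships); the hostname `mini.local` and the user are placeholders for your own box.

```shell
# On the Mac mini (one-time): install Ollama and pull a model.
brew install ollama
ollama pull llama3.2

# Start the server (default port 11434). Setting OLLAMA_HOST to
# 0.0.0.0 exposes it on the LAN; skip that if you only tunnel in.
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# On the laptop: forward the API port over ssh so nothing is
# exposed to the network ("user@mini.local" is a placeholder)...
ssh -N -L 11434:localhost:11434 user@mini.local &

# ...then talk to it as if it were running locally:
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "hello", "stream": false}'
```

The ssh tunnel is the nice part: your chat history never leaves the two machines, which is kind of the whole point of the parent post.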