
Credit: @gnukeith via X/Twitter
This is a guide to setting up LLMs locally for use with Open WebUI and Brave Browser.
Why Run a Local LLM Instead of a Cloud-Based One?
Running a local LLM (Large Language Model) on your own hardware offers several advantages over using cloud-based or remote LLMs:
  1. Privacy and Security: Your data remains on your local machine, ensuring that sensitive information is not transmitted over the internet or stored on external servers.
  2. Control and Customization: You have full control over the models you use and can customize them to better suit your specific needs and preferences.
  3. Reduced Latency: Responses do not depend on a network round trip to remote servers, so interactions can feel faster and more seamless (hardware permitting).
  4. Cost Efficiency: By using your own hardware, you can avoid ongoing subscription fees or usage costs associated with cloud-based LLM services.
  5. Offline Accessibility: Local LLMs can operate without an internet connection, making them useful in environments with limited or no internet access.
Overall, running a local LLM offers enhanced privacy, control, and efficiency, making it an attractive option for individuals and organizations with the hardware to support it.
———————————————————————
Before you continue:
You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
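If you are not sure how much memory your machine has, a short script can check for you. This is a minimal sketch, assuming the third-party psutil package (`pip install psutil`); the 8/16/32 GB thresholds are the ones quoted above.

```python
import psutil  # third-party: pip install psutil

# Map the RAM guidance above to the largest model tier that should fit.
TIERS = [(32, "33B"), (16, "13B"), (8, "7B")]

total_gb = psutil.virtual_memory().total / (1024 ** 3)
print(f"Total RAM: {total_gb:.1f} GB")

for min_gb, tier in TIERS:
    if total_gb >= min_gb:
        print(f"You should be able to run up to the {tier} models.")
        break
else:
    print("Under 8 GB of RAM: even 7B models may struggle.")
```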
This guide is in two sections:
the first covers Open WebUI, and the second covers Brave's BYOM (Bring Your Own Model) feature.
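Both tools typically talk to a local model server over HTTP, so before wiring up either one it helps to confirm the server answers. The sketch below is an assumption-heavy smoke test: it assumes Ollama (a common backend for both Open WebUI and Brave BYOM) is serving on its default endpoint at http://localhost:11434, and "llama3" is a placeholder for whichever model you have already pulled.

```python
import json
import urllib.request

# Assumes Ollama's default local endpoint; "llama3" is a placeholder
# for whichever model you have already pulled with `ollama pull`.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = json.dumps({
    "model": "llama3",
    "prompt": "Reply with one word: ready",
    "stream": False,  # return a single JSON object instead of a stream
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req, timeout=120) as resp:
    body = json.load(resp)

print(body["response"])  # the model's completion text
```

If this prints a reply, the same address is what you will point Open WebUI or Brave at in the sections that follow.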