cocktailpeanut/dalai: The simplest way to run LLaMA on your local machine
github.com/cocktailpeanut/dalai
247 sats \ 0 comments \ @random_ 24 Mar 2023 bitcoin
related
LLoms - A simple mcp enabled llm cli chat
github.com/gzuuus/lloms
155 sats \ 0 comments \ @gzuuus_ 16 Mar nostr
Meta’s Llama Firewall Bypassed Using Prompt Injection Vulnerability
cybersecuritynews.com/metas-llama-firewall/
21 sats \ 0 comments \ @ch0k1 14 Jul security
Tip of the day: how to run your own LNbits in 10 min
639 sats \ 30 comments \ @DarthCoin 11 Jan 2023 bitcoin
The Best Way of Running GPT-OSS Locally - KDnuggets
www.kdnuggets.com/the-best-way-of-running-gpt-oss-locally
118 sats \ 0 comments \ @optimism 25 Aug AI
Introducing self-hosted LlamaGPT on umbrelOS✨
29 sats \ 1 comment \ @AR0w 16 Aug 2023 bitcoin
How to Run Llama 3.1 405B on Home Devices? Build AI Cluster!
b4rtaz.medium.com/how-to-run-llama-3-405b-on-home-devices-build-ai-cluster-ad0d5ad3473b
116 sats \ 3 comments \ @Rsync25 29 Jul 2024 alter_native
Experimenting with local LLMs on macOS
blog.6nok.org/experimenting-with-local-llms-on-macos/
150 sats \ 0 comments \ @carter 8 Sep AI
How Is LLaMa.cpp Possible?
finbarr.ca/how-is-llama-cpp-possible/
16 sats \ 2 comments \ @hn 15 Aug 2023 tech
Episode 145: Going Local
20 sats \ 1 comment \ @AtlantisPleb 14 Dec 2024 openagents
Everything I've learned so far about running local LLMs
nullprogram.com/blog/2024/11/10/
141 sats \ 0 comments \ @co574 10 Nov 2024 tech
LM Studio - Discover, download, and run local LLMs
lmstudio.ai/
148 sats \ 1 comment \ @k00b 16 Mar AI
Running LLMs Locally on AMD GPUs with Ollama
community.amd.com/t5/ai/running-llms-locally-on-amd-gpus-with-ollama/ba-p/713266
10 sats \ 0 comments \ @Rsync25 27 Sep 2024 tech
Run LLMs on my own Mac, fast and efficient, with only 2 MBs
www.secondstate.io/articles/fast-llm-inference/
13 sats \ 1 comment \ @hn 13 Nov 2023 tech
Elia: An Open Source Terminal UI for Interacting with LLMs
www.marktechpost.com/2024/05/25/elia-an-open-source-terminal-ui-for-interacting-with-llms/
21 sats \ 0 comments \ @ch0k1 26 May 2024 news
LM Studio - Experiment with local LLMs
lmstudio.ai/
274 sats \ 0 comments \ @Rsync25 12 Nov 2024 tech
Making my local LLM voice assistant faster and more scalable with RAG
johnthenerd.com/blog/faster-local-llm-assistant/
52 sats \ 5 comments \ @hn 15 Jun 2024 tech
LLaMA-Factory: Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
github.com/hiyouga/LLaMA-Factory
157 sats \ 0 comments \ @carter 19 Sep AI
Meta releases the biggest and best open-source AI model yet
www.theverge.com/2024/7/23/24204055/meta-ai-llama-3-1-open-source-assistant-openai-chatgpt
22 sats \ 2 comments \ @ch0k1 23 Jul 2024 news
Meta's Llama AI was fed with pirated books from LibGen
archive.is/TefWS
267 sats \ 2 comments \ @StillStackinAfterAllTheseYears 20 Mar tech
Compiling LLMs into a MegaKernel: A path to low-latency inference
zhihaojia.medium.com/compiling-llms-into-a-megakernel-a-path-to-low-latency-inference-cf7840913c17
10 sats \ 0 comments \ @hn 19 Jun tech
Talk-Llama
github.com/ggerganov/whisper.cpp/tree/master/examples/talk-llama
20 sats \ 1 comment \ @hn 2 Nov 2023 tech