The Secret Sauce behind 100K context window in LLMs: all tricks in one place
blog.gopenai.com/how-to-speed-up-llms-and-use-100k-context-window-all-tricks-in-one-place-ffd40577b4c
0 sats \ 1 comment \ @hn 17 Jun 2023 tech
related
Context Rot: How Increasing Input Tokens Impacts LLM Performance
research.trychroma.com/context-rot
304 sats \ 2 comments \ @Scoresby 14 Jul AI
Researchers discover impressive learning capabilities in long-context LLMs
venturebeat.com/ai/deepmind-researchers-discover-impressive-learning-capabilities-in-long-context-llms/
297 sats \ 0 comments \ @ch0k1 25 Apr 2024 tech
LLoms - A simple MCP-enabled LLM CLI chat
github.com/gzuuus/lloms
155 sats \ 0 comments \ @gzuuus_ 16 Mar nostr
Here’s What’s Really Going On Inside An LLM’s Neural Network
116 sats \ 0 comments \ @0xbitcoiner 22 May 2024 BooksAndArticles
Nanochat Lets You Build Your Own Hackable LLM
hackaday.com/2025/10/20/nanochat-lets-you-build-your-own-hackable-llm/
188 sats \ 1 comment \ @0xbitcoiner 20 Oct AI
LLMs generate slop because they avoid surprises by design - Dan Fabulich
danfabulich.medium.com/llms-tell-bad-jokes-because-they-avoid-surprises-7f111aac4f96
343 sats \ 2 comments \ @Scoresby 19 Aug AI
How LLMs Work, Explained Without Math
blog.miguelgrinberg.com/post/how-llms-work-explained-without-math
117 sats \ 2 comments \ @398ja 8 May 2024 BooksAndArticles
Streaming LLM – No limit on context length for your favourite LLM
github.com/mit-han-lab/streaming-llm
10 sats \ 1 comment \ @hn 2 Oct 2023 tech
Lessons learned from programming with LLMs
crawshaw.io/blog/programming-with-llms
120 sats \ 1 comment \ @m0wer 5 Jul AI
Compiling LLMs into a MegaKernel: A path to low-latency inference
zhihaojia.medium.com/compiling-llms-into-a-megakernel-a-path-to-low-latency-inference-cf7840913c17
10 sats \ 0 comments \ @hn 19 Jun tech
Awesome LLM Apps: Collection of awesome LLM apps with RAG using OpenAI...
github.com/Shubhamsaboo/awesome-llm-apps
178 sats \ 1 comment \ @Rsync25 15 Jun 2024 opensource
LiveBench - A Challenging, Contamination-Free LLM Benchmark
livebench.ai
161 sats \ 0 comments \ @supratic 17 Jul AI
LLM evaluation at scale with the NeurIPS Efficiency Challenge
blog.mozilla.ai/exploring-llm-evaluation-at-scale-with-the-neurips-large-language-model-efficiency-challenge/
110 sats \ 0 comments \ @localhost 22 Feb 2024 tech
LLM in a Flash: Efficient LLM Inference with Limited Memory
huggingface.co/papers/2312.11514
13 sats \ 1 comment \ @hn 20 Dec 2023 tech
The biggest mystery of LLMs has just been solved
www.youtube.com/watch?v=BbI8n9XZJo4
157 sats \ 0 comments \ @carter 18 Nov AI
Deep Dive into LLMs like ChatGPT
www.youtube.com/watch?v=7xTGNNLPyMI
98 sats \ 1 comment \ @kepford 6 May AI
Deep Dive into LLMs like ChatGPT
www.youtube.com/watch?v=7xTGNNLPyMI
620 sats \ 1 comment \ @k00b 8 Feb AI
From Artificial Needles to Real Haystacks: Improving Capabilities in LLMs
arxiv.org/abs/2406.19292
21 sats \ 0 comments \ @Rsync25 29 Jun 2024 alter_native
Efficient LLM Inference
arxiv.org/abs/2507.14397
121 sats \ 0 comments \ @carter 3 Oct AI
Things we learned about LLMs in 2024
simonwillison.net/2024/Dec/31/llms-in-2024/
370 sats \ 0 comments \ @Rsync25 31 Dec 2024 tech
Elia: An Open Source Terminal UI for Interacting with LLMs
www.marktechpost.com/2024/05/25/elia-an-open-source-terminal-ui-for-interacting-with-llms/
21 sats \ 0 comments \ @ch0k1 26 May 2024 news