Sampling and structured outputs in LLMs
parthsareen.com/blog.html#sampling.md
157 sats \ 0 comments \ @carter 23 Sep AI

related

Deep Dive into LLMs like ChatGPT
www.youtube.com/watch?v=7xTGNNLPyMI
98 sats \ 1 comment \ @kepford 6 May AI

Here’s how I use LLMs to help me write code -- Simon Willison
simonwillison.net/2025/Mar/11/using-llms-for-code/
520 sats \ 0 comments \ @StillStackinAfterAllTheseYears 12 Mar tech

How LLMs Work, Explained Without Math
blog.miguelgrinberg.com/post/how-llms-work-explained-without-math
117 sats \ 2 comments \ @398ja 8 May 2024 BooksAndArticles

Lessons learned from programming with LLMs
crawshaw.io/blog/programming-with-llms
120 sats \ 1 comment \ @m0wer 5 Jul AI

What We Know About LLMs (A Primer)
willthompson.name/what-we-know-about-llms-primer
163 sats \ 1 comment \ @hn 25 Jul 2023 tech

LLMs in Programming
www.thecodedmessage.com/posts/llm-in-programming
167 sats \ 0 comments \ @kehiy 11 Aug AI

From Artificial Needles to Real Haystacks: Improving Capabilities in LLMs
arxiv.org/abs/2406.19292
21 sats \ 0 comments \ @Rsync25 29 Jun 2024 alter_native

DBRX: A new open LLM
www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm
10 sats \ 1 comment \ @hn 31 Mar 2024 tech

Building LLMs from the Ground Up: A 3-hour Coding Workshop
magazine.sebastianraschka.com/p/building-llms-from-the-ground-up
55 sats \ 0 comments \ @Rsync25 31 Aug 2024 tech

LLM in a Flash: Efficient LLM Inference with Limited Memory
huggingface.co/papers/2312.11514
13 sats \ 1 comment \ @hn 20 Dec 2023 tech

OpenCoder: Open-Source LLM for Coding
arxiv.org/abs/2411.04905
52 sats \ 0 comments \ @hn 9 Nov 2024 tech

LiveBench - A Challenging, Contamination-Free LLM Benchmark
livebench.ai
161 sats \ 0 comments \ @supratic 17 Jul AI

Are LLMs random?
rnikhil.com/2025/04/26/llm-coin-toss-odd-even
269 sats \ 1 comment \ @carter 30 Apr AI

Hardware Acceleration of LLMs: A comprehensive survey and comparison
arxiv.org/abs/2409.03384
21 sats \ 0 comments \ @hn 7 Sep 2024 tech

Compiling LLMs into a MegaKernel: A path to low-latency inference
zhihaojia.medium.com/compiling-llms-into-a-megakernel-a-path-to-low-latency-inference-cf7840913c17
10 sats \ 0 comments \ @hn 19 Jun tech

LLMs generate slop because they avoid surprises by design - Dan Fabulich
danfabulich.medium.com/llms-tell-bad-jokes-because-they-avoid-surprises-7f111aac4f96
343 sats \ 2 comments \ @Scoresby 19 Aug AI

Things we learned about LLMs in 2024
simonwillison.net/2024/Dec/31/llms-in-2024/
370 sats \ 0 comments \ @Rsync25 31 Dec 2024 tech

Apple just released an interesting diffusion based coding language model
9to5mac.com/2025/07/04/apple-just-released-a-weirdly-interesting-coding-language-model/
131 sats \ 1 comment \ @carter 8 Jul AI

LLMs aren’t world models
yosefk.com/blog/llms-arent-world-models.html
121 sats \ 0 comments \ @carter 13 Aug AI

Compute Where It Counts: High Quality Sparsely Activated LLMs
crystalai.org/blog/2025-08-18-compute-where-it-counts
100 sats \ 0 comments \ @carter 21 Aug AI

LLM Visualization
bbycroft.net/llm
202 sats \ 7 comments \ @hn 3 Dec 2023 tech