Sampling and structured outputs in LLMs
parthsareen.com/blog.html#sampling.md
157 sats \ 0 comments \ @carter 23 Sep AI
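The linked post's topic lends itself to a small illustration. The sketch below is not taken from the article; it is a toy NumPy version of the two ideas in the title: temperature and top-k sampling over a model's logits, and structured output treated as constrained sampling, where tokens the target format does not allow at the current position are masked out before sampling. The vocabulary and logit values are invented for the example.

```python
# Toy illustration of temperature / top-k sampling and of "structured output"
# as constrained decoding: masking logits so only format-legal tokens survive.
# Pure NumPy; the vocabulary and logits below are made up.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    x = x - np.max(x)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum()

def sample(logits, temperature=1.0, top_k=None, allowed=None):
    """Sample one token index from raw logits.

    temperature rescales the distribution (lower = greedier),
    top_k keeps only the k highest-scoring tokens,
    allowed (a set of indices) masks everything else out, the same idea
    a structured-output engine uses to keep generation on a grammar.
    """
    logits = np.asarray(logits, dtype=float).copy()
    if allowed is not None:            # grammar / schema constraint
        mask = np.full_like(logits, -np.inf)
        mask[list(allowed)] = 0.0
        logits = logits + mask
    if top_k is not None:              # truncate the low-probability tail
        kth_largest = np.sort(logits)[-top_k]
        logits[logits < kth_largest] = -np.inf
    probs = softmax(logits / max(temperature, 1e-6))
    return rng.choice(len(logits), p=probs)

vocab = ["{", "}", '"name"', ":", '"Ada"', "hello", "world"]
logits = [2.0, 0.5, 1.5, 1.0, 1.2, 2.5, 0.3]

# Unconstrained sampling can pick chatty tokens like "hello".
print(vocab[sample(logits, temperature=0.8, top_k=3)])

# Constrained sampling: pretend the JSON grammar only allows "{" or '"name"'
# at this position, so every other token is masked to zero probability.
print(vocab[sample(logits, temperature=0.8, allowed={0, 2})])
```

Real constrained-decoding engines apply the same kind of mask at every decoding step, recomputing the allowed set from the grammar or JSON schema as the output grows.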
related
Deep Dive into LLMs like ChatGPT
www.youtube.com/watch?v=7xTGNNLPyMI
98 sats \ 1 comment \ @kepford 6 May AI
Here’s how I use LLMs to help me write code -- Simon Willison
simonwillison.net/2025/Mar/11/using-llms-for-code/
520 sats \ 0 comments \ @StillStackinAfterAllTheseYears 12 Mar tech
How LLMs Work, Explained Without Math
blog.miguelgrinberg.com/post/how-llms-work-explained-without-math
117 sats \ 2 comments \ @398ja 8 May 2024 BooksAndArticles
NVIDIA: Transforming LLM Alignment with Efficient Reinforcement Learning
www.marktechpost.com/2024/05/05/nvidia-ai-open-sources-nemo-aligner-transforming-large-language-model-alignment-with-efficient-reinforcement-learning/
20 sats \ 0 comments \ @ch0k1 7 May 2024 tech
Lessons learned from programming with LLMs
crawshaw.io/blog/programming-with-llms
120 sats \ 1 comment \ @m0wer 5 Jul AI
What We Know About LLMs (A Primer)
willthompson.name/what-we-know-about-llms-primer
163 sats \ 1 comment \ @hn 25 Jul 2023 tech
Efficient LLM Inference
arxiv.org/abs/2507.14397
121 sats \ 0 comments \ @carter 3 Oct AI
LLMs in Programming
www.thecodedmessage.com/posts/llm-in-programming
167 sats \ 0 comments \ @kehiy 11 Aug AI
Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity
arxiv.org/abs/2510.01171
147 sats \ 0 comments \ @carter 16 Oct AI
Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity
arxiv.org/abs/2510.01171
166 sats \ 0 comments \ @Scoresby 17 Oct AI
From Artificial Needles to Real Haystacks: Improving Capabilities in LLMs
arxiv.org/abs/2406.19292
21 sats \ 0 comments \ @Rsync25 29 Jun 2024 alter_native
DBRX: A new open LLM
www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm
10 sats \ 1 comment \ @hn 31 Mar 2024 tech
LLM-Deflate: Extracting LLMs Into Datasets
www.scalarlm.com/blog/llm-deflate-extracting-llms-into-datasets/
100 sats \ 1 comment \ @carter 29 Sep AI
Building LLMs from the Ground Up: A 3-hour Coding Workshop
magazine.sebastianraschka.com/p/building-llms-from-the-ground-up
55 sats \ 0 comments \ @Rsync25 31 Aug 2024 tech
LLM in a Flash: Efficient LLM Inference with Limited Memory
huggingface.co/papers/2312.11514
13 sats \ 1 comment \ @hn 20 Dec 2023 tech
OpenCoder: Open-Source LLM for Coding
arxiv.org/abs/2411.04905
52 sats \ 0 comments \ @hn 9 Nov 2024 tech
LiveBench - A Challenging, Contamination-Free LLM Benchmark
livebench.ai
161 sats \ 0 comments \ @supratic 17 Jul AI
Are LLMs random?
rnikhil.com/2025/04/26/llm-coin-toss-odd-even
269 sats \ 1 comment \ @carter 30 Apr AI (see the experiment sketch after this list)
Hardware Acceleration of LLMs: A comprehensive survey and comparison
arxiv.org/abs/2409.03384
21 sats \ 0 comments \ @hn 7 Sep 2024 tech
Compiling LLMs into a MegaKernel: A path to low-latency inference
zhihaojia.medium.com/compiling-llms-into-a-megakernel-a-path-to-low-latency-inference-cf7840913c17
10 sats \ 0 comments \ @hn 19 Jun tech
LLMs generate slop because they avoid surprises by design - Dan Fabulich
danfabulich.medium.com/llms-tell-bad-jokes-because-they-avoid-surprises-7f111aac4f96
343 sats \ 2 comments \ @Scoresby 19 Aug AI
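Several of the related links above ("Are LLMs random?", the two "Verbalized Sampling" posts, and "LLMs generate slop because they avoid surprises by design") circle the same question: how diverse or unbiased a model's repeated samples actually are. Below is a rough sketch of the coin-toss experiment the rnikhil.com post describes, written against the OpenAI Python SDK; the model name, prompt wording, and trial count are placeholders rather than details taken from any of the posts.

```python
# Rough sketch of a coin-toss bias check against a chat model.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY are set;
# the model name, prompt, and trial count are placeholders.
from collections import Counter

from openai import OpenAI

client = OpenAI()
counts = Counter()

for _ in range(100):  # trial count is arbitrary
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=1.0,
        messages=[{
            "role": "user",
            "content": "Flip a fair coin. Answer with exactly one word: heads or tails.",
        }],
    )
    counts[resp.choices[0].message.content.strip().lower()] += 1

print(counts)  # a fair coin would hover near 50/50
```

The rnikhil.com post reports heavy bias in exactly this kind of run even at temperature 1.0, which is the mode-collapse behaviour the Verbalized Sampling paper aims to mitigate.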