Meet PowerInfer: A Fast LLM on a Single Consumer-Grade GPU
www.marktechpost.com/2023/12/23/meet-powerinfer-a-fast-large-language-model-llm-on-a-single-consumer-grade-gpu-that-speeds-up-machine-learning-model-inference-by-11-times/
10 sats · 2 comments · @ch0k1 · 24 Dec 2023 · AI
related
Nvidia Shows Off GPU for Ultra-Long Context Models
developer.nvidia.com/blog/nvidia-rubin-cpx-accelerates-inference-performance-and-efficiency-for-1m-token-context-workloads/
157 sats · 1 comment · @lunin · 14 Sep · AI
A New RISC-V Breakthrough Chip Merges CPU, GPU & AI into One - techovedas
techovedas.com/a-new-risc-v-breakthrough-chip-merges-cpu-gpu-ai-into-one/
78 sats · 0 comments · @ch0k1 · 6 Apr 2024 · tech
AMD unveils powerful new AI chip to challenge Nvidia
arstechnica.com/ai/2024/10/amd-unveils-powerful-new-ai-chip-to-challenge-nvidia/
41 sats · 0 comments · @ch0k1 · 11 Oct 2024 · news
AMD Ryzen™ AI, the world's most powerful built-in AI processor in laptops!
31 sats · 1 comment · @ama · 7 Jan · tech
Hardware Acceleration of LLMs: A comprehensive survey and comparison
arxiv.org/abs/2409.03384
21 sats · 0 comments · @hn · 7 Sep 2024 · tech
1-Bit LLM: The Most Efficient LLM Possible?
www.youtube.com/watch?v=7hMoz9q4zv0
533 sats · 1 comment · @carter · 24 Jun · AI
LLM in a Flash: Efficient LLM Inference with Limited Memory
huggingface.co/papers/2312.11514
13 sats · 1 comment · @hn · 20 Dec 2023 · tech
Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM
www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm
306 sats · 1 comment · @nullama · 13 Apr 2023 · bitcoin
AMD's MI300X Outperforms Nvidia's H100 for LLM Inference
www.blog.tensorwave.com/amds-mi300x-outperforms-nvidias-h100-for-llm-inference/
202 sats · 0 comments · @hn · 13 Jun 2024 · tech
LiveBench - A Challenging, Contamination-Free LLM Benchmark
livebench.ai
161 sats · 0 comments · @supratic · 17 Jul · AI
Apple collaborates with NVIDIA to research faster LLM performance - 9to5Mac
9to5mac.com/2024/12/18/apple-collaborates-with-nvidia-to-research-faster-llm-performance/
14 sats · 1 comment · @Rsync25 · 19 Dec 2024 · tech
Compiling LLMs into a MegaKernel: A path to low-latency inference
zhihaojia.medium.com/compiling-llms-into-a-megakernel-a-path-to-low-latency-inference-cf7840913c17
10 sats · 0 comments · @hn · 19 Jun · tech
Lm.rs: Minimal CPU LLM inference in Rust with no dependency
github.com/samuel-vitorino/lm.rs
10 sats · 0 comments · @hn · 11 Oct 2024 · tech
NVIDIA: Transforming LLM Alignment with Efficient Reinforcement Learning
www.marktechpost.com/2024/05/05/nvidia-ai-open-sources-nemo-aligner-transforming-large-language-model-alignment-with-efficient-reinforcement-learning/
20 sats · 0 comments · @ch0k1 · 7 May 2024 · tech
Bend: a high-level language that runs on GPUs (via HVM2)
github.com/HigherOrderCO/Bend
51 sats · 0 comments · @hn · 17 May 2024 · tech
Don't Overthink It: A Survey of Efficient R1-style LRMs
arxiv.org/abs/2508.02120
132 sats · 2 comments · @optimism · 10 Aug · AI
Gemma3 – The current strongest model that fits on a single GPU
ollama.com/library/gemma3
46 sats · 0 comments · @hn · 12 Mar · tech
LLaMA-Factory: Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
github.com/hiyouga/LLaMA-Factory
157 sats · 0 comments · @carter · 19 Sep · AI
Minimal implementation of Mamba, the new LLM architecture, in 1 file of PyTorch
github.com/johnma2006/mamba-minimal
15 sats · 1 comment · @hn · 20 Dec 2023 · tech
DBRX: A new open LLM
www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm
10 sats · 1 comment · @hn · 31 Mar 2024 · tech
LLM evaluation at scale with the NeurIPS Efficiency Challenge
blog.mozilla.ai/exploring-llm-evaluation-at-scale-with-the-neurips-large-language-model-efficiency-challenge/
110 sats · 0 comments · @localhost · 22 Feb 2024 · tech