TinyML: Ultra-low power Machine Learning
www.ikkaro.net/what-tinyml-is/
105 sats \ 1 comment \ @hn 16 Jan 2024 tech
related
A New RISC-V Breakthrough Chip Merges CPU, GPU & AI into One - techovedas
techovedas.com/a-new-risc-v-breakthrough-chip-merges-cpu-gpu-ai-into-one/
78 sats \ 0 comments \ @ch0k1 6 Apr 2024 tech
Google unveils AI small enough to run on a toaster
152 sats \ 0 comments \ @lunin 15 Aug AI
tinygrad: A simple and powerful neural network framework
tinygrad.org/
10 sats \ 1 comment \ @premitive1 15 Aug 2023 tech
Researchers develop a novel ultra-low-power memory for neuromorphic computing
techxplore.com/news/2024-04-ultra-lowpower-memory-neuromorphic.html
227 sats \ 0 comments \ @ch0k1 4 Apr 2024 science
Integer addition algorithm could reduce energy needs of AI by 95%
techxplore.com/news/2024-10-integer-addition-algorithm-energy-ai.html
111 sats \ 0 comments \ @ch0k1 13 Oct 2024 news
Meet PowerInfer: A Fast LLM on a Single Consumer-Grade GPU
www.marktechpost.com/2023/12/23/meet-powerinfer-a-fast-large-language-model-llm-on-a-single-consumer-grade-gpu-that-speeds-up-machine-learning-model-inference-by-11-times/
10 sats \ 2 comments \ @ch0k1 24 Dec 2023 AI
1-Bit LLM: The Most Efficient LLM Possible?
www.youtube.com/watch?v=7hMoz9q4zv0
533 sats \ 1 comment \ @carter 24 Jun AI
Compiling LLMs into a MegaKernel: A path to low-latency inference
zhihaojia.medium.com/compiling-llms-into-a-megakernel-a-path-to-low-latency-inference-cf7840913c17
10 sats \ 0 comments \ @hn 19 Jun tech
Hidet: A Deep Learning Compiler for Efficient Model Serving
pytorch.org/blog/introducing-hidet/
110 sats \ 1 comment \ @hn 28 Apr 2023 tech
Deploying Ultralytics YOLO Models On Raspberry Pi Devices
www.raspberrypi.com/news/deploying-ultralytics-yolo-models-on-raspberry-pi-devices/
181 sats \ 0 comments \ @0xbitcoiner 29 Nov 2024 DIY
DeepScaleR: Surpassing O1-Preview with a 1.5B Model by Scaling RL
pretty-radio-b75.notion.site/DeepScaleR-Surpassing-O1-Preview-with-a-1-5B-Model-by-Scaling-RL-19681902c1468005bed8ca303013a4e2
21 sats \ 0 comments \ @hn 11 Feb tech
Gemma 3n models: designed for efficient execution on everyday devices
ollama.com/library/gemma3n
124 sats \ 5 comments \ @m0wer 6 Jul AI
No More Floating Points, The Era of 1.58-bit Large Language Models
medium.com/ai-insights-cobet/no-more-floating-points-the-era-of-1-58-bit-large-language-models-b9805879ac0a
100 sats \ 1 comment \ @0xbitcoiner 11 Mar 2024 science freebie
Apple quietly released a framework for machine learning on Apple silicon
twitter.com/deliprao/status/1732250132614184970
131 sats \ 2 comments \ @zuspotirko 6 Dec 2023 AI
‘Mind-blowing’ IBM chip speeds up AI
www.nature.com/articles/d41586-023-03267-0
41 sats \ 0 comments \ @owleyedapprentice 23 Oct 2023 tech
Minimal implementation of Mamba, the new LLM architecture, in 1 file of PyTorch
github.com/johnma2006/mamba-minimal
15 sats \ 1 comment \ @hn 20 Dec 2023 tech
AMD Ryzen™ AI, the world's most powerful built-in AI processor in laptops!
31 sats \ 1 comment \ @ama 7 Jan tech
OpenAI o3-mini model release
openai.com/index/openai-o3-mini/
196 sats \ 0 comments \ @ch0k1 3 Feb news
Compute Where It Counts: High Quality Sparsely Activated LLMs
crystalai.org/blog/2025-08-18-compute-where-it-counts
100 sats \ 0 comments \ @carter 21 Aug AI
LLM in a Flash: Efficient LLM Inference with Limited Memory
huggingface.co/papers/2312.11514
13 sats \ 1 comment \ @hn 20 Dec 2023 tech
Microwatt: A tiny Open POWER ISA softcore written in VHDL 2008
github.com/antonblanchard/microwatt
31 sats \ 1 comment \ @hn 21 Oct 2023 tech