Consistency LLM: converting LLMs to parallel decoders accelerates inference 3.5x
hao-ai-lab.github.io/blogs/cllm/
@hn · 8 May · tech
This link was posted by zhisbug 2 hours ago on HN. It received 167 points and 19 comments.
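For context on the title: CLLMs fine-tune a model so that Jacobi-style parallel decoding converges in few steps. Below is a minimal sketch of plain Jacobi decoding, the fixed-point iteration the linked post builds on; `model`, `jacobi_decode`, and all parameter names are hypothetical, assuming a causal LM that returns per-position logits. This is an illustration of the general technique, not the authors' implementation.

```python
import torch

def jacobi_decode(model, prompt_ids, n_tokens, max_iters=50, pad_id=0):
    """Decode a block of n_tokens in parallel by fixed-point iteration.

    Instead of generating one token per forward pass, start from a guess
    for the whole block and refine all positions simultaneously until the
    sequence stops changing (the Jacobi fixed point). The fixed point
    matches greedy autoregressive decoding, but it can take fewer forward
    passes when many positions stabilize early.
    """
    # Initial guess for the block; any guess converges under greedy decoding.
    guess = torch.full((1, n_tokens), pad_id, dtype=torch.long)
    for _ in range(max_iters):
        seq = torch.cat([prompt_ids, guess], dim=1)
        logits = model(seq)  # assumed shape: (1, seq_len, vocab)
        # Greedy prediction for every block position at once, each
        # conditioned on the prompt plus the *previous* guess.
        new_guess = logits[:, prompt_ids.shape[1] - 1 : -1, :].argmax(dim=-1)
        if torch.equal(new_guess, guess):  # fixed point reached
            break
        guess = new_guess
    return guess
```

Plain Jacobi decoding often needs nearly as many iterations as tokens; the consistency training described in the post teaches the model to jump to the fixed point in far fewer steps, which is where the reported 3.5x speedup comes from.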