Embedding models have become an important part of LLM applications, enabling tasks such as measuring ... However, embedding models are mostly based on a transformer architecture that is different from the one ... This makes it difficult to transfer the massive work being done on generative models to improve embedding models.
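As an illustration of the similarity-measurement use case mentioned above, here is a minimal sketch of comparing embedding vectors by cosine similarity. The vectors and names below are hypothetical (plain NumPy, no particular embedding model is assumed):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (range roughly -1 to 1)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical low-dimensional embeddings; real models produce hundreds
# or thousands of dimensions, but the computation is identical.
emb_cat = np.array([0.1, 0.9, 0.2, 0.4])
emb_dog = np.array([0.2, 0.8, 0.3, 0.5])
emb_car = np.array([0.9, 0.1, 0.8, 0.1])

# Semantically closer texts should yield a higher score.
print(cosine_similarity(emb_cat, emb_dog))
print(cosine_similarity(emb_cat, emb_car))
```

The same scoring function underlies retrieval and clustering: candidates are ranked by their cosine similarity to a query embedding.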
Seeing a few comments asking what can be done about data embedding with unspendable UTXOs (an issue which ...
The cost of computing and storing embeddings only grows as LLMs scale to larger embedding dimensions and context lengths.
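To make the scaling concrete, a rough back-of-the-envelope sketch of how storage cost grows with embedding dimension; the corpus size and dimensions below are hypothetical examples, not figures from the text:

```python
def embedding_storage_gib(num_vectors: int, dim: int, bytes_per_float: int = 4) -> float:
    """Approximate storage for a dense float32 embedding index, in GiB."""
    return num_vectors * dim * bytes_per_float / 2**30

# Hypothetical corpus of 10 million text chunks: storage grows linearly
# with the embedding dimension.
for dim in (768, 1536, 4096):
    print(f"dim={dim}: {embedding_storage_gib(10_000_000, dim):.1f} GiB")
```

Longer context lengths compound this further, since documents are typically split into more chunks, multiplying `num_vectors` as well.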
I don’t recall claiming that getting rid of data embedding would require a hardfork. ... I do, however, think that effectively combating data embedding would require significant cuts to Bitcoin ...