
Inspired by the success of Large Language Models (LLMs), NotaGen adopts pre-training, fine-tuning, and reinforcement learning paradigms (henceforth referred to as the LLM training paradigms). It is pre-trained on 1.6M pieces of music, and then fine-tuned on approximately 9K high-quality classical compositions conditioned on "period-composer-instrumentation" prompts.
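To make the conditioning concrete, here is a minimal sketch of how a "period-composer-instrumentation" prompt might be assembled as a control prefix for generation. The field names, the joined-string format, and the `build_generation_input` helper are illustrative assumptions for this post, not NotaGen's actual interface.

```python
from dataclasses import dataclass

# Illustrative sketch only: the prompt format and helper below are assumptions,
# not NotaGen's real API.

@dataclass
class ConditioningPrompt:
    period: str           # e.g. "Romantic"
    composer: str         # e.g. "Chopin"
    instrumentation: str  # e.g. "Piano"

    def to_text(self) -> str:
        # Join the three fields into a single control prefix, mirroring the
        # "period-composer-instrumentation" conditioning described above.
        return f"{self.period}-{self.composer}-{self.instrumentation}"


def build_generation_input(prompt: ConditioningPrompt) -> str:
    # The fine-tuned model would continue this prefix with symbolic music
    # tokens (e.g. a score in a text-based notation), generated autoregressively.
    return prompt.to_text() + "\n"


if __name__ == "__main__":
    prompt = ConditioningPrompt(period="Romantic", composer="Chopin", instrumentation="Piano")
    print(build_generation_input(prompt))
```

Under this kind of setup, the same fine-tuned model can be steered toward different styles simply by swapping the prefix fields, which is what makes the prompt-conditioned fine-tuning stage useful.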
A performance of one of the composed pieces, Waltz in F-sharp Minor: