
Some authors have it backwards. They believe that AI companies should pay them for training AIs on their books. But I predict in a very short while, authors will be paying AI companies to ensure that their books are included in the education and training of AIs. The authors (and their publishers) will pay in order to have influence on the answers and services the AIs provide. If your work is not known and appreciated by the AIs, it will be essentially unknown.
If AIs become the arbiters of truth, and if what they trained on matters, then I want my ideas and creative work to be paramount in what they see. I would very much like my books to be the textbooks for AI. What author would not? I would. I want my influence to extend to the billions of people coming to the AIs every day, and I might even be willing to pay for that, or at least to do what I can to facilitate the ingestion of my work into the AI minds.
If a book can be more easily parsed by an AI, its influence will be greater. Therefore, many books will be written and formatted with an eye on their main audience. Writing for AIs will become a skill like any other, and something you can get better at. Authors could actively seek to optimize their work for AI ingestion, perhaps even collaborating with AI companies to ensure their content is properly understood and integrated. The concept of “AI-friendly” writing, with clear structures, explicit arguments, and well-defined concepts, will gain prominence, and of course will be assisted by AI.
I tend to agree. Especially because any attempt to enforce a rule that everyone avoid training AI on copyrighted public material is doomed to fail. But also because most great work, and who doesn't want to produce great work, is designed for influence and impact rather than for seeking rent.
Even for authors seeking rent, I imagine that having their work in the LLMs' knowledge base is worth much more than the pennies they could charge for using it as training data.
For example, I recently asked ChatGPT to summarize book 1 of the Wheel of Time series because I had read it a long time ago but forgot most of it. Getting the summary and being able to ask a bunch of questions to bring myself up to speed made me more excited to read book 2.
reply
50 sats \ 2 replies \ @optimism 22h
I just worry about all the poisoning. If you wanna run a real psyop in 2025, you give OpenAI some cash to burn in exchange for some improved weights in gpt-n
reply
100 sats \ 1 reply \ @Scoresby 22h
many thumbs must already be on the scales.
reply
Which is why my hope is this:
  1. Bigger = better is building a shitton of datacenters
  2. Bigger = better will turn out to be a fata morgana
  3. There will be a lot of unused compute
  4. Compute will be cheap
  5. We can start training honest, truly open models, for cheap
reply
50 sats \ 0 replies \ @OT 23h
Artist: You owe me money for the copyright of my work.
LLM: Actually, you need to pay me to include your work in my model.
reply
If authors end up paying to influence what AIs “learn,” we’re facing a new form of algorithmic censorship disguised as marketing. It’s not just about visibility; it’s about narrative control. Who decides which ideas deserve to be part of the digital canon? And if that canon is bought, what’s left of intellectual meritocracy?
reply