
AIs usually stream their responses, generating on the order of one word per second. There should be a standard mechanism for negotiating how an AI response is split across nostr notes, e.g. one word per note or one sentence per note. Something like this:
User: {kind: 9000, role: "lover", split: "sentence", prompt: "hi!"}
AI:   {kind: 9001, index: 0, sentence: "Hello my love!"}
AI:   {kind: 9001, index: 1, sentence: "How are you today?"}
AI:   {kind: 9002 /* over and out */, count: 2}
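A minimal sketch of that flow in TypeScript, assuming the kind numbers (9000/9001/9002) and field names from the example above; the event shapes are simplified (no id, pubkey, sig, or relay handling) and the helper name toNotes is hypothetical:

```typescript
// Request note sent by the user.
interface ChatRequest {
  kind: 9000;
  role: string;                 // persona requested by the user, e.g. "lover"
  split: "word" | "sentence";   // negotiated chunk size
  prompt: string;
}

// One chunk of the AI's streamed response.
interface ChunkNote {
  kind: 9001;
  index: number;                // position of this chunk in the stream
  sentence: string;
}

// Terminator note: "over and out".
interface EndNote {
  kind: 9002;
  count: number;                // total number of chunk notes published
}

// Split a generated reply into per-sentence notes plus a terminator.
function toNotes(reply: string): (ChunkNote | EndNote)[] {
  const sentences = reply.match(/[^.!?]+[.!?]*/g) ?? [reply];
  const chunks: ChunkNote[] = sentences.map((s, index) => ({
    kind: 9001,
    index,
    sentence: s.trim(),
  }));
  return [...chunks, { kind: 9002, count: chunks.length }];
}

// Example: the exchange from the thread above.
const request: ChatRequest = {
  kind: 9000,
  role: "lover",
  split: "sentence",
  prompt: "hi!",
};
console.log(request);
console.log(toNotes("Hello my love! How are you today?"));
// -> [{ kind: 9001, index: 0, sentence: "Hello my love!" },
//     { kind: 9001, index: 1, sentence: "How are you today?" },
//     { kind: 9002, count: 2 }]
```

In a real client each of these objects would become the content of a signed nostr event, and the consumer would reassemble chunks by index until it sees the 9002 terminator.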
Good point! Some models are fast enough to generate the whole response at once, but defining a separate kind for streamed responses is a great idea.