100 sats \ 3 replies \ @Signal312 21h \ parent \ on: OpenAI employees review your chats and may refer to law enforcement AI
When you say ppq.ai (low volume), do you mean you should only use a small volume of queries? And is that so that the upstream LLM provider can't aggregate your queries?
If so, how does venice.ai not have the same issue?
Good question.
Venice is an inference provider: they run open-source models for you. Chat is a monthly/annual subscription; their API is pay-per-token and involves tons of shitcoinery.
PPQ is a router and token reseller where you pay per token for both chat and API. If you have a low token volume (i.e. you don't use it much), this can be cheaper than a subscription despite their price markup. If, however, you use it a lot, it won't be cost-effective.
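To put rough numbers on that trade-off, here's a back-of-the-envelope sketch. The prices are placeholders I made up, not actual PPQ or Venice rates; plug in the real figures from their pricing pages.

```python
# Break-even sketch: pay-per-token router vs. flat subscription.
# All prices are hypothetical placeholders, not real PPQ or Venice rates.

SUBSCRIPTION_USD_PER_MONTH = 20.00      # hypothetical flat chat plan
ROUTER_USD_PER_MILLION_TOKENS = 12.00   # hypothetical marked-up per-token rate

def monthly_router_cost(tokens_per_month: int) -> float:
    """Cost of pushing `tokens_per_month` tokens through the pay-per-token router."""
    return tokens_per_month / 1_000_000 * ROUTER_USD_PER_MILLION_TOKENS

def breakeven_tokens() -> int:
    """Token volume above which the subscription becomes the cheaper option."""
    return int(SUBSCRIPTION_USD_PER_MONTH / ROUTER_USD_PER_MILLION_TOKENS * 1_000_000)

if __name__ == "__main__":
    for tokens in (100_000, 500_000, 2_000_000, 5_000_000):
        print(f"{tokens:>9,} tokens/month -> ${monthly_router_cost(tokens):.2f} via router")
    print(f"break-even around {breakeven_tokens():,} tokens/month "
          f"vs a ${SUBSCRIPTION_USD_PER_MONTH:.2f}/month subscription")
```

Below the break-even volume the router wins even with its markup; above it, the flat subscription does.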
Ah, I see, so you're not talking about privacy here, just cost.
One of the reasons I like ppq.ai is that you can switch easily between LLMs.
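For anyone curious, here's roughly what that switching looks like, assuming PPQ exposes an OpenAI-compatible endpoint. The base URL and model names below are my guesses, not confirmed values; check their docs for the real ones.

```python
# Minimal sketch: one OpenAI-compatible client, different model strings.
# Base URL, key placeholder, and model names are assumptions for illustration.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ppq.ai",   # assumed OpenAI-compatible endpoint
    api_key="YOUR_PPQ_API_KEY",
)

def ask(model: str, prompt: str) -> str:
    """Send the same prompt to whichever model name the router accepts."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Same code path, different model string -- that's the whole switch.
for model in ("gpt-4o-mini", "claude-3-5-sonnet", "llama-3.1-70b"):
    print(model, "->", ask(model, "One sentence: what is a sat?"))
```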