It should come as no surprise to you that ChatGPT and other LLM subscription services may have humans reviewing your chats, but in case you wanted to see the policy...
I saw Brian Roemmele post about this on X and went to look it up for myself.
When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts. If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement. We are currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.
Did users expect anything else?
Indeed, and yet when using one of those interfaces it is very easy to feel like it is a private conversation.
I am sure that even if users have it in mind at the beginning, as they get comfortable talking to the AI they naturally tell it more and more.
What do you use it for?
Just general purpose research and analysis assistance, but I don't like the idea that what I'm doing now (say, researching anti-money laundering laws and coinjoins, or ecash legality) could later be used to get me in trouble when the standards change.
Maybe use ppq.ai (low volume) or venice.ai (higher volume) instead?
When you say ppq.ai (low volume), do you mean you should only use a small volume of queries? And is that so that the source LLM can't aggregate?
If so, how does venice.ai not have the same issue?
Good question.
Venice is an inference provider, so they run open source models for you. Chat is a monthly/annual subscription. Their API is pay-per-token and involves tons of shitcoinery.
PPQ is a router and token reseller where you pay per token on both chat and API. If you have a low volume of tokens (i.e. you don't use it much), this can be, despite their price markup, cheaper than paying for a subscription. If, however, you use it a lot, it won't be cost-effective.
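To make that tradeoff concrete, here's a back-of-the-envelope sketch. The prices are hypothetical; actual markups vary by model and provider:

```python
# Hypothetical numbers: a flat $20/month subscription vs. a router
# charging a marked-up per-token rate. Breakeven is where the
# pay-per-token bill equals the subscription fee.

subscription_usd = 20.0      # flat monthly fee (hypothetical)
router_usd_per_mtok = 10.0   # marked-up price per million tokens (hypothetical)

breakeven_mtok = subscription_usd / router_usd_per_mtok
print(f"Breakeven: {breakeven_mtok:.1f}M tokens/month")
# With these made-up prices, below ~2M tokens/month the router is
# cheaper; above it, the subscription wins.
```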
Ah, I see, so you're not talking about privacy here, just cost.
One of the reasons I like ppq.ai is that you can switch easily between LLMs.
Yes, though privacy is part of it too: neither one does KYC of any kind, so it's easy to be anon on there.
I don't mind ppq, works as advertised. It's expensive though; there's room for competition there.
I used to use Venice (free version) but I found the results weren't as good as Gemini or Chat. But I probably should have given the paid Venice version a chance before going to the paid version of one of the popular LLMs.
It used to be that you could pay for venice pro with sats - not sure if that's still the case.
ppq is anon, makes you pay per query, and incentivizes LN, so you can basically use GPT-5 there anonymously.
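These routers generally expose an OpenAI-compatible API, so switching models is just a string change. A minimal sketch below uses OpenRouter's base URL, since it comes up later in this thread; ppq.ai advertises similar compatibility, but check their docs for the exact endpoint. The API key and model IDs are placeholders:

```python
# Minimal sketch: talk to a pay-per-token router through its
# OpenAI-compatible endpoint. Key and models are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # router endpoint
    api_key="sk-...",                         # placeholder key
)

# Switching models is just a string change, no separate accounts.
for model in ["openai/gpt-4o", "anthropic/claude-3.5-sonnet"]:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Is ecash legal in the EU?"}],
    )
    print(model, "->", resp.choices[0].message.content[:80])
```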
There's also Maple.
Here's my LLM stack (including some self-hosted options; a minimal local-inference sketch follows the list):
- Ollama
- LMStudio
- Venice.ai
- https://openrouter.ai
- Kilo Code
- RCS-CO Prompt Methodology
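For the self-hosted end of that stack, here's a minimal sketch of querying a local Ollama server over its default REST API. It assumes the daemon is listening on its default port and a model has already been pulled with `ollama pull llama3`:

```python
# Minimal sketch: chat with a locally running Ollama server.
# Nothing leaves your machine, so no cloud-side human review.
import json
import urllib.request

payload = {
    "model": "llama3",  # any locally pulled model
    "messages": [{"role": "user", "content": "Summarize AML rules for coinjoins."}],
    "stream": False,    # return one JSON object instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",  # Ollama's default endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["message"]["content"])
```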
I've run some of the Mistral instruct and smaller Ollama models myself, but generally rely on trymaple.ai. Can't say I've had this concern.
Know that you are talking to a very smart but at the same time very snitchy pal.
Good catch on the policy. Privacy’s key - hope you get that Llama running soon!
I have to look into this. I wonder if it also applies to paid users. Also, in the EU there would have to be some kind of loophole around GDPR if this is the case...
It’s one of those tradeoffs: safety guardrails vs user privacy. Self-hosting is definitely the only way to have full control, but most people won’t have the time/skills/resources to maintain it.
for quick tasks I use duck.ai
https://xcancel.com/BrianRoemmele/status/1962527733805912386