
It should come as no surprise to you that ChatGPT and other LLM subscription services may have humans reviewing your chats, but in case you wanted to see the policy...

I saw Brian Roemmele post about this on X and went to look it up for myself.
When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts. If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement. We are currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.

I really need to get my self-hosted Llama running...
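For anyone else eyeing the self-hosted route: a minimal sketch of what talking to a local model looks like, assuming Ollama is serving its default REST API on localhost:11434 (the model name is just an example):

```python
import json
import urllib.request

# Request for Ollama's local chat endpoint. The model runs on your own
# machine, so the conversation never reaches a cloud provider's review pipeline.
payload = {
    "model": "llama3",  # any model pulled beforehand via `ollama pull llama3`
    "stream": False,
    "messages": [{"role": "user", "content": "Explain coinjoin in one paragraph."}],
}
req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# With Ollama running locally, uncomment to actually send the request:
# resp = urllib.request.urlopen(req)
# print(json.loads(resp.read())["message"]["content"])
```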

Did users expect anything else?
reply
Indeed, and yet when using one of those interfaces it is very easy to feel like it's a private conversation.
reply
I am sure that even if users have it in mind at the beginning, as they get comfortable talking to the AI they naturally tell it more and more.
reply
I really need to get my self-hosted Llama running...
What do you use it for?
reply
Just general-purpose research and analysis assistance, but I don't like the idea of what I'm doing now (say, researching anti-money laundering laws and coinjoins or ecash legality) later being used to get me in trouble when the standards change.
reply
Maybe use ppq.ai (low volume) or venice.ai (higher volume) instead?
reply
100 sats \ 3 replies \ @Signal312 10h
When you say ppq.ai (low volume) do you mean you should only use a small volume of queries? And is that so that the source LLM can't aggregate?
If so, how does venice.ai not have the same issue?
reply
Good question.
Venice is an inference provider, so they run open source models for you. Chat is a monthly/annual subscription. Their API is pay-per-token and involves tons of shitcoinery.
PPQ is a router and token reseller where you pay per token on both chat and API. If you have a low volume of tokens (i.e. you don't use it much), this can be cheaper than paying for a subscription despite their price markup. If you use it a lot, though, it won't be cost-effective.
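To make the subscription-vs-per-token tradeoff concrete, a toy break-even calculation (all numbers are made-up placeholders, not PPQ's or anyone's actual rates):

```python
# Toy break-even: flat subscription vs pay-per-token with a reseller markup.
# Every figure below is an illustrative placeholder, not real pricing.
subscription_usd_per_month = 20.0
base_usd_per_million_tokens = 5.0
reseller_markup = 1.5  # 50% markup on top of the base token price

per_token_rate = base_usd_per_million_tokens * reseller_markup / 1_000_000

# Monthly token volume at which pay-per-token stops being cheaper:
break_even_tokens = subscription_usd_per_month / per_token_rate
print(f"break-even at ~{break_even_tokens / 1e6:.1f}M tokens/month")
# prints: break-even at ~2.7M tokens/month
```

Below that volume, pay-per-token wins even with the markup; above it, the subscription does.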
reply
100 sats \ 1 reply \ @Signal312 5h
Ah, I see, so you're not talking about privacy here, just cost.
One of the reasons I like ppq.ai is that you can switch easily between LLMs.
reply
34 sats \ 0 replies \ @optimism 5h
Yes, privacy is a procedure - neither service does KYC of any kind, so it's easy to be anon on there.
I don't mind ppq, works as advertised. It's expensive though; there's room for competition there.
reply
I used to use Venice (free version), but I found the results weren't as good as Gemini or ChatGPT. I probably should have given the paid Venice version a chance before going to the paid version of one of the popular LLMs, though.
reply
It used to be that you could pay for Venice Pro with sats - not sure if that's still the case.
ppq makes you pay per query, is anon, and incentivizes LN, so you can basically use GPT-5 there anonymously, paying per query.
reply
There's Maple
reply
I've run some of the Mistral instruct and smaller Ollama models myself, but generally rely on trymaple.ai. Can't say I've had this concern.
reply
204 sats \ 0 replies \ @anon 23h
Here's my LLM stack (including some self-hosted options):
Ollama is an open-source AI platform and framework designed to run and manage large language models (LLMs) directly on users' local machines, rather than relying on cloud-based services.
LM Studio is a free, local AI platform that enables users to run open-source large language models (LLMs) directly on their machines without relying on cloud services or incurring usage fees.
Venice.ai is a privacy-first, decentralized generative AI platform that offers text, image, and code generation using leading open-source AI models. It emphasizes user privacy by not storing any user data or conversations on centralized servers and employs end-to-end encryption and decentralized computing to ensure secure, anonymous AI interactions. Venice.ai also commits to free speech and uncensored AI, providing unfiltered responses without content moderation or censorship typically seen in mainstream AI platforms.
OpenRouter is an AI platform that acts as a centralized hub providing unified access to hundreds of large language models (LLMs) and AI providers like OpenAI, Anthropic, Claude, Gemini, and many more through a single API. It simplifies AI integration by allowing developers and businesses to switch between different AI models without needing to change their code, optimizing for factors such as cost, performance, and availability.
KiloCode is an open-source AI-powered coding assistant designed primarily as a Visual Studio Code (VS Code) extension to enhance software development productivity. It helps developers by generating code from natural language descriptions, automating repetitive tasks, debugging issues, refactoring existing code, and providing intelligent context-aware suggestions across multiple programming languages.
The "RCS-CO Prompt Methodology" is a structured prompt engineering framework designed to improve interactions with large language models (LLMs). It breaks down complex user requests into simple, repeatable, and clearly defined steps to ensure the AI produces accurate, context-appropriate, and tailored responses. This methodology helps transform AI from a casual toy into a professional, reliable tool by guiding the prompt creation process systematically.
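To illustrate the "single API" point about OpenRouter above: it exposes an OpenAI-compatible chat endpoint, so switching models is just changing a string. A minimal sketch (the API key and model IDs below are placeholders):

```python
import json
import urllib.request

# OpenRouter routes many providers through one OpenAI-compatible endpoint;
# swapping models is just a different "model" string, no code changes.
def build_request(model: str, prompt: str,
                  api_key: str = "sk-or-PLACEHOLDER") -> urllib.request.Request:
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Same code path, different providers -- only the model string changes:
req_a = build_request("openai/gpt-4o", "hello")
req_b = build_request("anthropic/claude-3.5-sonnet", "hello")
```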
reply
Know that you are talking to a very smart but at the same time very snitchy pal.
reply
Good catch on the policy. Privacy’s key - hope you get that Llama running soon!
reply
I have to look into this. I wonder if it also applies to paid users. Also, if this is the case in the EU, it would seem to need some kind of loophole around GDPR...
reply
0 sats \ 0 replies \ @brave 17h
It’s one of those tradeoffs: safety guardrails vs user privacy. Self-hosting is definitely the only way to have full control, but most people won’t have the time/skills/resources to maintain it.
reply
for quick tasks I use duck.ai
reply