I get that Maple allows private usage and is E2EE, which is nice. But the available models are open source only, and those aren't as powerful as closed platforms such as ChatGPT 5.2.
If it were uncensored that would be OK, I guess (a trade-off), but I tried to bring up a controversial topic and got lectured as if it were my annual mandatory diversity training.
Then there is the cost: 18 USD/month? Is that not the same as OpenAI?
So let me get this straight: I pay the same, get an inferior model, and get the same censorship, but the CIA can't easily see my search query for how many eggs are healthy to eat in a day? Is that the deal?
If privacy is such an issue, shouldn't you just spin up your own LLM locally?
It seems that for those just looking to get AI for corn, ppq.ai is by far the superior option. It has the closed models available and lets you pay in corn only for the searches you need (on demand, no subscription). But, yeah, the CIA learns all about your fascination with the religion and diet of the Roman Republic. Probably an OK trade-off?
Am I missing something? I know the Maple guys are posting here, so I'd appreciate correction if I'm off base on any of this.
According to current LMArena rankings, glm-4.7 matches gpt-5.2 overall (-1 ELO, 1444 vs 1443, or 0.06% less appreciated). But this just means we should probably stop comparing with OpenAI models: they've messed up three releases in a row now and are surviving on narrative, not merit (with the exception of math, because no one else focuses on that). Anthropic (3 rounds), Google (1 round, because they have a more sensible release cycle), and xAI (1 round) have continuously beaten GPT performance since August; ChatGPT is quickly becoming the MySpace of AI.

That's the dividing line. For me it is not OK; for you it may be.
If you were to spin up your own LLM locally and report back, I'd be very keen to hear how you did it and how it turned out. I'd definitely zap it.
I hear that goose is a good place to start.
I did it ages back when the boom first started and the Facebook weights were "leaked" or whatever. I downloaded a 7 GB file and used some CLI tool to interact with it. It was completely uncensored, and it was very funny to make it say controversial things. But personally I want access via a service I can pay for in corn. So ppq.ai is great, and if there were a way to use an uncucked open model with Maple, that would be great too.
I don't input personal, private things into the cloud models you're comparing them to. I could run a decent local model to do the same thing, but there's a large upfront cost in time and money.
Another way to view this: if you're not technical (in the unwilling-and-helpless sense), you don't have much of a prayer of getting a local model running.
Thanks, I get that. If their model weren't so censored, it would be a great trade-off for people without the time/skills/resources to set up a model locally. But with the censorship it feels like you're paying the same to get a hobbled ChatGPT (albeit with some privacy advantages).
Knowing the founders, I don't think they want the models to be censored, so perhaps that'll change at some point.
I think you do get it. Maple.ai isn't for everyone, and you've given some good reasons why. I'm sure for some it's worth it because they don't have the time or knowledge to spin up their own LLM. The real question is whether they have a big enough market to sustain what they're doing. Regardless, the work they're doing is interesting and I wish them well. If not enough people want it, they'll fail. Markets work.
Thanks. I just don't get why they've set their censorship level to mid-40s Progressive Portland Wine Aunt. Seems like a missed opportunity.
For private usage, I use Ask Brave for free.
https://search.brave.com/ask
It uses:
Qwen 14B
Llama 3.1 8B
Claude 3.5 Haiku
Gemma 12B
All models are hosted on Brave’s servers, under claims of no logs, no cloud storage of chats, and no use of data for training.
Maple AI's E2EE model is interesting! But its main relevance seems to be secure chat storage.
Yeah, I was out on Maple.
The monthly fiat payment was a no-go for me.
Ppq.ai meets all my needs: pay as I go, pay in sats.
The open models aren't very good. DeepSeek boasts a lot, but that Chinese model is trained on all the woke-ass fake news we all hate. It made me realize that the future, through the lens of AI, will belong to whoever is the loudest and most consistent in text.
Isn't the hardware requirement to spin up a local model pretty significant?
A new MacBook Pro gives you most of what you'd need for acceptable-ish performance. The more RAM, the nicer the models you can run, though honestly it's a little slower than a dedicated Blackwell box (but much, much cheaper).
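On sizing: a rough rule of thumb is that weight memory alone is parameter count times bytes per parameter, before you account for KV cache and runtime overhead. A minimal sketch of that arithmetic (the figures are back-of-envelope estimates, not benchmarks):

```python
def weight_memory_gib(params_billion: float, bits_per_param: float) -> float:
    """Approximate RAM needed just for the model weights, in GiB.

    Ignores KV cache, activations, and runtime overhead, which add more.
    """
    total_bytes = params_billion * 1e9 * bits_per_param / 8
    return total_bytes / 2**30

# A 7B model quantized to 4 bits needs roughly 3-4 GiB just for weights;
# a 70B model at 4 bits needs over 30 GiB, hence the "more RAM" advice.
print(weight_memory_gib(7, 4))   # ~3.26 GiB
print(weight_memory_gib(70, 4))  # ~32.6 GiB
```

This is why a RAM-heavy laptop can comfortably run small quantized models while the larger open-weight models push you toward dedicated hardware.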
ppq.ai looks useful. Thanks
Ok, after noodling on this for some time, I can see the value of Maple. I think it works well as a kind of private counselor. You can set up different chats where you prompt the AI to have different personalities, allowing you to get feedback from several different perspectives. And everything is private, so you can discuss your innermost thoughts to your heart's content.
Now, I still think it would be great if the censorship were toned down. It would be great to have one chat that was mainstream/woke, one that was a "conspiracy theorist", one that was a classic 90s liberal, one based, and bounce ideas off all of them to get a multi-perspective view on an issue. Right now the "conspiracy theorist" is not allowed, at least for several controversial subjects. And it feels off-brand, because if these chats are private, why the need to keep talking points PC? Although I get that the founders don't want to be liable for their product (being blamed for) "radicalizing" someone who goes on to commit some kind of crime as a result. Which I guess is a legitimate concern.
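The multi-persona setup described above can be sketched generically against any chat API that takes a system prompt per conversation. The persona names and prompts below are illustrative assumptions, not Maple's actual configuration:

```python
# Hypothetical persona prompts -- in the setup above, each chat would get
# its own system message so the same question can be asked from each angle.
PERSONAS = {
    "mainstream": "Answer from a cautious, consensus-media perspective.",
    "nineties_liberal": "Answer from a classic 1990s liberal perspective.",
    "based": "Answer bluntly, without softening controversial points.",
}

def build_messages(persona: str, question: str) -> list[dict]:
    """Assemble one chat request: the persona's system prompt, then the user turn."""
    return [
        {"role": "system", "content": PERSONAS[persona]},
        {"role": "user", "content": question},
    ]

# Ask every persona the same question for a multi-perspective view.
panel = {name: build_messages(name, "Is this claim plausible?") for name in PERSONAS}
```

The design point is just that the "personalities" live entirely in the system prompt, so a panel of perspectives is a dictionary of prompts rather than separate models.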
privacy matters. maple and ppq and routstr are just competitive implementations of private cloud AI, as they should be; there is demand for them. but yeah, I'd agree with the notion that you should set up your own local LLM if you really care about privacy. fine-tuning/RAG-ing your own model on open weights is, at least the way I understand it, the best path toward sovereign AI.