Hey all, I'm really excited to introduce PayPerQ (ppq.ai), a vanilla ChatGPT4 experience that operates on a pay-per-query model via lightning payments.
As I see it, the primary use cases are:
  1. Making GPT4 available to users who may not want to pay for a full $20 ChatGPT Plus subscription,
  2. Making GPT4 available to global users who may find it difficult to connect a VISA/Mastercard to OpenAI.
As an example, you could run 10-20 queries a day through ppq.ai and still only spend ~$10 per month, without the burden of linking a credit card!
Overall, I believe these two use cases are strong enough that this bot could become the default GPT4 experience for many Bitcoin-adjacent developers and professionals around the world.
In terms of how the payments actually work, users have two options:
  1. Deposit a lump sum credit and draw on that credit over time,
  2. Use realtime streaming payments directly from your Alby wallet (support for all NWC wallets coming soon); there's a rough sketch of this flow below.
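To give a rough idea of what option 2 looks like in the browser, here's a minimal sketch using the WebLN interface that Alby exposes as window.webln. This is illustrative only, not the production code, and the invoice string would come from the ppq.ai backend:

```typescript
// Sketch of option 2: paying a per-query Lightning invoice from the browser
// through the WebLN provider that Alby injects as window.webln.
// Illustrative only; the BOLT11 invoice would come from the ppq.ai backend.

declare global {
  interface Window {
    webln?: {
      enable(): Promise<void>;
      sendPayment(paymentRequest: string): Promise<{ preimage: string }>;
    };
  }
}

async function payForQuery(bolt11Invoice: string): Promise<string> {
  if (!window.webln) {
    throw new Error("No WebLN provider found (is the Alby extension installed?)");
  }
  await window.webln.enable();                      // ask the wallet for permission
  const { preimage } = await window.webln.sendPayment(bolt11Invoice); // pay the invoice
  return preimage;                                  // proof of payment
}

export {}; // make this file a module so the global augmentation is valid
```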
I know that PayPerQ currently is not as feature-rich as a ChatGPT Plus subscription, but as a developer, 95% of my usage is just regular GPT4. For that reason, I've been focusing on making the GPT4 experience as polished as possible. Good layout and readability, consistent functionality, and a quick and easy payment flow have been my priorities so far.
If you are a regular user of GPT4 and you take some time out of your day to try this out, please come back to me and answer this question:
"What is stopping ppq.ai from becoming your default ChatGPT experience?"
If you could let me know your answer to this it would greatly help me in making improvements.
Transparency:
ppq.ai does not know your name or email.
ppq.ai does know your IP address and user agent string.
ppq.ai does not store the content of your queries or the content of the output from OpenAI. Your content is sent through my server and on to OpenAI, but it is not stored on my server. All of your conversation content is stored in your own browser, in localStorage.
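For the curious, the browser-side storage is along these lines. This is a simplified sketch; the key name and message shape are illustrative, not the exact schema ppq.ai uses:

```typescript
// Sketch: keeping conversation history entirely in the browser's localStorage.
// The key name and message shape are illustrative, not ppq.ai's actual schema.

interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

const STORAGE_KEY = "ppq-conversation"; // hypothetical key

function loadConversation(): ChatMessage[] {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as ChatMessage[]) : [];
}

function saveMessage(message: ChatMessage): void {
  const history = loadConversation();
  history.push(message);
  localStorage.setItem(STORAGE_KEY, JSON.stringify(history));
}
```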
ppq.ai does store the number of input tokens your query has, as well as the number of output tokens that OpenAI generated. I do this so that I can configure pricing appropriately.
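As a rough illustration of how token counts map to cost, assuming GPT4's list prices of about $0.03 per 1K input tokens and $0.06 per 1K output tokens (an assumption for the example, not my actual rate card):

```typescript
// Sketch: pricing a query from token counts alone, with no content stored.
// The per-1K rates are GPT4 list prices used as an assumption, not ppq.ai's rate card.
const INPUT_USD_PER_1K = 0.03;
const OUTPUT_USD_PER_1K = 0.06;

function queryCostUsd(inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1000) * INPUT_USD_PER_1K + (outputTokens / 1000) * OUTPUT_USD_PER_1K;
}

// Example: ~1,000 tokens in and ~1,000 tokens out is roughly $0.09,
// in line with the 8-9 cents per GPT4 query mentioned in the comments below.
console.log(queryCostUsd(1000, 1000)); // 0.09
```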
ppq.ai has to run every query through OpenAI's moderation API. If I didn't do this, a user querying for pedo stuff, bomb-making stuff, etc. could cause OpenAI to shut off ppq.ai's entire account, ruining it for everyone.
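For reference, the moderation check is essentially a pre-flight call like this (a sketch using the official openai Node SDK, not the exact server code):

```typescript
// Sketch: screening a query with OpenAI's moderation endpoint before forwarding it to GPT4.
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function isAllowed(query: string): Promise<boolean> {
  const moderation = await openai.moderations.create({ input: query });
  return !moderation.results[0].flagged; // reject anything the endpoint flags
}
```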
Request for help:
I would like to build this into a business, but I am not a seasoned developer by trade. If you are also passionate about this project and experienced in typescript, nextjs, and express, please reach out to me!
My thoughts on ppq.ai’s impact on lightning adoption:
While the premise of this app may seem basic, I believe that, if executed well, it may become the first major app to really spur lightning adoption beyond hobbyists. Why? Because it's actually solving real problems by saving people money and giving them access they might not otherwise have.
PayPerQ represents one of the first practical pay-per-use models out there and may help us all get away from the many monthly subscriptions we have for data storage, entertainment, news content, and more.

Please try it out and give feedback! (ppq.ai)

Nice work. Any plans / desires to reduce the reliance on OpenAI?
reply
Definitely. I plan to open up to many models, both proprietary and open source. For now, I wanted to focus on GPT4 because it is the highest quality from a working dev's perspective.
reply
Could you expound on why you think ChatGPT 4 is superior to Gemini (used to be Bard) and Microsoft Copilot?
reply
I don't really have a great answer. I haven't extensively tested them out but it was just my general experience and understanding. Would you use this more if the other models were available?
reply
This is awesome. I have long thought that lightning would usher in an era of streaming software as a service and replace license fees in some cases. This is essentially the AI agent version of that. I don't sub to chat gpt 4 but I will use this.
reply
added $1 credit and prompted a response. How do I know how much I paid for that response?
reply
Click on the account ID button. Sorry it doesn't look like a button yet.
reply
Gotcha. thanks.
reply
101 sats \ 0 replies \ @kr 28 Feb
love to see it! ⚡️
reply
Congrats! It's very nice. Maybe LN or Nostr login in the future?
reply
Could you explain the advantage of this though? I'm not really understanding what the user gains by using this method?
Currently the users don't even have to log into anything at all.
reply
This is awesome! Trying it out asap
reply
Let me know how it goes!
reply
Cool idea.
reply
Thanks for sharing this. I've bookmarked it
reply
0 sats \ 1 reply \ @o 4 Mar
Can you provide it as an API Endpoint please?
reply
That's an interesting thought. If you want to message me your use case I'd be happy to ideate on it a bit more. @mattius459 on telegram.
reply
I would def use but gpt turbo isn't as good as gpt 4. Please add a gpt 4 option
reply
I've added the regular GPT4 option. Let me know how it goes!
reply
Really? I hadn't heard that. Yea adding GPT4 is pretty easy but quite expensive for the user. You'd be willing to pay 8-9 cents per query?
reply
100 sats \ 1 reply \ @jonk 5 Mar
I would love to have the option to use the expensive option on demand. Maybe default to turbo and add the expensive alternative with price etc?
reply
Hey there, I've added it! Let me know how it goes!
reply
ppq.ai has to run every query through openAI’s moderation API
So it's woke? Hopefully that can change soon
reply
Yea unfortunately. Best we can do is support more open models when we can and let the market take care of it. For now though, GPT4 is superior for most work related tasks. You don't really see the woke much there.
reply
As an example, you could run 10-20 queries a day through ppq.ai and still only spend ~$10 per month, without the burden of linking a credit card!
I really like the idea but man, that is ludicrous pricing. What is that? Like 1000x the OpenAI pricing?
reply
I think you must be misunderstanding somewhere. Some guesses as to your misunderstanding are:
  • This is a GPT4 bot, not 3.5.
  • The default invoice shown on the website buys you MANY queries, not just one. Average query should be about 2-3 cents.
I honestly don't know my margins yet, but I think I'm pricing at around cost right now. So again, I think you must be misunderstanding something.
reply
I'm a fan. Paid.
The warning "saving to local storage" is alarming right after I made my payment
How do I check my account balance..?
Anyhow, I'll give it a whirl and consider cancelling my ChatGPT sub in favor of PPQ
reply
Click on account ID to check your balance. Sorry, it doesn't look like a button yet.
reply
Thanks.
After several queries, based on my use of ChatGPT I think this would cut my monthly spend by 75%
Give me a way to save my credits pls 🙏
reply
Hey Evan, your credit id is now stored in the payment invoice and you can redeem it using a new modal that I created. Please let me know how it works.
reply
Woooo good to hear!
Can you explain your request a bit more?
The credit will persist in your localstorage for when you come next time... unless your browser deletes it.
Are you saying you want to be able to save your credit ID in case your browser deletes your localstorage so that you can come back and input it later to get the balance back?
reply
deleted by author
reply
0 sats \ 1 reply \ @OT 28 Feb
That's nice that you don't need to sign up. I'd imagine if it gets traction openai would try to shut that down though.
reply
That's nice that you don't need to sign up. I'd imagine if it gets traction openai would try to shut that down though.
I too am worried about that, but I read through their ToS and they encourage accounts without requiring them. Every query on my bot goes through their moderation API, so I don't see what could make them unhappy.
reply
Nice. How do I view my credit balance?
reply
Click on account ID on the left
reply
Thanks! It would be more seamless if the balance was always shown down there.
reply
Yea, I'm deliberately not putting the balance right on the main page, as I don't want cost to be on people's minds during their general day-to-day work. But I will make the account balance page much more obviously clickable so people understand where it is and don't get lost like you did.
reply
A real bitcoiner would open source this... especially if you forked a project...
reply
Good work!!!
I'm building the same thing!!
This is the future.
reply
Perhaps join forces? :)
reply
❤️‍🔥 that'd be cool. I am also not a dev but I code Python. Did you fork ollama-ui?
I'm using Streamlit. Right now it's a personal project to allow my un-googled GrapheneOS phone to have an AI app. Works very well. Just added LN invoices so I can give accounts to my friends and they can essentially reimburse me for their API use. Runs on my own server so I can run models for free locally. It likely won't scale past a dozen users tho... esp with running local models. At least how it's architected now.
reply
This is pointless. You can already self-host your own open source LLMs on your own machine using LM Studio.
reply
I've dabbled with open source models, but have yet to see one that exceeds the quality of ChatGPT3.5. But I get that things are changing fast. Am I wrong?
reply
100 sats \ 1 reply \ @doofus 28 Feb
I agree and unless you have a beast computer, the local LLMs are so slow
reply
I'm using Mistral 7B 8-bit and it's really fast, my PC is average. The quality might not exceed ChatGPT3.5, but it's pretty decent if you ask me. For most people it's enough, and it's free and offline.
reply
You can't self-host GPT-4. You can't do pay-for-what-you-use without KYC.
reply
Mistral 7B 8-bit is pretty decent for most prompts. Try it out and let me know what you think. Meta is also open-sourcing their Llama models.
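If you want a low-effort way to try it, LM Studio's local server speaks the OpenAI API, so a query is just a fetch against localhost. The port below is LM Studio's default and the model name is a placeholder for whatever you have loaded:

```typescript
// Sketch: querying a locally served Mistral 7B through LM Studio's
// OpenAI-compatible local server (default port 1234).
// "mistral-7b-instruct" is a placeholder for whichever model is loaded.

async function askLocalMistral(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "mistral-7b-instruct",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // the model's reply text
}
```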
reply
shun the non-believer!
🦄 shuunnnnnnn
reply
Yea, this is pretty much how I made it!
reply
Made me laugh. Almost zapped. Saw it was darth. Go away darth.
reply