
Hey guys, I made an intro post a couple of months ago, but I just wanted to reintroduce the project I've been working on and share the progress we've made over the last few months:
To start, PayPerQ (ppq.ai) is a "premium" AI Chatbot interface which operates on a pay-per-usage model via lightning payments:
You simply ask a question, make a deposit of credits, and get answers. $0.25 will currently buy you about 10 queries from GPT-4 Turbo, for example.
There's a nice handful of text-generation models to choose from, including premium closed-source models like GPT-4, GPT-4 Turbo, and Claude 3 Opus, Sonnet, and Haiku, as well as some really good and cheap open-source(-ish) models like Mixtral 8x7B and Meta Llama 3.
In addition to text generation, we now have image generation via DALL·E 2 and 3 (Stable Diffusion and others are coming soon!)
We also have GPT Vision now, which lets you drag and drop images into GPT-4 Turbo and have it interpret them for you. This is really handy to have sometimes!
All of your account usage is available to see in the "Account Activity" section:
And you even have light and dark mode, as well as the ability to set your context window in the settings:
PayPerQ is "accountless", and you don't have to give your name, email, or connect a credit card to use it!
Overall, I feel PPQ is at a level now where it is just as good as or better than having a subscription, and in 95% of cases it ends up being cheaper than a subscription as well. This is one of the first products where lightning can actually save someone money, and thus I feel it can start drawing in people who were on the fence about lightning before. In addition to savings, one of our larger user personas so far is folks who live in parts of the world where connecting a bank card is very difficult due to capital controls and other banking infrastructure problems. We are solving a big problem for them!
We've got 3 people working on this now and progress is only accelerating. Please try it out, tell your friends, and give us feedback. I feel this can become one of the most popular daily use cases of lightning to date and can push the ecosystem forward in a big way.
One last thing: we've created a Nostr account! Please follow us at npub16g4umvwj2pduqc8kt2rv6heq2vhvtulyrsr2a20d4suldwnkl4hquekv4h
Cool use of Lightning, nice work!
reply
Thank you!
reply
Great service, the results are pretty snappy.
reply
200 sats \ 1 reply \ @nullama 30 Apr
I like this.
Services like this are great, thanks for making it.
reply
Thanks for the nice comment. :)
reply
120 sats \ 1 reply \ @Signal312 1 May
I absolutely love this tool! Awesome work. I've been looking for something like this, account-less, pay with sats, looks really useful.
I'm newish to this area, but very interested, and it wasn't hard to use. It seems quite solid. I have a couple comments.
  • I would make the URL extremely clear. I think a lot of people, when they see PPQ.AI, don't think it's a link because it doesn't end in .com. Make it super clear; say something like "Click here to go to https://ppq.ai/". When I first looked at your other stacker.news post, I didn't even see the URL after looking for it, so I searched online for payperq, and it showed up.
  • At the main screen, what is "MODEL" really? It's not clear unless you know what you're doing. Why choose one over another? Perhaps put it in an FAQ? For instance, is one of the models really politically correct and the others less so? Is one for creating images?
  • What are the chances that you can put some of the other models in here? Like for instance, Alex Epstein created https://energytalkingpoints.com/. I've heard that there's a model trained on his writings as a base, so you can ask a question, and get the answer that Alex Epstein would have given. Can you include these types of models?
  • You can create a bunch of New Chats (I wasn't quite sure what they were, so I did). But then you can't delete them. The garbage can, for delete, doesn't seem to work.
  • The edit for the new chat doesn't work right. I can edit the title, but then when I click away from it, it still says "new conversation". But then when I click on it again, it gives me the edited title.
  • In the FAQ you say "Can I give PayPerQ my own prompts?" Then you have some info there. Maybe make more clear about what the "prompt" is, and what the "question" is. Maybe define them separately. I'm new to this, and it's confusing. Seems like "prompt" is specific directions about the tone/guidelines about how you want it to answer (like "make it concise"), vs the actual question you're asking. But from what I've read, the word prompt usually refers to the question itself?
  • When creating a prompt, you have some text for "prompt content", with the "use {{}} to denote a variable". That's confusing. Maybe a have a link to "see examples" on a new page and give a bunch of good ones.
  • In a chat window that already has some text in it, it looks like it doesn't auto-scroll down so you can see what's being generated as it's being written. The only clue that something is happening is that the "stop generating" button shows up. It's confusing; it should scroll so you can see the latest that's being generated, as it's being generated. (Note: this does not appear to happen all the time. Just now I did another question, and it did show as it was being generated. But then I created an image, and after the initial image, the new images didn't show until I scrolled down.)
  • I did a bunch of queries, then clicked on Account Info. It shows "-44". So, I went over what I initially deposited, is that correct? You don't stop people from querying, once they have a negative account balance?
reply
These are great, great notes, told from the perspective of my target customer, the "AI newbie".
Regarding the Alex Epstein idea: yes, this is very possible and something we are figuring out how best to implement.
Thanks for the bug and UX reports about new chats, deletion, etc. I had no idea about those.
Prompts are indeed confusing! An examples page would be great; I just need to get to it. I'd first like to build in some detection for when people are actually trying to use the feature. You are the first who has even bothered to talk about it.
Yes, we do allow you to go negative, but only for one query. After that you will need to pay up again! We will probably refine it in the future so that you can't go negative at all. It was just an easy way to implement things, because the cost of each query is so variable that we don't really know whether your next query will push you negative.
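The overdraft policy described here can be sketched in a few lines, purely as an illustration (the function names and sat amounts below are hypothetical, not PayPerQ's actual code):

```python
# Hypothetical sketch of the policy above: a query runs only if the
# balance is non-negative *before* it starts, so one variable-cost
# query may drive the balance below zero, but the next one is blocked
# until the user deposits again.

def can_run_query(balance_sats: int) -> bool:
    """Allow a query only if the user hasn't already gone negative."""
    return balance_sats >= 0

def settle_query(balance_sats: int, cost_sats: int) -> int:
    """Deduct the query's cost, which is only known after it runs."""
    return balance_sats - cost_sats

# Example: 3 sats of credit, then a 5-sat query -> balance lands at -2
# and further queries are refused until the next top-up.
balance = 3
assert can_run_query(balance)
balance = settle_query(balance, 5)
assert not can_run_query(balance)
```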
Thank you again for these amazing and passionate notes! Please let us know if/when you have other feedback. Follow me on twitter @mattahlborg as that is where I'm most active if you want to connect further!
reply
reply
Yea, this is pretty much how I built the website! :D
reply
reply
I appreciated your article breaking down the differences between lightning wallets:
reply
100 sats \ 0 replies \ @Tony 30 Apr
You’ll like this one as well then: https://21ideas.org/en/lightning-wallets/
reply
That's amazing stuff!
I love that it's 'accountless', as you mentioned.
Best of luck for your project. Keep us informed at least monthly.
reply
Great job!
reply
100 sats \ 1 reply \ @k00b 30 Apr
I like the model selection a lot.
I'd recommend unifying the pricing across models and queries as much as you can. The anxiety around price variability, lack of control over the model's response, and the choice the prompter has to optimize it creates a lot of friction. Friction isn't always bad, but I don't think you want it here.
I'm not sure what the best way to do this would be, but maybe just charge people as if it were on the most expensive model with the average number of tokens. That way I'd know exactly what it costs me every time I hit 'send'.
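That worst-case quote could be computed along these lines (a toy sketch; the per-model rates and the average token count are invented placeholders, not real prices):

```python
# Price every query as if it ran on the most expensive model with an
# average-length response, so the quote is fixed before you hit 'send'.
# All numbers below are made-up examples.

RATES_USD_PER_M_TOKENS = {   # hypothetical $/1M output tokens
    "gpt4-turbo": 30.00,
    "claude3-opus": 75.00,
    "llama3-70b": 0.27,
}
AVG_TOKENS_PER_QUERY = 1_000  # assumed average response length

def flat_quote_usd() -> float:
    """Fixed per-query price based on the worst-case model rate."""
    worst_rate = max(RATES_USD_PER_M_TOKENS.values())
    return worst_rate * AVG_TOKENS_PER_QUERY / 1_000_000

print(f"flat price per query: ${flat_quote_usd():.4f}")  # $0.0750
```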
reply
Thanks! The model selection is awesome but definitely getting to be overwhelming for the normal user, and we do need to convey pricing better somehow.
I've thought a lot about setting a fixed price but it gets tricky as then some power users start running queries which demand a lot of compute, and that pushes the fixed price even higher. Some platforms compensate for this by taking shortcuts behind the scenes that users don't see, but ultimately it leads to kneecapped AI outputs.
The highest quartile of our users still don't spend half of what a subscription costs on average, but that doesn't change the fact that people might still have the pricing anxiety that you are speaking of. So it's something I definitely need to ponder on more.
Appreciate the feedback.
reply
100 sats \ 7 replies \ @kevin 30 Apr
I like the idea, not the pricing. Way too high, not at all competitive with the raw API prices. I'd be willing to pay a premium, just not like 1000x or whatever this is :)
reply
I think you may be misunderstanding the pricing? The default invoice of 25 cents pays for many queries, not just one. The margin is actually very low currently compared to the raw API prices.
Was this possibly what happened? I think maybe we need to make the UX much clearer about what 25 cents gets you, because some other users have also voiced this. Let me know please.
reply
No I understand the pricing very well. Your margins obviously depend a lot on the underlying model and the prompt.
Opus, GPT-4 etc are on the higher end and I can maybe see it being an OK margin there.
But you can find llama3-70b APIs out there for like $0.20/million tokens.
reply
We use OpenRouter as our supplier for Llama3-70B; their price is $0.27/M tokens, and we tack on a margin after that and round up to the nearest sat. Ultimately, though, Llama3 queries on our platform rarely exceed a few sats (2/10ths of a penny), so they aren't exactly breaking the bank lol.
I guess if you are plugging in hundreds of thousands of tokens of context it starts to matter, but for 99% of normal users this is incredibly cheap for the value you are getting from AI.
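To make the arithmetic concrete, here is a rough sketch of that cost math (only the $0.27/M-token figure comes from this thread; the margin and sats-per-dollar rate are invented assumptions):

```python
import math

USD_PER_M_TOKENS = 0.27   # quoted OpenRouter rate for Llama3-70B
MARGIN = 1.20             # assumed 20% markup (illustrative)
SATS_PER_USD = 1_500      # assumed exchange rate (illustrative)

def query_cost_sats(tokens: int) -> int:
    """Token cost in USD, marked up, rounded up to the nearest sat."""
    usd = tokens / 1_000_000 * USD_PER_M_TOKENS * MARGIN
    return max(1, math.ceil(usd * SATS_PER_USD))

# A typical ~2,000-token Llama3 query works out to a single sat.
print(query_cost_sats(2_000))  # -> 1
```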
reply
Ah, I think I figured out the confusion. You saw "25 cents pays for 8-10 queries" and thought that applied to Meta Llama 3. The 8-10 refers to GPT-4 Turbo. You will get over 100 queries with Meta Llama 3.
Yea, we need to do a better job of explaining these things to people.
reply
100 sats \ 2 replies \ @kevin 30 Apr
Ok, then it makes a lot more sense. I did select llama3 from the dropdown and still got $0.25. If it's different then that should be reflected in the UI.
reply
If you select Llama from the dropdown and run some queries, you will see in the "Account Activity" section the actual price you paid. Should be 1-3 sats usually. The 25 cents is just a one time deposit to buy a bunch of credits. After that payment you are then drawing down upon that 25 cents over time. You can set it to 5 cents too if you want and that should still buy you quite a few Llama queries.
reply
I understand your flow now. You came to the website and changed from the default model of GPT-4 Turbo to Llama before you submitted your first query. Then the payment modal came up with the 8-10 queries sentence, which seemed expensive to you.
We will definitely be revamping the initial payment modal because it is very confusing to a lot of people.
Thanks for working through this with me.
I tried this on mobile and the experience was somewhat bad, plus it stole my sats.
Unleashed.chat has worked better for me
reply
Yea, I'm sorry about that. Mobile is indeed having some hard-to-pin-down issues, and we are looking into it. I know you didn't ask, but let me know if you want your sats back.
Also, if you want to do us a favor and walk us through exactly what happened in our Telegram, it would help us a lot. One of the problems with "accountless" is that it's hard to get feedback from customers when things go wrong. They just kind of silently leave.
reply
It's fine, not the first sat I've lost, and I'm sure it won't be the last.
I wrote about it here:
I can't remember the exact UI flow, but I remember being annoyed at how hard it was to pay on mobile, and once I did, I got no results and the site wanted me to keep paying.