100 sats \ 1 reply \ @k00b 30 Apr \ on: PayPerQ (PPQ.AI) AI chatbot now has Dalle3, GPT Vision, Claude3, and more! bitcoin
I like the model selection a lot.
I'd recommend unifying the pricing across models and queries as much as you can. The anxiety around price variability, the lack of control over the model's response, and the burden it puts on the prompter to optimize for cost create a lot of friction. Friction isn't always bad, but I don't think you want it here.
I'm not sure what the best way to do this would be, but maybe just charge people as if every query ran on the most expensive model with the average number of tokens. That way I'd know exactly what it costs me every time I hit 'send'.
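A rough sketch of that flat-rate idea, assuming per-token rates in sats (the model names, rates, and average token count below are made-up placeholders, not PPQ.AI's actual numbers):

```python
# Hypothetical flat-rate pricing: bill every query as if it ran on the
# priciest model with an average-length response, so the cost of hitting
# 'send' is always the same known number.

# Placeholder per-token rates in sats; real rates would come from the provider.
MODEL_RATES_SATS_PER_TOKEN = {
    "gpt-4-turbo": 0.30,
    "claude-3-opus": 0.35,
    "gpt-4-vision": 0.25,
}

AVERAGE_TOKENS_PER_QUERY = 800  # assumed platform-wide average

def flat_query_price_sats() -> int:
    """Price every query at the most expensive model's rate times the average token count."""
    worst_case_rate = max(MODEL_RATES_SATS_PER_TOKEN.values())
    return round(worst_case_rate * AVERAGE_TOKENS_PER_QUERY)

print(flat_query_price_sats())  # same fixed price shown before every send
```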
Thanks! The model selection is awesome but definitely getting to be overwhelming for the normal user, and we do need to convey pricing better somehow.
I've thought a lot about setting a fixed price, but it gets tricky: some power users run queries that demand a lot of compute, and that pushes the fixed price higher for everyone. Some platforms compensate for this by taking shortcuts behind the scenes that users don't see, but ultimately that leads to kneecapped AI outputs.
Even the highest quartile of our users spends, on average, less than half of what a subscription costs, but that doesn't change the fact that people might still feel the pricing anxiety you're describing. So it's something I definitely need to think on more.
Appreciate the feedback.