Good morning Bitcoiners!

Unfortunately, current AI platforms go against key Bitcoin values (see below). We recently launched the Beta of Sats4AI to rectify that and become Bitcoiners' favorite AI platform.

Privacy

ChatGPT has a free tier, but as the saying goes, "If it's free, you are the product". In Bitcoin culture we could also say "There is no such thing as a free lunch". Your prompts are linked to your account, saved, and used by OpenAI. While this data is not (yet?) sold for ads, it's a privacy nightmare: there have already been internal mess-ups, and the stored prompt history is a major honeypot.
Sats4AI has no accounts to link prompts to and stores absolutely no data.

Unmanipulated

Closed AI platforms are black boxes. Is the answer I'm getting coming from the actual AI model, or was it hard-coded based on the company's interests or suggested by the government? You can't really know, but there are already obvious examples of censorship.
Sats4AI uses open-source models; while they still have some biases based on their training data, they are a lot more neutral.

Permissionless

ClosedAI platforms act as gatekeepers, deciding who gets to use them and who doesn't.
  • They don't like how you are using it? You are out: OpenAI suspends ByteDance’s account.
  • Sometimes they just don't feel like having new users: OpenAI Pauses ChatGPT Plus Subscriptions for New Users.
  • Born in the wrong country? You are out! Sam Altman (OpenAI's CEO): "There are countries where we decide not to do business" (WEF 2024 interview with Ina Fried).
In addition, through no fault of their own, ChatGPT is currently banned in 15 countries with a total population of about 1.8 billion.
Sats4AI doesn't discriminate and is available worldwide to anyone with Sats.

Affordable

Text
  • Free is not really free as described in the Privacy section above.
  • ChatGPT+ is $20/month (+fee to convert to your currency if outside US) on a credit card (no credit card, no soup for you).
  • Sats4AI: 21 Sats/prompt (quick cost check below):
    • High usage: 10 prompts/day x 30 days = 300 prompts = 6300 Sats/month = around $4
    • Medium usage: 5 prompts/day x 30 days = 150 prompts = 3150 Sats/month = around $2
    • Low usage: 15 prompts/month = 315 Sats/month = around $0.20
Images
  • Midjourney is $10/month (+fee to convert to your currency if outside US) on a credit card (no credit card, no soup for you).
  • Sats4AI: 100 Sats/image:
    • High usage: 2 images/day x 30 days = 60 images = 6000 Sats/month = around $4
    • Medium usage: 1 image/day x 30 days = 30 images = 3000 Sats/month = around $2
    • Low usage: 5 images/month = 500 Sats/month = around $0.35
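For anyone who wants to sanity-check the math, here is a quick back-of-the-envelope script. The BTC/USD rate in it is an illustrative assumption (roughly $65,000), not a figure from this post; the per-use prices are the ones listed above.

```python
# Back-of-the-envelope check of the pricing above.
# BTC_USD is an illustrative assumption, not a figure quoted in this post.
SATS_PER_BTC = 100_000_000
BTC_USD = 65_000  # assumed exchange rate behind the "around $X" numbers

def monthly_cost(price_sats, uses_per_month):
    """Return (total sats, approximate USD) for a month of usage."""
    sats = price_sats * uses_per_month
    return sats, round(sats * BTC_USD / SATS_PER_BTC, 2)

print(monthly_cost(21, 10 * 30))   # high text usage: 6300 sats, about $4
print(monthly_cost(21, 15))        # low text usage: 315 sats, about $0.20
print(monthly_cost(100, 2 * 30))   # high image usage: 6000 sats, about $4
```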

This is still in beta, but please try it and send us your shrewdest feedback so we can improve!

Feel free to reach out with any feedback/comments/questions on X/Twitter or NOSTR: npub1rs2ytg24ayt05c6rcxxqv07tz8gdmwt9ukzvjt7ts8x5twync4ushuc4u6
Not my sats. I will never use or pay for a so-called "AI" that in fact is not intelligent at all. It's just a shatGPT that has nothing to do with intelligence.
reply
I understand. The name "AI" creates certain expectations that are not always met at this point. I still see a lot of value in the current tools, most notably when looking to merge multiple well-documented concepts and sources. Ex:
  1. Who was the mayor of Paris during WW2?
  2. Create an itinerary for 2 budget-conscious young people in Rome for 2 days, travelling by Vespa and staying at hotel "X".
  3. Create a list of all the "rare" languages spoken in Africa, including for each an approximation of how many people speak it and in which country.
"Search" would not give you a direct answer to any of those questions. It would require looking at multiple websites and taking notes. Obtaining such an answer almost instantly for 21 sats is, to me, incredible value at this point. I think we just need to keep playing with it until we find which use cases is it good for.
reply
Read books and load your memory with as much info as you can. That will never go away from your mind. This dependency on asking a shatGPT what to do will destroy humanity and will create only IDIOTS.
reply
200 sats \ 1 reply \ @OT 21 Mar
I'll definitely give it a go. But I think the image usage might be estimated quite low. If I'm after a particular image I might need to keep updating the prompt to get what I'm after.
reply
Thanks! Agreed that expectations need to be tempered around text-to-image. You need to be extremely descriptive about what you want and will likely need several attempts, but eventually you will get a brand-new image, free of copyright restrictions, that you can use at will in presentations, websites, social media, etc.
reply
200 sats \ 1 reply \ @davidw 20 Mar
Can see a bunch of PoW went into this, so kudos! 🚀
Can you talk a bit about all of these models you are currently using? How did you arrive at your current list?
  • Text = Mixtral 8x7B-Instruct
  • Audio = tortoise-tts model
  • Image = Stable Diffusion XL
  • ‘Vision’ = LLaVA-13b
reply
Thank you! PoW is the only way!
We are constantly monitoring for the latest and greatest models. If a new model performs better, we deploy it! We are also tracking new things (Vision is a good example) and if we see value in it, we put it on the platform.
But to answer your question specifically:
Text:
  • Mixtral 8x7B is currently the best open-source model based on our internal tests as well as multiple benchmarks. It's also very efficient, which is what allows us to charge only 21 sats per prompt.
  • We also offer a "Code" Model called Call-Llama2 70b, which can produce better results than GPT-4 on this specific task.
  • We are also looking into adding a totally uncensored model, where no subjects or topics are off-limits.
Image:
  • Stable Diffusion XL is for now the best open-source image model. Other, potentially cheaper models exist, but given the performance and limits (see the comment above) of even the best model, we don't think they actually bring a lot of value.
Vision:
  • LLaVA is an incredible model that came out just a few days after GPT-4 Vision and is the best we have tested so far. Multimodality is key to unlocking new use cases for LLMs.
Audio:
  • This is more of a "toy" model. It's fun to try but is still very limited in its current form. We just wanted to put it out there so people can see another "side" of AI.
reply
Thanks, but speaking of gatekeepers, I prefer to run these models on my own hardware.
Besides, current-gen "AI" is complete garbage and "intelligence" in name only (about as much of it as in a Quake bot).
I refuse to accept that intelligence amounts to iteratively finding the best next word to utter. The whole current approach is a dead end and we're not even close to AGI.
reply
Open-source models are key, and if you have the appropriate hardware, know-how, and time to run them yourself, that is the best approach. Most people, however, don't at this point, and that is what we are looking to address.
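For anyone curious what that DIY route can look like, here is a rough sketch using the Hugging Face transformers library to load Mixtral 8x7B-Instruct locally. It's illustrative only (it has nothing to do with how Sats4AI's backend is built), and the full-precision weights need serious GPU memory, so in practice most people would run a quantized build.

```python
# Rough sketch: loading an open-weight chat model on your own hardware with
# Hugging Face transformers. The model id is the public Mixtral 8x7B-Instruct
# checkpoint; everything else here is illustrative, not Sats4AI's backend.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" needs the `accelerate` package; full-precision weights
# require a lot of GPU memory, so a quantized build is the realistic option.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# One of the example prompts from this thread
messages = [{"role": "user", "content": "Who was the mayor of Paris during WW2?"}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```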
Agreed with you about "intelligence". I still see value in what it can do at this point, copying here my response to a similar comment above:
"I still see a lot of value in the current tools, most notably when looking to merge multiple well-documented concepts and sources. Ex: Who was the mayor of Paris during WW2? Create an itinerary for 2 young people budget-conscious in Rome for 2 days, travelling with a Vespa and staying at hotel "X". Create a list of all the "rare" languages spoken in Africa, including for each an approximation of how many people speak that language and in which country "Search" would not give you a direct answer to any of those questions. It would require looking at multiple websites and taking notes. Obtaining such an answer almost instantly for 21 sats is, to me, incredible value at this point. I think we just need to keep playing with it until we find which use cases is it good for."
reply
I mean if it ain’t broke…
reply