The focus on addictive products shows their moral compass is off
The more I learn, the more I feel this becomes the fight I need to pick.
AI companies, like social media companies before them, are focused on increasing the number of monthly active users, the average session duration, etc. Those metrics, seemingly innocuous, all serve the same instrumental goal: making the product maximally engaging.
There are two similarities to FB that I expect to apply to the large chatbot services, in particular OpenAI and Anthropic, for whom this is the main product, but at a much larger scale and to much greater effect:
  1. The people involved will become extremely rich off user addiction
  2. As a long-term outcome, a significant subset of users will eventually come to regret their use, while another subset will stay oblivious for a very long time and be harvested cycle after cycle. Two problems will make the impact of those regrets much larger with chatbots:
    • Chatbots actually "fix" something, though poorly, so people become dependent on them beyond the addiction itself
    • They have a significantly more detrimental effect on cognitive skills and behavior. Documentaries and news items about detoxing from social media are already a thing, and the process is portrayed as hard; that will be nothing compared to detoxing from chatbots.